• Cowbee [he/they]@lemmy.ml · +4 · 21 days ago

    I have never said that a process has no effect on the person performing it. You still aren’t adhering to materialism fully, even if you have improved. It’s not about being bad faith; I’ve been arguing in good faith this entire time, even as you’ve openly mocked me.

      • Cowbee [he/they]@lemmy.ml · +4 · 21 days ago

        The core of your argument seems to be that using AI, under all circumstances, is cognitively damaging. You also call it a process and not a tool, but all tools have associated processes, including correct and incorrect ones. A hammer can be misgripped, straining muscles and causing pain. You can also use a hammer for the wrong purpose, like driving a screw instead of a nail. You can kinda do it, but it’s less efficient at best and harmful at worst. AI is similar.

        • patatas@sh.itjust.worksOP · +1 / −4 · 21 days ago

          Yes, AI is cognitively damaging under all circumstances. Then you start talking about a hammer. Why not talk about the cognitive effect of AI?

          • Cowbee [he/they]@lemmy.ml · +4 · 21 days ago

            How is AI cognitively damaging under all circumstances? You just left this hanging like it’s a fact, but that requires incredible effort to prove. Is using a calculator cognitively damaging? What about a search engine? What is it about using AI that makes it cognitively damaging?

              • Cowbee [he/they]@lemmy.ml · +4 · 21 days ago

                Not a fan of the style of argument that consists of “read this thing,” but I read the whole thing. It doesn’t contradict me, it contradicts you. The article argues that machines cannot replace cognition (can’t replace art, for example), but it acknowledges right at the beginning that AI has uses, and that drawing clear lines between what’s damaging and what’s not, between what LLMs can and cannot do, is the task at hand. Your argument is that the use of AI, in all circumstances, is cognitively damaging for the individual. That is an entirely distinct argument, and one your article doesn’t back up.

                • patatas@sh.itjust.worksOP · +1 / −4 · 21 days ago

                  It absolutely backs my case up. The only reason for the existence of AI systems is to offload cognitive and creative effort. That is why the linked piece was written, to push back against the idea that it is possible! The fact that they can’t do that makes no difference to the fact that using them is only done for that purpose.

                  Literally the only thing you have consistently tried to argue that AI systems might theoretically be able to automate is ‘some stock images’.

                  Meanwhile, mountains of evidence pile up that what I am saying is true in practice.

                  The purpose of a system is what it does. These systems act as if they replace cognition (as argued in the piece) but fail to, and cause cognitive harm as a result.

                  • Cowbee [he/they]@lemmy.ml · +5 / −1 · 21 days ago

                    Quoting and bolding your own reference seems to be an easy way to counter here:

                    Computers don’t actually do anything. They don’t write, or play; they don’t even compute. Which doesn’t mean we can’t play with computers, or use them to invent, or make, or problem-solve. The new AI is unexpectedly reshaping ways of working and making, in the arts and sciences, in industry, and in warfare. We need to come to terms with the transformative promise and dangers of this new tech. But it ought to be possible to do so without succumbing to bogus claims about machine minds.

                    What could ever lead us to take seriously the thought that these devices of our own invention might actually understand, and think, and feel, or that, if not now, then later, they might one day come to open their artificial eyes thus finally to behold a shiny world of their very own? One source might simply be the sense that, now unleashed, AI is beyond our control. Fast, microscopic, distributed and astronomically complex, it is hard to understand this tech, and it is tempting to imagine that it has power over us.

                    But this is nothing new. The story of technology – from prehistory to now – has always been that of the ways we are entrained by the tools and systems that we ourselves have made. Think of the pathways we make by walking. To every tool there is a corresponding habit, that is, an automatised way of acting and being. From the humble pencil to the printing press to the internet, our human agency is enacted in part by the creation of social and technological landscapes that in turn transform what we can do, and so seem, or threaten, to govern and control us.

                    Yet it is one thing to appreciate the ways we make and remake ourselves through the cultural transformation of our worlds via tool use and technology, and another to mystify dumb matter put to work by us. If there is intelligence in the vicinity of pencils, shoes, cigarette lighters, maps or calculators, it is the intelligence of their users and inventors. The digital is no different.

                    But there is another origin of our impulse to concede mind to devices of our own invention, and this is what I focus on here: the tendency of some scientists to take for granted what can only be described as a wildly simplistic picture of human and animal cognitive life. They rely unchecked on one-sided, indeed, milquetoast conceptions of human activity, skill and cognitive accomplishment. The surreptitious substitution (to use a phrase of Edmund Husserl’s) of this thin gruel version of the mind at work – a substitution that I hope to convince you traces back to Alan Turing and the very origins of AI – is the decisive move in the conjuring trick.

                    The article is deliberately railing against mystifying AI and attributing human cognition to it, but it fully acknowledges that AI in its present state has uses. Treating those uses as distinct from human cognition, not as a replacement for it, is what matters, rather than fetishizing AI like some AI dogmatists do.

            • patatas@sh.itjust.worksOP · +1 / −4 · 21 days ago

              The fact that you think AI can create the equivalent of any human output means you yourself are a cognitive casualty of AI.

              • Cowbee [he/they]@lemmy.ml · +4 · edited · 21 days ago

                If two images are pixel-for-pixel the same, then their use is the same. I don’t appreciate being called stupid just because I don’t believe there is some metaphysical quality to a .png taken with a camera that looks exactly the same as an AI-generated output, especially for something as mundane as conveying an idea like “office worker eats corncob while laughing.” Plus, as I already told you, I don’t personally use it because I don’t have a use for it.

                • patatas@sh.itjust.worksOP · +1 / −3 · 21 days ago

                  I was not trying to call you stupid, but apologies regardless.

                  My point was meant to be that the self-perception of human cognition is changed through the use of AI systems, in such a way that one believes them to be able to replace human cognition.

                  As for the pixel-perfect recreation, I entertained that idea as a pure hypothetical, but it’s not something these systems are actually capable of, and they never will be.

                  • Cowbee [he/they]@lemmy.ml · +4 · 21 days ago

                    Isn’t that argument fundamentally based on the user misanalyzing the use case of AI, and what it can and cannot do? The article you linked argued for clearly understanding AI, its limits, and so on, not for rejecting it dogmatically in all cases.

                    As for pixel-perfect recreation, AI is improving, and will continue to improve whether or not you or I approve. The hypothetical is important because it reveals something about use-value.

                  • patatas@sh.itjust.worksOP · +1 / −2 · 21 days ago

                    So, as I said right from the beginning, the upshot is: using AI images for a banner instead of literally anything else sends the message to community members that the admins/mods do not value human cognitive and creative work any differently from the output of Markov chains or diffusion models.