Not a fan of the style of argument that consists of “read this thing,” but I read the whole thing. It doesn’t contradict me, it contradicts you. The article argues that machines cannot replace cognition, can’t replace art, for example, but acknowledges right at the beginning that it has uses, and that drawing clear lines between what’s damaging and what’s not, what LLMs can and cannot do, is the task at hand. Your argument is that the use of AI, in all circumstances, is cognitively damaging for the individual. This is an entirely distinct argument that your article doesn’t back up.
It absolutely backs my case up. The only reason AI systems exist is to offload cognitive and creative effort; that is exactly why the linked piece was written, to push back against the idea that this is even possible! That they can't actually do it makes no difference, because using them is still only done for that purpose.
Literally the only thing you have consistently argued AI systems might theoretically be able to automate is ‘some stock images’.
Meanwhile, mountains of evidence pile up that what I am saying is true in practice.
The purpose of a system is what it does. These systems act as if they replace cognition (as argued in the piece) but fail to, and cause cognitive harm as a result.
Quoting and bolding your own reference back at you seems like an easy way to counter here:
Computers don’t actually do anything. They don’t write, or play; they don’t even compute. Which doesn’t mean we can’t play with computers, or use them to invent, or make, or problem-solve. The new AI is unexpectedly reshaping ways of working and making, in the arts and sciences, in industry, and in warfare. We need to come to terms with the transformative promise and dangers of this new tech. But it ought to be possible to do so without succumbing to bogus claims about machine minds.
What could ever lead us to take seriously the thought that these devices of our own invention might actually understand, and think, and feel, or that, if not now, then later, they might one day come to open their artificial eyes thus finally to behold a shiny world of their very own? One source might simply be the sense that, now unleashed, AI is beyond our control. Fast, microscopic, distributed and astronomically complex, it is hard to understand this tech, and it is tempting to imagine that it has power over us.
But this is nothing new. The story of technology – from prehistory to now – has always been that of the ways we are entrained by the tools and systems that we ourselves have made. Think of the pathways we make by walking. To every tool there is a corresponding habit, that is, an automatised way of acting and being. From the humble pencil to the printing press to the internet, our human agency is enacted in part by the creation of social and technological landscapes that in turn transform what we can do, and so seem, or threaten, to govern and control us.
Yet it is one thing to appreciate the ways we make and remake ourselves through the cultural transformation of our worlds via tool use and technology, and another to mystify dumb matter put to work by us. If there is intelligence in the vicinity of pencils, shoes, cigarette lighters, maps or calculators, it is the intelligence of their users and inventors. The digital is no different.
But there is another origin of our impulse to concede mind to devices of our own invention, and this is what I focus on here: the tendency of some scientists to take for granted what can only be described as a wildly simplistic picture of human and animal cognitive life. They rely unchecked on one-sided, indeed, milquetoast conceptions of human activity, skill and cognitive accomplishment. The surreptitious substitution (to use a phrase of Edmund Husserl’s) of this thin gruel version of the mind at work – a substitution that I hope to convince you traces back to Alan Turing and the very origins of AI – is the decisive move in the conjuring trick.
The article deliberately rails against mystifying AI and attributing human cognition to it, but it fully acknowledges that AI, as it presently exists, has uses. What matters is keeping those uses distinct from human cognition, not treating AI as a replacement for it, and not fetishizing it the way some AI dogmatists do.
You highlight the first paragraph but then ignore the third one.
Anyway, I have never once said that AI is capable of thinking. The problem is the effect it clearly has on its users.
And no, the article does not specify use cases. It seems likely to me that they are trying to de-program AI believers, as it were, in order to allow a proper analysis, and so are underplaying the argument to give the reader a mental off-ramp from their unfounded beliefs. Or at least, I hope so, because GenAI has in fact been complete garbage at everything it's been tasked with.
I never once said you said AI is capable of thinking. I said the article is aimed at de-mystifying AI for dogmatists, as in dogmatic supporters, who think it can. Further, you've only supplied evidence that misusing AI and misunderstanding its purpose and limitations can be harmful, not that it is intrinsically damaging. The article you supplied disagrees with that idea.
This is silly. Now that it's clear the article is more in line with what I'm saying (that we need to be careful, understand its limitations, and not confuse it for cognition, but that we can still use it), you're just calling that a mental off-ramp. Here is the actual text:
Computers don’t actually do anything. They don’t write, or play; they don’t even compute. Which doesn’t mean we can’t play with computers, or use them to invent, or make, or problem-solve. The new AI is unexpectedly reshaping ways of working and making, in the arts and sciences, in industry, and in warfare. We need to come to terms with the transformative promise and dangers of this new tech. But it ought to be possible to do so without succumbing to bogus claims about machine minds.
It directly states that there is transformative promise in AI, and that it's reshaping how we work and make in the arts, sciences, industry, and warfare. Message the author if you want to check whether they were just providing a mental off-ramp, but I'm going to take the author at the text as written, which is a more grounded and materialist analysis than yours.
The author gave zero examples, so I have to assume it was rhetorical. Either way, I'm not required to agree with every single statement they made. My takeaway is as I have described.
Here, again:
https://aeon.co/essays/can-computers-think-no-they-cant-actually-do-anything
I think this thread has hit a Shitty Ask Lemmy record
Probably, lmao. It’s a stupid convo too
I didn’t ignore it; you took the third paragraph as the only point and ignored everything I highlighted.
See my edits which I was still typing when you replied