generative AI literally makes me feel like a boomer. people start talking about how it can be good to help you brainstorm ideas and i’m like oh you’re letting a computer do the hard work and thinking for you???
—nonbinaryelphaba
headspace-hotel
There are many difficult things that were replaced with technology, and it wasn’t a bad thing. The washing machine replaces washing clothes by hand. Nothing wrong with that. The spinning wheel replaces the drop spindle. Nothing wrong with that.
Generative AI replaces thinking. The ability to think for yourself will always be important. People who want to control and oppress you want to limit your ability to think for yourself as much as possible, but continuing to practice it allows you to resist them.
mikkeneko
“This tool replaces thinking” is a technology problem we (humans) have faced before. It’s a snark I’ve seen pro-AI commenters make as well: I bet these same people would have complained about calculators! And books!
Well. They did, at the time.
We have records from centuries – even millennia back – of scholars at the time complaining that these new-fangled “books” were turning their students lazy; why, they can barely recite any poems in their entirety any more! And there are people still alive today who remember life before widely available calculators, and some of them complained – then and now – that bringing them into schools dealt a ruinous blow to math education, and now these young people don’t even know how to use a slide-rule.
And the thing is:
They weren’t wrong.
The human brain can, when called on, perform incredible feats of memorization. Bards and skalds of old could memorize and recite poems and epics that were thousands of lines long. This is a skill that is largely lost to most of the population. It’s not needed any more, and so it is not practiced.
There is a definite generational gap between the people who were trained on slide-rules and reckoning and the generation that was taught on calculators. There came a year, when that first generation grew up and entered the workforce, when you suddenly started encountering grown adults who could not do math – not even the very basic arithmetic needed to count back change from one hundred dollars. I would go into a shop, buy an item for sixteen dollars, give the cashier a twenty and a one because I wanted a fiver back, and have them stare at the money in incomprehension – what do? They don’t know how to subtract sixteen from twenty-one. They don’t know how to calculate a fifteen-percent tip. They did not exercise the parts of their brain that handle this, because they always had a calculator to do it for them.
Nowadays, newer point-of-sale machines compensate for this; they will automatically calculate and dispense the change, no subtraction necessary on the part of the operator. Nowadays everyone carries a phone, and every phone carries a calculator, so if you need to do these calculations, the tool is right there. As more and more transactions go electronic and card, and cash fades further and further out of daily life, these situations happen less and less; it’s not a problem that most people can’t do math (until it is).
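To spell out what the machine took over, here is a minimal sketch of that change-making arithmetic – Python, with a function name and denominations of my own choosing, purely illustrative, not from any particular point-of-sale system:

```python
DENOMINATIONS = [20, 10, 5, 1]  # US bills, largest first

def count_back_change(price, tendered):
    """Return the bills to hand back, greedily, largest first."""
    change = tendered - price
    if change < 0:
        raise ValueError("not enough money tendered")
    bills = []
    for bill in DENOMINATIONS:
        while change >= bill:
            bills.append(bill)
            change -= bill
    return bills

# The scenario above: a $16 item, paid with a twenty and a one.
print(count_back_change(16, 21))  # [5] -- a single fiver back
```

A greedy pass over the denominations is all it takes; the point is that this tiny loop now lives in the register instead of in the cashier’s head.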
The people who complained that these tools-that-replace-thinking would reduce the ability of the broad population to exercise these cognitive skills weren’t wrong. It’s simply that, as the pace of life changed, the environment changed so that in day-to-day life these skills were largely unnecessary.
So.
Isn’t this, ChatGPT and Generative AI, just the latest in a long series of tool-replaces-thought that has, broadly, worked out well for us? What’s different about this?
Well, two things are different.
- In the previous instances of tool-replaces-thinking, the cognitive skill that it replaced was a discrete and, on a day-to-day basis, unnecessary outlay of energy. Most people don’t need to memorize thousands of lines of poetry, or anything else for that matter. Most people don’t need to do more than cursory levels of math on a day-to-day basis.
This, however, is different. The cognitive skill that is being obsoleted here is more than “how to write essay” or “identify what is the capital of Rhode Island.” It encompasses the entire field of being able to generate new thoughts; of being able to consider and analyze new information; of being able to follow logical trains to their conclusions; of being able to order your thoughts to construct rational arguments; or indeed of being able to express yourself in any structured way. These cognitive tools are not occasional use; they are every day, all the time.
- In the previous instances of tool-replaces-thinking, the tool was good at what it did.
Calculators may have replaced reckoning, but calculators are also pretty good at what they do. The calculator will, as long as you give it the right input, give the right answer. ChatGPT cannot be relied on to do this. ChatGPT will tell you, confidently and unhesitatingly and dangerously, that 2+2=5, and it will not care that it is wrong.
Books may have replaced memorization, and books certainly could be wrong; but a fact, once in a book, is pretty stable and steady. There is not a risk that the Guy Who Owns All The Encyclopedias might wake up one day and decide – to pick a purely hypothetical example – that the Gulf of Mexico is called something else, and suddenly all the encyclopedias say that.
Generative AI fails on both these counts. It fails on every count. It’s inaccurate, it’s unethical, it’s unreliable, it’s wrong.
I remember some time ago seeing someone say (it was a video about medieval footwear, actually) that “humans have a great energy-saving system: if we can be lazy about something, we are.”
This is not an ethical judgment about humans; this is how life works. Animals – including humans – will not do something the hard way if they can do it the easy way; this basic principle of conservation of resources is universal and morally neutral. Cognition is biologically expensive, and though our environment is not what it once was, every person still goes through every day choosing what is valuable enough to expend resources on and what is not.
Because of this, I don’t know if there is any solution, here. I think pushing back against the downhill flush of the-easy-way-out is a battle both uphill and against the tide.
So I’ll just close with this warning, instead:
Generative AI is a tool that cannot be trusted. Do not use it to replace thought.
calamity-cain
i’ve been waiting for a more nuanced take on generative AI and it’s finally here
haveasnickerss
I’m forever thankful that even though I grew up in the calculator era, I was taught and encouraged at school to do math by hand. I only started using the calculator for more complex math and physics. Otherwise, use your brain; it’s there for a reason.
Although AI can be useful, it does not replace thinking. Critical thinking is so important, and it helps with basic problem solving. It’s something that is getting lost, and it’s a basic survival skill. It’s happening; I’ve seen it. People would look at me like I’d grown a second head, like I was a know-it-all genius with mystical powers, just bc I gave a simple solution to a problem. And they weren’t complicated problems, just everyday problems with easy solutions that you only needed to pause and think for two seconds to find.
Also, as was said before, AI gives you wrong answers. It does not care. It will lie to your face.
Doing things with AI gives you no source, no credibility; it’s the “easy lazy” way. But you don’t learn. It deeply hurts me to see kids today using chatgpt to do their homework. It doesn’t work like that. They won’t be able to do basic things if they don’t learn to think and make an effort from an early age.
If AI is used correctly it can become a good useful tool, mostly to save time, but it’s difficult to find the balance.
I’ll admit that I’ve used AI for schoolwork, but never to do my work for me. Never to write for me. I’ve used it to narrow my search field (like once, when I was doing a research project, I asked chatgpt for authors and books about that specific subject, bc it was very specific and I had no clue where to start looking; asking it for books, then reading said books and using them for my research so I actually had sources and could compare authors and opinions, was a responsible and good way to use the AI).
The difference between books, calculators and generative AI is that the first two allow you to make certain failures, while the latter produces failures of certainty.
You can write incorrectly; you can misread or misunderstand a book. When a book itself contains an error, it tends to be obvious. You can misuse a calculator, and the failure is almost always your failure to properly use the tool.
With books and calculators, failure is almost always a matter of the skill and execution of the user, not the tool.
With AI, every single output is dynamic; every single response carries an unknown level of inaccuracy. Even flawless use of the tool inherently includes an undetectable failure rate. People can no longer rely on the tool to correct itself or their methods, and that’s why they can’t learn from it without first knowing not just how to learn, but how to learn from an unreliable source.
“Undetectable failure rate”? Not at all: we can detect failures by critically reading the output, just as we would with, say, a newspaper article. I think part of the danger is that we might start to think of an LLM as a reasoning calculator, rather than treating it like a person: biased, conflicted, malicious, whatever.
I only know facts. Fact 1… average IQ is dropping each year in North America.
That’s a confusing way to describe it, and it also seems to have no impact on this discussion. IQ is standardized every year or so to always average 100. You’re probably referring to people in the 21st century getting lower scores on older tests, which is called the “reverse Flynn effect”. And it’s called the reverse Flynn effect for a reason: it’s a new observation, only in the 21st century, while the Flynn effect – where people scored higher on older tests – was observed for much longer, throughout the 20th century, which saw a ton of innovation as well; in the USA, most notably the societal upheaval into suburbanism and consumerism, among other things. And even the reverse Flynn effect has so far been observed independently of, and before, AI – before the prospect of the replacement of thinking.
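To make the standardization point concrete, here is a toy sketch – Python, with invented raw scores, not real data – of how norming pins the average at 100, so “average IQ is dropping” can only mean “people score lower against older norms”:

```python
import statistics

def iq_scores(raw_scores, norm_mean, norm_sd):
    """Convert raw test scores to IQ under a given norm (mean 100, SD 15)."""
    return [100 + 15 * (r - norm_mean) / norm_sd for r in raw_scores]

# Invented raw scores for a present-day cohort.
raw_today = [48, 52, 55, 60, 45]
mean_today = statistics.mean(raw_today)   # 52.0
sd_today = statistics.stdev(raw_today)

# Normed against their own cohort, the average IQ is 100 by construction.
print(statistics.mean(iq_scores(raw_today, mean_today, sd_today)))  # 100.0

# Scored against an older test whose norm mean was (say) higher, the same
# people average below 100 -- the "reverse Flynn effect" pattern.
print(statistics.mean(iq_scores(raw_today, 55, sd_today)))  # ~92
```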
I don’t think that AI necessarily replaces thinking.
I think it replaces calculation and cogitation, which are each important components of thinking, but are not thinking in and of themselves.
Don’t get me wrong, I’m not an AI bro. I don’t think it’s necessarily a good thing, but I think that AI is a tool, and as such, it has its uses, just like auto-tune, and digital recording, and computer art programs like Photoshop and Krita.
Honestly, I have yet to hear a single argument against AI that will still be valid in 25 years.
People are still going to think.
People are still going to make art.
People are still going to make music.
People are still going to write, and tell stories.
AI is just going to be one of the tools they use to do those things.
It’s going to help unskilled people easily do things that currently require skill.
And that’s not a bad thing, in and of itself.
I think the argument remains intact if you just s/thinking/cogitation/. The argument is that it can replace cogitation and shouldn’t, not that it shouldn’t be used at all. Plus, it’s non-deterministic, unlike the other tools.
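To spell out the non-determinism bit: a calculator is a pure function, while an LLM samples its next token from a probability distribution, so identical inputs need not give identical outputs. A toy sketch – the distribution here is invented, not taken from any real model:

```python
import random

def calculator(a, b):
    """Deterministic tool: same input, same output, every time."""
    return a + b

# Toy stand-in for an LLM's output layer: a probability distribution over
# possible next tokens for the prompt "2 + 2 =". Numbers are invented;
# real models produce a distribution over ~100k tokens.
NEXT_TOKEN_PROBS = {"4": 0.90, "5": 0.06, "four": 0.04}

def toy_llm(prompt):
    """Non-deterministic tool: samples from the distribution on each call.
    (The prompt is ignored in this toy; it exists only to mirror the API.)"""
    tokens, weights = zip(*NEXT_TOKEN_PROBS.items())
    return random.choices(tokens, weights=weights)[0]

print(calculator(2, 2))                          # always 4
print([toy_llm("2 + 2 =") for _ in range(10)])
# e.g. ['4', '4', '5', '4', '4', '4', 'four', '4', '4', '4']
# Usually right, occasionally and confidently wrong -- and nothing in the
# output marks which runs are the wrong ones.
```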
As a personal philosophy, I like to learn how something works before I allow myself to take it for granted. As a few examples: I can do math by hand, wash my own clothes with homemade detergent, write a short story or poem, draw/illustrate with pencil pretty well, and I can even build a log cabin with hand tools. Only then do I allow myself to use calculators, washers, generative AI, and modern power tools to perform those skills for me. I can now embrace these new technologies from a standpoint of knowing what they made easier, AI included. Of course many in general society will not think this way, but this is how I tackle the problem from an introspective angle.
There are a lot of people who say that using AI makes you dumber, and that’s reasonably true in one sense, but ultimately it’s reducing the kinds of work you need to do. This is a trend in humanity – photography replaced photorealistic painters; farm automation replaced manual labor. At every step, knowledge was in fact lost to society. But is that really a problem, that we lose obsoleted modes of work in favor of automated systems that solve them forever? New skills emerge: people who know how to design factories, photographic artists.
I think people are afraid of AI removing jobs from the workforce, but what it’s really doing is making the workforce more efficient. The total amount of product can go up; that’s fine. Jobs like coding now look more like architectural design jobs rather than typing jobs. Creative work and original ideas will shine. New jobs will be created. Nothing new is going on.
Jobs like coding now look more like architectural design jobs rather than typing jobs.
I don’t think you’re a programmer. The major part of programming has always been design, not syntax.
I’m not sure if you’ve even read the post. The entire point of this thread is that AI shouldn’t be replacing thinking – that it shouldn’t replace the architectural design, the creativity, or the generation of original ideas, which is what people are using it for. Photography did not replace framing and composition; it limited stroke style but gave the benefits of authenticity (which I admit was short-lived) and speed. AI gives the benefit of speed while heavily compromising the creativity and reliability of its output, and it should not substitute for the ability to think for yourself. Not that you can’t use it for clerical tasks.
I did read the post, and if you think syntax and implementation weren’t a huge part of coding, then I’m not sure you’re a programmer either. Design is quick; writing it all out & integrating libs & figuring out bugs is slow.
Implementation is a huge part, but not the major part. Figuring out bugs is part of design, and implementation probably seems to take more time because one hasn’t thought about the design and is designing it on the fly, which is what we often do. Figuring out APIs also takes time, but not nearly as much as all the design in engineering.
I don’t think anything in the post applies to using AI to figure out syntax.