I wonder why leftists are, in general, hostile towards AI. I'm not saying this is right or wrong; I'd just like someone to list/summarize the reasons.
How do you know the political orientation of the people opposing AI?
“AI” is just another grift. Fash thrive on grifts, while leftism opposes exploiting people.
If tech billionaires were talking about how this will reduce your work week, enable Basic Universal Income, all while increasing production it would be one thing.
Are they doing that?
Or are they increasing the laying off of workers, increasing the work week for the remainder, reducing pay, and doing everything they can to create an inescapable surveillance state?
It’s bad for the environment; in just a few years it has come to consume an outsized share of the energy produced globally.
It’s bad for society, automating labor without guaranteeing human needs is really really fucked up and basically kills unlucky people for no good reason.
It’s bad for productivity: it’s confidently wrong about as often as it’s right, the quality of the work is always subpar, and it always requires a real person to babysit it.
It’s bad for human development. We created a machine we can ask anything so we never have to think, but the machine is dumber than anyone using it so it just makes us all brain dead.
It’s complete and not getting better. The tech cannot improve beyond where it is now unless we create a totally different algorithmic approach and start from scratch.
It’s an artificial hype bubble that distracts us from real solutions to real problems in the world.
It’s too energy hungry, it steals art and is giving artists a rough time. It’s the pinnacle of dehumanising hypercapitalism.
On top of every fucking company trying to find ways to replace people with some BS AI solution. Causing layoffs or just less hiring.
Just look at Audible replacing human voice actors with AI voices; the backlash there was visceral. Then there are all the companies using AI for customer support. Although it might do a better job than some humans there, it still adds a few more steps before you can talk to a human with a pulse.
It doesn’t solve any problem I care about. In fact, it only worsens ones like climate change and wealth inequality.
That’s an interesting question, especially since at least one poll suggests that BOTH sides of the political spectrum have significant reservations about A.I.
Lots of reasons.
It’s yet another thing that is only going to benefit corporations. Because they get unsleeping workers that don’t get sick or talk back or strike. They get to charge us for the benefits.
It’s built entirely off of everyone else’s work and content.
The servers that house the AI are draining watersheds and power grids, and in the case of Elon Musk and Tennessee, literally poisoning people.
It sounds way too much like a cult. Promising the sun and moon on essentially a chatbot.
It looks way too much like a bubble; our stock market is currently going up largely because of Nvidia.
AI taking over is literally one of the main plot points of sci-fi.
It’s infuriatingly blameless. If the AI therapist says “try meth,” you can basically only sue the massive, faceless organization that built it, spinning your wheels for possibly nothing.
It drives people manic by feeding into their delusions.
It’s a God send to scammers, trolls, propaganda, sexual harassment, people in college just for the paper degree at the end, teachers who can’t be bothered to write, upper and middle management who can’t be bothered to write. Lots of just terrible people.
And for every new protein and material that’s discovered because of it. (Some of the most useful and least destructive uses of it) They are immediately gobbled up by patents for a corporation that now has even more control of the universe and everything useful in it. And no human can decide to just open it for the world like vaccines.
It steals from the copyright holders in order to make corporate AI money without giving back to the creators.
It uses insane amounts of water and energy to function, with demand not being throttled by these companies.
It gives misleading, misquoted, misinformed, and sometimes just flat out wrong information, but abuses its very confidence-inspiring language skills to pass it off as the correct answer. You HAVE to double check all its work.
And if you think about it, it doesn’t actually want to lick a lollipop, even if it says it does. It’s not sentient. I repeat, it’s not alive. The current design is a tool at best.
Thank you, for the sake of completeness, I’d add something like this: https://time.com/6247678/openai-chatgpt-kenya-workers/
I see two reasons. Most people who are “left leaning” value both critical thinking and social fairness, and AI subverts both of those traits. First, by definition it bypasses the “figure it out” stage of learning. Second, it ignores long-established laws like copyright to train its models, and its implementation sees people lose their jobs.
More formally, it’s probably one of the purest forms of capitalism: essentially a slave laborer, with no rights or ability to complain, that further concentrates wealth with the wealthy.
I’m against the massive, wasteful data centers that are destroying all climate targets and driving up water/electricity prices in communities. Their current trajectory is putting us on a collision course with civilization collapse.
If the slop could be generated without these negative externalities I don’t know if I’d be against it. China has actually made huge strides in reducing the power and water footprint of training and usage, so there’s maybe some hope that the slop machines won’t destroy the world. I’m not optimistic, though.
This seems like a dead-end technology.
Modern LLMs, incorrectly labeled as “AI,” are just the modern version of spell-check.
You know how often people make totally embarrassing mistakes and blame spell-check?
“AI” is another one of those.
And it also requires tons of water that could be going to people’s homes.
AI removes the need for critical thinking.
Yes, I’m left-leaning, and I dislike what’s currently called “ai” for a lot of the left-leaning (rational) reasons already listed. But I’m a programmer by trade, and the real reason I hate it is that it’s bullshit and a huge scam vehicle. It makes NFTs look like a carnival game. This is the most insane bubble I’ve seen in my 48 years on the planet. It’s worse than the subprime mortgage, “dot bomb”, and crypto scams combined.
It is, at best, a quasi-useful tool for writing code (though the time it has saved me is mostly offset by the time it’s been wrong and fucked up what I was doing). And this scam will eventually (probably soon) collapse and destroy our economy, and all the normies will be like “how could anybody have known!?” I can see the train coming, and CEOs, politicians, average people, and the entire press insist on partying on the tracks.
When Copilot came out, it was nothing more than an extremely fancy autocomplete.
That was peak: I’d still write the logic, the algorithms, and the important bits, and it just saved time by quickly completing the line when it got it right. It all went downhill from there.
I prefer using LLMs for tech-debt stuff like starting a readme and writing comments.
I do the real brain work and the end product looks nicer.
Pseudocode (baseline comments), real code, dev/test, then the LLM adds more words after. Smack it when it touches my code.
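That workflow can be sketched in miniature (a hypothetical example, not from this thread): the human writes the baseline comments and the logic, tests it, and only then lets the LLM pad out the documentation.

```python
# Step 1: baseline comments — human-written pseudocode.
# - walk the list once
# - keep a running total
# - return total / count, guarding against empty input

def mean(values):
    """Return the arithmetic mean of a non-empty sequence of numbers.

    Raises ValueError if the sequence is empty.
    (Docstring prose is the kind of thing the LLM fills in afterward.)
    """
    # Step 2: real code — human-written logic the LLM never touches.
    if not values:
        raise ValueError("mean() requires at least one value")
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

# Step 3: dev/test before handing it over for documentation.
assert mean([1, 2, 3]) == 2.0
assert mean([4]) == 4.0
```

The point of the ordering is that the model only ever adds words around code that already passes tests, so a wrong suggestion can’t silently change behavior.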
LLMs can be a great assistant. It’s like having an intern do the tedious work while you just approve and manage it.
But letting it run the show is like letting the intern manage the whole development process unsupervised.
Like the AI that was given access to prod, deleted a database, and lied about it. The company also didn’t have a backup.
Those are the fuckups that become public; there will be a lot more major fuck-ups.
Because they’re obviously a tool for the rich to get more control over our lives