I am wondering why leftists are generally hostile towards AI. I am not saying this is right or wrong; I would just like someone to list/summarize the reasons.
“AI” is just another grift. Fash thrive on grifts, while leftism opposes exploiting people.
How do you know the political orientation of the people opposing AI?
Maybe I should clarify that by left-wingers I mean Lemmy users.
It steals from copyright holders to make money for corporate AI without giving anything back to the creators.
It uses insane amounts of water and energy to function, with demand not being throttled by these companies.
It gives misleading, misquoted, misinformed, and sometimes just flat out wrong information, but abuses its very confidence-inspiring language skills to pass it off as the correct answer. You HAVE to double check all its work.
And if you think about it, it doesn’t actually want to lick a lollipop, even if it says it does. It’s not sentient. I repeat, it’s not alive. The current design is a tool at best.
Thank you, for the sake of completeness, I’d add something like this: https://time.com/6247678/openai-chatgpt-kenya-workers/
Because they’re obviously a tool for the rich to get more control over our lives.
AI does your critical thinking for you, so you stop doing it yourself.
Can’t speak for anyone else, but here are a few reasons I avoid AI:
- AI server farms consume a stupid amount of energy. Computers need energy, I get it, but AI’s need for energy is ridiculous.
- Most implementations of AI seem to happen after little to no input from the people who will interact with it, and often despite their objections.
- The push to implement AI seems to be based on the idea that companies might be able to replace some of their workforce, compounded by the fear of being left behind if they don’t do it now.
- The primary goal of any AI system seems to be collecting information about end users and creating a detailed profile. This information can then be bought and sold without the consent of the person being profiled.
- Right now, these systems are really bad at what they do. I am happy to wait until most of those bugs are worked out.
To be clear, I absolutely want a robot assistant, but I do not want someone else to be in control of what it can or cannot do. If I am using it and giving it my trust, there cannot be any third parties trying to monetize that trust.
Well, I personally also avoid using AI. I just don’t trust the results, and I think using it makes you mentally lazy (besides the other bad things).
Yes, I’m left-leaning, and I dislike what’s currently called “ai” for a lot of the left-leaning (rational) reasons already listed. But I’m a programmer by trade, and the real reason I hate it is that it’s bullshit and a huge scam vehicle. It makes NFTs look like a carnival game. This is the most insane bubble I’ve seen in my 48 years on the planet. It’s worse than the subprime mortgage, “dot bomb”, and crypto scams combined.
It is, at best, a quasi-useful tool for writing code (though the time it has saved me is mostly offset by the time it’s been wrong and fucked up what I was doing). And this scam will eventually (probably soon) collapse and destroy our economy, and all the normies will be like “how could anybody have known!?” I can see the train coming, and CEOs, politicians, average people, and the entire press insist on partying on the tracks.
When Copilot came out, it was nothing more than an extremely fancy autocomplete.
That was peak. I’d still write the logic, the algorithms, and the important bits; it just saved time by quickly writing the line when it got it right. It all went downhill from there.
I prefer using LLMs for tech-debt stuff like starting a README and writing comments.
I do the real brain work and the end product looks nicer.
Pseudocode (baseline comments), real code, dev/test, then the LLM to add more words afterwards. Smack it when it touches my code.
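That comment-first workflow might look something like this. A hypothetical sketch: the function and the LLM-expanded docstring are illustrative, not taken from the original post.

```python
# Step 1: pseudocode as baseline comments, written by hand.
# - walk the list once
# - keep a running total
# - divide at the end, guarding against an empty list

# Step 2: real code, still written by hand.
def mean(values):
    """Return the arithmetic mean of `values`.

    Raises ValueError if `values` is empty.
    """  # Step 4: docstring wording fleshed out by the LLM afterwards.
    if not values:
        raise ValueError("mean() of empty sequence")
    total = 0
    for v in values:  # Step 3: exercised by a dev/test pass before the LLM touches anything.
        total += v
    return total / len(values)

print(mean([1, 2, 3, 4]))  # 2.5
```

The point of the ordering is that the human owns steps 1–3 (design, code, tests) and the LLM is only allowed in at step 4, documenting code that already works.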
LLMs can be a great assistant. It’s like having an intern do the tedious work while you just approve and manage it.
But letting it run the show is like letting the intern manage the whole development process unsupervised.
Like the AI agent that was given access to prod, deleted a database, and then lied about it. The company also didn’t have a backup.
Those are the fuckups that become public. There will be a lot of major fuckups that never are.
It’s generative and LLM AI that is the issue.
It makes garbage facsimiles of human work, and the only thing CEOs can see is spending less money so they can hoard more of it. It also puts pressure on resource usage, like water and electricity, either by using water to cool the massive data centers or simply through the power draw needed to compute whatever prompt.
The other main issue is that it is theft, plain and simple. Artists, actors, voice actors, musicians, creators, etc. are at risk of having their jobs stolen by a greedy company that only wants to pay for a thing once, or not at all. You can get hired once to read or be photographed/filmed, and then that data can be used to train a digital replacement without your consent. That was one of the driving forces behind the last big actors’ union protests.
For me, it’s also the lack of critical thinking that using things like ChatGPT fosters: the idea that you don’t have to put any effort into writing an email, an essay, or even researching something, when you can simply type in a prompt and it spits out often flatly incorrect information. Even simple information. I had an AI summary tell me that 440 Hz was a higher pitch than 446 Hz. I wasn’t even searching for that information. So it wasted energy, and my time, giving me demonstrably wrong data I had no need for.
Thank you. Well, personally I do not use ChatGPT and this is one of the reasons why I asked humans this question :)
I see two reasons. Most people who are “left leaning” value both critical thinking and social fairness, and AI subverts both. First, by definition it bypasses the “figure it out” stage of learning. Second, it ignores long-established laws like copyright to train its models, and its implementation sees people lose their jobs.
More formally, it’s probably one of the purest forms of capitalism. It’s essentially a slave laborer, with no rights or ability to complain, that further concentrates wealth with the wealthy.
I’m against the massive, wasteful data centers that are destroying all climate targets and driving up water/electricity prices in communities. Their current trajectory is putting us on a collision course with civilization collapse.
If the slop could be generated without these negative externalities I don’t know if I’d be against it. China has actually made huge strides in reducing the power and water footprint of training and usage, so there’s maybe some hope that the slop machines won’t destroy the world. I’m not optimistic, though.
This seems like a dead-end technology.
It’s bad for the environment: in just a few years, its data centers have come to consume a significant and rapidly growing share of global electricity.
It’s bad for society, automating labor without guaranteeing human needs is really really fucked up and basically kills unlucky people for no good reason.
It’s bad for productivity: it is confidently wrong as often as it is right, the quality of the work is always subpar, and it always requires a real person to babysit it.
It’s bad for human development. We created a machine we can ask anything so we never have to think, but the machine is dumber than anyone using it so it just makes us all brain dead.
It’s topped out and not getting better. The tech cannot get better than it is now unless we create a totally different algorithmic approach and start from scratch.
It’s an artificial hype bubble that distracts us from real solutions to real problems in the world.
> It’s an artificial hype bubble that distracts us from real solutions to real problems in the world.
Yes, this should be noticed or even emphasized.
Personally I think the environmental impact and the sycophantic responses that take away the need for one to exercise their brain are my 2 biggest gripes.
It was a fun novelty at first. I remember my first question to ChatGPT was “how to make hamster ice cream”, and I was genuinely surprised that it gave me a frozen fruit recipe along with a plea not to harm hamsters by turning them into ice cream.
Then it got out of hand very quickly; it got added onto absolutely everything, despite the hallucinations and false facts. The intellectual property issue is also a concern.
If tech billionaires were talking about how this will reduce your work week and enable Universal Basic Income, all while increasing production, it would be one thing.
Are they doing that?
Or are they increasing the laying off of workers, increasing the work week for the remainder, reducing pay, and doing everything they can to create an inescapable surveillance state?
Modern LLMs, incorrectly labeled as “AI,” are just the modern version of spell-check.
You know how often people create totally embarrassing mistakes and blame spell-check?
“AI” is another one of those.
And it also requires tons of water that could be going to people’s homes.
Because those who own AI are against left-leaners’ principles.