I am wondering why leftists are, in general, hostile towards AI. I am not saying this is right or wrong; I would just like someone to list or summarize the reasons.

  • 9tr6gyp3@lemmy.world · 129 upvotes · 8 days ago

    It steals from copyright holders in order to make money for AI corporations, without giving anything back to the creators.

    It uses insane amounts of water and energy to function, with demand not being throttled by these companies.

    It gives misleading, misquoted, misinformed, and sometimes just flat out wrong information, but abuses its very confidence-inspiring language skills to pass it off as the correct answer. You HAVE to double check all its work.

    And if you think about it, it doesn’t actually want to lick a lollipop, even if it says it does. It’s not sentient. I repeat, it’s not alive. The current design is a tool at best.

  • 20cello@lemmy.world · 76 upvotes · 8 days ago

    Because it’s obviously a tool for the rich to get more control over our lives.

  • BillDaCatt@lemmy.world · 53 upvotes · edited · 8 days ago

    Can’t speak for anyone else, but here are a few reasons I avoid AI:

    • AI server farms consume a stupid amount of energy. Computers need energy, I get it, but AI’s need for energy is ridiculous.

    • Most implementations of AI seem to happen with little to no input from the people who will actually interact with it, and often over their objections.

    • The push to implement AI seems driven by the idea that companies might be able to replace some of their workforce, compounded by the fear of being left behind if they don’t do it now.

    • The primary goal of any AI system seems to be collecting information about end users and building detailed profiles. That information can then be bought and sold without the consent of the person being profiled.

    • Right now, these systems are really bad at what they do. I am happy to wait until most of those bugs are worked out.

    To be clear, I absolutely want a robot assistant, but I do not want someone else to be in control of what it can or cannot do. If I am using it and giving it my trust, there cannot be any third parties trying to monetize that trust.

    • 33550336@lemmy.world (OP) · 12 upvotes · 8 days ago

      Well, I personally also avoid using AI. I just don’t trust the results, and I think using it makes people mentally lazy (besides the other bad things).

  • baggachipz@sh.itjust.works · 48 upvotes · 8 days ago

    Yes, I’m left-leaning, and I dislike what’s currently called “ai” for a lot of the left-leaning (rational) reasons already listed. But I’m a programmer by trade, and the real reason I hate it is that it’s bullshit and a huge scam vehicle. It makes NFTs look like a carnival game. This is the most insane bubble I’ve seen in my 48 years on the planet. It’s worse than the subprime mortgage, “dot bomb”, and crypto scams combined.

    It is, at best, a quasi-useful tool for writing code (though the time it has saved me is mostly offset by the time it’s been wrong and fucked up what I was doing). And this scam will eventually (probably soon) collapse and destroy our economy, and all the normies will be like “how could anybody have known!?” I can see the train coming, and CEOs, politicians, average people, and the entire press insist on partying on the tracks.

  • ninjabard@lemmy.world · 35 upvotes · edited · 8 days ago

    It’s generative AI and LLMs that are the issue.

    It makes garbage facsimiles of human work, and the only thing CEOs can see is spending less money so they can hoard more of it. It also puts pressure on resources like water and electricity, whether for cooling the massive data centers or simply through the power draw needed to compute whatever prompt.

    The other main issue is that it is theft, plain and simple. Artists, actors, voice actors, musicians, creators, etc. are at risk of having their jobs stolen by a greedy company that only wants to pay for a thing once or not at all. You can get hired once to read or be photographed/videoed, and then that data can be used to train a digital replacement without your consent. That was one of the driving forces behind the last big actors’ union protests.

    For me, it’s also the lack of critical thinking skills that using things like ChatGPT fosters: the idea that one doesn’t have to put any effort into writing an email, an essay, or even researching something when you can simply type in a prompt and have it spit out mostly incorrect information. Even simple information. I had an AI summary tell me that 440 Hz was a higher pitch than 446 Hz. I wasn’t even searching for that information. So it wasted energy and my time giving me demonstrably wrong data I had no need for.
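    (For reference, the relationship that summary got backwards is simple: pitch rises with frequency, so 446 Hz sounds slightly higher than 440 Hz, the standard A4. A minimal Python sketch of that comparison, with a helper name of my own choosing:)

    ```python
    import math

    def cents_between(f1_hz: float, f2_hz: float) -> float:
        """Interval from f1 to f2 in cents; a positive result means f2 is the higher pitch."""
        return 1200 * math.log2(f2_hz / f1_hz)

    # Higher frequency means higher pitch, so 446 Hz sits above 440 Hz (standard A4).
    print(cents_between(440, 446))  # ~23.4 cents: 446 Hz is slightly higher, not lower
    ```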

    • 33550336@lemmy.world (OP) · 6 upvotes · 8 days ago

      Thank you. Well, personally I do not use ChatGPT and this is one of the reasons why I asked humans this question :)

  • alexc@lemmy.world · 34 upvotes · 8 days ago

    I see two reasons. Most people who are “left leaning” value both critical thinking and social fairness, and AI subverts both. First, by definition it bypasses the “figure it out” stage of learning. Second, it ignores long-established laws like copyright to train its models, and its implementation sees people lose their jobs.

    More formally, it’s probably one of the purest forms of capitalism: essentially a slave laborer, with no rights or ability to complain, that further concentrates wealth with the wealthy.

  • queermunist she/her@lemmy.ml · 31 upvotes · 8 days ago

    I’m against the massive, wasteful data centers that are destroying all climate targets and driving up water/electricity prices in communities. Their current trajectory is putting us on a collision course with civilization collapse.

    If the slop could be generated without these negative externalities, I don’t know if I’d be against it. China has actually made huge strides in reducing the power and water footprint of training and usage, so there’s maybe some hope that the slop machines won’t destroy the world. I’m not optimistic, though.

    This seems like a dead-end technology.

  • SoftestSapphic@lemmy.world · 26 upvotes · edited · 8 days ago

    It’s bad for the environment; in just a few years it has grown to consume an enormous and still-increasing share of the energy produced globally.

    It’s bad for society, automating labor without guaranteeing human needs is really really fucked up and basically kills unlucky people for no good reason.

    It’s bad for productivity: it is confidently wrong just as often as it is right, the quality of the work is always subpar, and it always requires a real person to baby it.

    It’s bad for human development. We created a machine we can ask anything so we never have to think, but the machine is dumber than anyone using it so it just makes us all brain dead.

    It’s complete and not getting better. The tech cannot get better than it is now unless we create a totally different algorithmic approach and start from scratch.

    It’s an artificial hype bubble that distracts us from real solutions to real problems in the world.

    • 33550336@lemmy.world (OP) · 1 upvote · 4 days ago

      “It’s an artificial hype bubble that distracts us from real solutions to real problems in the world.”

      Yes, this should be noticed or even emphasized.

  • matelt@feddit.uk · 26 upvotes · 8 days ago

    Personally, I think the environmental impact and the sycophantic responses that take away the need to exercise one’s brain are my two biggest gripes.

    It was a fun novelty at first. I remember my first question to ChatGPT was “how to make hamster ice cream”, and I was genuinely surprised that it gave me a frozen-fruit recipe along with a plea not to harm hamsters by turning them into ice cream.

    Then it got out of hand very quickly: it got added to absolutely everything, despite the hallucinations and false facts. The intellectual property issue is also a concern.

  • DragonTypeWyvern@midwest.social · 24 upvotes · edited · 7 days ago

    If tech billionaires were talking about how this will reduce your work week and enable Universal Basic Income, all while increasing production, it would be one thing.

    Are they doing that?

    Or are they laying off more workers, increasing the work week for those who remain, reducing pay, and doing everything they can to create an inescapable surveillance state?

  • morphballganon@mtgzone.com · 24 upvotes · 8 days ago

    LLMs, incorrectly labeled as “AI,” are just the modern version of spell-check.

    You know how often people make totally embarrassing mistakes and blame spell-check?

    “AI” is another one of those.

    And it also requires tons of water that could be going to people’s homes.