And I don’t mean stuff like deepfakes/Sora/Palantir/anything like that; I’m talking about why the anti-GenAI crowd isn’t providing an alternative where you can get instant feedback when you’re journaling

  • LeninWeave [none/use name, any]@hexbear.net · 19 days ago

    an alternative where you can get instant feedback when you’re journaling

    GenAI isn’t giving you feedback. It’s not a person. The entire thing is a social black hole for a society where everyone is already deeply alienated from each other.

  • queermunist she/her@lemmy.ml · 19 days ago

    It’s a toy. I’m not against toys, but the amount of energy and resources we are pouring into this toy is alarming.

  • knfrmity@lemmygrad.ml · 19 days ago
    • It’s a complete waste of resources
    • The economic fallout of the bubble bursting could be unprecedented. (Yes shareholder value ≠ quality of life, but we’ve seen how working people get fucked over when the stock market crashes)
    • The environmental fallout is rarely considered
    • The cost to human knowledge and even thinking ability is huge
    • The emotional relationships people form with these models are concerning
    • What’s the societal cost of further isolating people?
    • What opportunity cost is there? How many actually useful things aren’t being discovered because the big seven are too focused on LLMs?
    • Nobody even wants LLMs. There’s no path to profitability. GenAI is a trillion dollar meme.
    • Even when they sometimes generate useful output, LLMs are probabilistic, so their outputs are not reproducible (see the toy sketch after this list)
    • Why do you need instant feedback when you’re doing absolutely anything? (Sometimes it’s warranted but then talk with a person)
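
    On the reproducibility point above: sampling from a next-token distribution gives different output on every run, while greedy decoding (always taking the most likely token) is the deterministic special case. A toy sketch, with made-up numbers:

    ```python
    import random

    # Made-up next-token distribution, for illustration only.
    dist = {"useful": 0.40, "slop": 0.35, "???": 0.25}

    # Sampling (what LLMs do at temperature > 0): varies from run to run.
    print(random.choices(list(dist), weights=list(dist.values()))[0])

    # Greedy decoding (argmax): the same on every run.
    print(max(dist, key=dist.get))  # always "useful"
    ```
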
    • LeninWeave [none/use name, any]@hexbear.net · 19 days ago

      The cost to human knowledge and even thinking ability is huge

      100%.

      We are communists. We should understand the labor theory of value. Therefore, we should understand why GenAI does not create any new value: it’s not a person and it does no labor. It recycles existing knowledge into a lower-average-quality slurry, which is dispersed into the body of human knowledge used to train the next model which is used to produce slop that is dispersed into the… and so on and so forth.

      • Cowbee [he/they]@lemmygrad.ml · 19 days ago

        I don’t think that’s the point that Marxists who are less anti-AI are making. Liberals might, but they reject the LTV. If we apply the law of value to generative AI, then we know that it’s the same as all machinery: it’s simply crystallized former labor that can lower the socially necessary labor time of certain commodities in certain conditions.

        Take, say, a stock image for a powerpoint slide that illustrates a concept. We can either have people dedicated to making stock images in broad and unique enough situations, and have people search for and select the right image, or we can generate an image or two and be done with it. Side by side, the end products are near-identical, but the labor-time involved in the chain for each is different. The value isn’t higher for the generated image; it lowers the socially necessary labor time for stock images.

        We are communists here, and while I do think there’s some merit to the argument that misunderstanding the boundaries and limitations of LLMs leads to some workers and capitalists relying on it in situations it cannot handle, I also think the visceral hatred I see for AI is sometimes clouding people’s judgements.

        TL;DR AI does have use cases. It isn’t creating new value, but it can lower SNLT in certain situations, and we as communists need to properly analyze those rather than dogmatically dismiss it whole-cloth. It’s over-applied in capitalism due to the AI bubble, that doesn’t mean it’s never usable.

        • LeninWeave [none/use name, any]@hexbear.net · 19 days ago

          I generally agree with you here, my problem is that despite this people do treat AI as though it’s capable of thought and of labor. In this very thread there are some (luckily not many) people doing it. As you say, it’s crystallized labor, just like a drill press.

          • Cowbee [he/they]@lemmygrad.ml · 19 days ago

            Some people treat it that way, and I agree that it’s a problem. There are also people who take a dogmatically anti-AI stance that teeters into idealism as well. The real struggle around AI is in identifying how we as the proletariat can make use of it, identifying what its limits are, while using it to the best of our abilities for any of its actually useful use-cases. As communists, we sit at an advantage already by understanding that it cannot create new value, which is why we must do our best to take a class-focused and materialist analysis of how it changes class dynamics (and how it doesn’t).

            • LeninWeave [none/use name, any]@hexbear.net · 19 days ago

              I agree with you here, although I want to make a distinction between “AI” in general (many useful use cases) and LLMs (personally, I have never seen a truly convincing use case, or at least not one that justifies the amount of development going into them). Not even LLM companies seem to be able to significantly reduce SNLT with LLMs without causing major problems for themselves.

              Fundamentally, in my opinion, the mistaken way people treat it is a core part of the issue. No capitalist ever thought a drill press was a human being capable of coming up with its own ideas. The fact that this is a widespread belief about LLMs leads to widespread decision making that produces extremely harmful outcomes for all of society, including the creation of a generation of workers who are much less able to think for themselves because they’re used to relying on the recycled ideas of an LLM, and a body of knowledge contaminated with garbage that’s difficult to separate from genuine information.

              I think any materialist analysis would have to conclude that these things have very dubious use cases (maybe things like customer service chat bots) and therefore that most of the labor and resources put into their development are wasted and would have been better allocated to anything else, including the development of types of “AI” that are more useful, like medical imaging analysis applications.

              • CriticalResist8@lemmygrad.ml · 19 days ago

                would have been better allocated to anything else, including the development of type of “AI” that are more useful, like medical imaging analysis applications.

                This is what China is developing currently, along with many other cool things with AI. Medical imaging AI was also found to have its limitations, but maybe they need to use a different neural method.

                Even if capitalist companies say that you can or should use their bot as a companion, that doesn’t mean you have to. We don’t have to listen to them. I’ve used AI to code stuff a lot, and it got the results – all for volunteer and free work, where hiring someone would have been prohibitive, and AI (LLMs specifically) was the difference between offering this feature or canceling the idea completely.

                There’s a guy on youtube who bought Unitree’s top-of-the-line humanoid robot (yes, they ship to your doorstep from China lol) and codes for it with LLM help, because the documentation is not super great yet. Then with other models he can have real-time image detection, or use the LIDAR more meaningfully than without AI. I’m not sure where he’s at today with his robot; he was working on getting it to fetch a beer from the fridge - baby steps, because at this stage these bots come with nothing in them except the SDK and you have to code literally everything you want them to do, including standing idle. The image recognition has an LLM in it so that it can detect any object. He showed an interesting demo: in just one second, it can detect the glass bottles in the camera frame and even their color, and draws a frame around them. This is a new-ish model and I’m not entirely sure how it works, but I assume it has to have an LLM in it to describe the image.
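
                (As an aside, the bounding-box part doesn’t strictly need an LLM; off-the-shelf detectors already do it. A minimal sketch with the ultralytics package and its standard pretrained COCO model, as one plausible way a demo like that could work; the file names are illustrative:)

                ```python
                from ultralytics import YOLO

                model = YOLO("yolov8n.pt")    # small pretrained detector (COCO classes, incl. "bottle")
                results = model("frame.jpg")  # one camera frame; file name is illustrative

                for box in results[0].boxes:
                    label = model.names[int(box.cls)]
                    print(label, box.xyxy.tolist())  # class name + bounding-box corners

                annotated = results[0].plot()  # numpy array of the frame with boxes drawn on it
                ```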

                I’m mostly on Deepseek these days; I’ve completely stopped using chatGPT because it just sucks at everything. Deepseek hallucinates much less and becomes more and more reliable, although it still outputs nonsensical comparisons. But it’s like with everything you don’t know: double-check and exercise critical thinking. Before LLMs we had wikipedia to ask our questions, and it wasn’t any better (and still isn’t).

                edit - like when deepseek came out with reasoning, which they pioneered, it completely redefined LLM development, and more work has been done from this new state of things, improving it all the time. They keep finding new methods to improve AI. If there’s a fundamental criticism I would make, it’s that perhaps it was launched too soon (though neural networks have existed for over a decade), and of course it was overpromised by tech companies who rely on their AI product to survive.

                OpenAI is dying because they don’t have anything else to offer than GPT; they don’t make money on cloud solutions or hardware or anything like that. If their model dies, they die along with it. So they’re in startup philosophy mode where they try to iterate as fast as possible and consider any update a good update (even when it’s not) just to try and retain users. They bleed $1 billion a month and live entirely on investor money; startup mode just doesn’t scale that high up. It’s not their $20 subscriptions that are ever going to keep them afloat lol.

              • Cowbee [he/they]@lemmygrad.ml · 19 days ago

                I think that’s a problem general to capitalism, and the orientation of production for profit rather than utility. What we need to do as communists is take an active role in clarifying the limitations and use-cases of AI, be they generative images, LLMs, or things like imaging analysis. I often see opposition to AI become more about the tool than the use of it under capitalism, and the distortions beyond utility that that brings.

                • LeninWeave [none/use name, any]@hexbear.net · 19 days ago

                  I think that’s a problem general to capitalism, and the orientation of production for profit rather than utility.

                  True, but like I said, companies don’t seem to be able to successfully reduce labor requirements using LLMs, which makes it seem likely that they’re not useful in general. This isn’t an issue of capitalism, the issue of capitalism is that despite that they still get a hugely disproportionate amount of resources for development and maintenance.

                  I often see opposition to AI become more about the tool than the use of it under capitalism, and the distortions beyond utility that that brings.

                  I do oppose the tool (LLMs, not AI) because I have yet to see any use case that justifies the development and maintenance costs. I’ll believe that this technology has useful applications once I actually see those useful applications in practice, I’m no longer giving the benefit of the doubt to technology we’ve seen fail repeatedly to be implemented in a useful manner. Even the few useful applications I can think of, I don’t see how they could be considered proportional to the costs of producing and maintaining the models.

        • CriticalResist8@lemmygrad.ml · 19 days ago

          I don’t follow. LLMs are a machine of course, but what does that imply? That something needs to be productive to exist? By the same LTV, LLMs reduce socially necessary labor time, like all machines.

          • LeninWeave [none/use name, any]@hexbear.net · 19 days ago

          LLMs are a machine of course, what does that imply?

          That they create nothing on their own, and the way they are used currently leads to a degradation of the body of knowledge used to train the next generation of LLMs because people treat them like they’re human beings capable of thought and not language recyclers, spewing their output directly into written works.

        • chgxvjh [he/him, comrade/them]@hexbear.net · 19 days ago

        Sure that tells us that some of the massive investments are stupid because their end-product won’t have much or any value.

          You still have a bunch of workers who used to produce something of value that required a certain amount of labor, and who are now replaced by slop.

        So the conclusion of the analysis ends up fairly similar, you just sound more like a dork in the process.

          • LeninWeave [none/use name, any]@hexbear.net · 19 days ago

          You still have a bunch of workers that used to produce something of value that required a certain amount of amount of labor that is now replaced by slop.

          A lot of the applications of AI specifically minimize worker involvement, meaning the output is 100% slop. That slop is included in the training data for the next model, leading to a cycle of degradation. In the end, the pool of human knowledge is contaminated with plausible-sounding written works that are wrong in various ways, the amount of labor required to learn anything is increased by having to filter through it, and the amount of waste due to people learning incorrect things and acting on them is also increased.

    • CriticalResist8@lemmygrad.ml · 19 days ago

      These are all historical problems of capitalism; we need to be able to cut through the veil instead of going around it, and attack the root cause, otherwise we are just reacting to new developments.

        • CriticalResist8@lemmygrad.ml · 19 days ago

          I didn’t want to dump a point-by-point on you unprompted but if you let me know I can write one up happily. A lot of what is said about AI is just capitalism developing as it does, the technology might be novel and unprecedented (it’s not entirely, a lot of what AI and AI companies do was already commonplace), but the trend is perfectly in line with historical examples and the theory.

          Some less political people might say we just need better laws to steer companies correctly, but of course we know where that goes, so the solution is to transform the class character of the state to transform the relations of production, and we recognized this long before AI existed. So my bigger point is that we need to keep sight of what’s important, socialism; not simply reacting to new developments any time they happen, as this would only keep us running circles within the existing state of things.

          A lot of what happens in the western tech sphere is happening in other industries under late-stage capitalism, chasing shorter and shorter term profits and therefore shorter-term commodities as well. But there is also a big ecosystem of open-source AI that exists inside capitalism, though it’s again not unique to AI and open-source under capitalism has its own contradictions.

          It’s like… at this point I think a DotP is more likely than outlawing AI is lol. And I think it’s healthy to see it like this.

    • 10TH_OF_SEPTEMBER_CALL [any, any]@hexbear.net · 19 days ago

      Most of the harm comes from the hype and social panic around it. We could have treated it as the interesting gadget it is, but the crapitalists thought they finally had a way to get rid of human labour and crashed the work economy… again

  • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml · 19 days ago

    My impression is that a lot of people realize this tech will be used against them under capitalism, and they feel threatened by it. The real problem isn’t with the tech itself, but with capitalist relations, and that’s where people should direct their energy.

  • HakFoo@lemmy.sdf.org · 19 days ago

    What I don’t like is that they’re selling a toy as a tool, and arguably as the One And Only Tool.

    You’re given a black box and told to just keep prompting it to get lucky. That’s fine for toys like “give me a fresh low-quality wallpaper every morning.” or “pretend you’re Monkey D. Luffy and write a song from his perspective.”

    But it’s not appropriate for high-stakes work. Professional tools have documented rules, behaviours, and limits. They can be learned and steered reliably because they’re deterministic to a fault. They treat the user with respect and prioritize correctness. Emacs didn’t wrap it in breathless sycophantic language when the code didn’t compile. Lotus 1-2-3 didn’t decide to replace half the “7’s” in your spreadsheet with some random katakana because it was close enough. AutoCAD didn’t add a spar in the middle of your apartment building because it was statistically probable after looking at airplane wings all day.

    • CriticalResist8@lemmygrad.ml · 19 days ago

      I mean, software glitches all the time; some widespread software has long-standing bugs in it that its developers or even auditors can’t figure out, and people just learn to work around the bug. Photoshop is built on 20-year-old legacy code and also uses non-deterministic algorithms that predate AI (the spot healing brush, for example, which you often have to redo several times to get a different result). I agree that there’s a big black box aspect to LLMs and GenAI, can’t say for all AI, but I don’t think it’s necessarily inherent to the tech or means it shouldn’t be developed more.

      Actually, image AI is fairly simple in its methods. Provide it with the exact same inputs (including the seed number) and it will output the same image every time, with only very minor variations. Should it have no variations? Depends; image gen AI isn’t an engineering tool and doesn’t profess to have a 0.1mm margin of error like other machines might need to.
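
      For instance, with the diffusers library the seed is just a generator you pass in: the same prompt, settings, and seed gives the same image back. A minimal sketch; the model name and prompt are only examples:

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      gen = torch.Generator("cuda").manual_seed(1234)  # the "seed number"
      image = pipe("a lighthouse at dusk", generator=gen).images[0]
      image.save("out.png")  # re-running with seed 1234 reproduces this image
      ```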

      Back in 2023, China already used an AI (they didn’t say what type exactly) to blueprint the electrical cabling on a new ship model, and it did it with 100% accuracy. It used to take a team of engineers one year to do this, and an AI did it in 24 hours. There are a lot of toy aspects to LLMs, but that is also a trap of capitalism, as the toy aspect is what tech companies in startup mode are banking on. It’s not all that neural models are capable of doing.

      You might be interested to know that the Iranian government has recently published guidelines on AI in academia. Unfortunately I don’t have a source, as this comes from an Iranian compsci student I know: they say that if you use LLMs in university, note the specific model used and the time of usage, and can prove you understand the topic, then that’s 100% clean for Iranian academic standards.

      Iran is investing a lot in tech under heavy sanctions, and making everything locally (it is estimated 40-50% of all uni degrees in Iran are science degrees). To them AI is a potential way to improve their conditions under this context, and that’s what they’re exploring.

      • Sleepless One@lemmy.ml · 19 days ago

        Back in 2023 already China used an AI (they didn’t say what type exactly) to blueprint the electrical cabling on a new ship model, and it did it with 100% accuracy.

        Do you have a link to the story? I ask because AI is a broad umbrella that many different technologies fall under, so it isn’t necessarily synonymous with generative AI/machine learning (even if that’s how the term has been used the past few years). Hell, machine learning isn’t even synonymous with neural networks.

        Circling back to the Chinese ship, one type of AI I could plausibly see being used is a solver for a constraint satisfaction problem. The techniques I had to learn for these in college don’t even involve machine learning, let alone generative AI.
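
        To make that concrete, here’s a minimal backtracking solver for a made-up cable-routing toy problem; everything in it (cables, ducts, constraints) is invented for illustration, not taken from the Chinese paper:

        ```python
        # Toy constraint-satisfaction sketch: route cables into ducts so that no
        # duct is over capacity and certain pairs never share a duct.
        from typing import Dict, Optional

        CABLES = ["power", "radar", "comms", "sonar"]
        CAPACITY = {"A": 2, "B": 1, "C": 2}                   # duct -> max cables
        SEPARATED = [("power", "radar"), ("power", "sonar")]  # must not share a duct

        def consistent(assign: Dict[str, str]) -> bool:
            for duct, cap in CAPACITY.items():
                if sum(1 for d in assign.values() if d == duct) > cap:
                    return False
            return all(a not in assign or b not in assign or assign[a] != assign[b]
                       for a, b in SEPARATED)

        def solve(assign: Dict[str, str]) -> Optional[Dict[str, str]]:
            if len(assign) == len(CABLES):
                return dict(assign)
            cable = next(c for c in CABLES if c not in assign)
            for duct in CAPACITY:            # try each duct, backtrack on failure
                assign[cable] = duct
                if consistent(assign):
                    result = solve(assign)
                    if result:
                        return result
                del assign[cable]
            return None

        print(solve({}))  # {'power': 'A', 'radar': 'B', 'comms': 'A', 'sonar': 'C'}
        ```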

        • CriticalResist8@lemmygrad.ml · 19 days ago

          I ran the story through Perplexity and looked at its sources :P (people often ask me how I find sources; I just ask Perplexity, then look at its links and find one that fits)

          https://asiatimes.com/2023/03/ai-warship-designer-accelerating-chinas-naval-lead/ they report here that a paper was published in a science journal, though Chinese-language.

          I did find this paper: https://www.sciencedirect.com/science/article/abs/pii/S004579492400049X but it’s not from the same team and seems to be about a different problem, though still in ship design (hull specifically) and mentions neural networks.

          • Conselheiro@lemmygrad.ml · 18 days ago

            This is sort of the issue with “AI” often just meaning “good software” rather than any specific technique.

            From a quick read the first one seems to refer to a knowledge-base or auto-CAD solution which is fundamentally different from any methods related to LLMs.

            The second one is some actually really impressive feature engineering used to solve an optimization problem with Machine Learning tools, which is actually much closer to a statistician using linear regressions and data mining than somebody using an LLM or a GAN.

            Importantly, neither method is as computationally intensive as LLMs, and the second one at least is a very involved process requiring a lot of domain knowledge, which is exactly the opposite of how GenAI markets itself.

      • 10TH_OF_SEPTEMBER_CALL [any, any]@hexbear.net · 19 days ago

        I mean software glitches all the time, some widespread software has long-standing bugs in it that its developers or even auditors can’t figure out and people just learn to work around the bug

        yeah my dad can kill a dozen people if something goes wrong at work. Yet they use windows and proprietary shit.

        If software isn’t secured it shouldn’t be used.

        • CriticalResist8@lemmygrad.ml · 19 days ago

          We can make software less prone to errors with proper guidelines and procedures to follow, as with anything. Just to add that it’s not solely on software devs to make it failproof.

          I would make the full switch to Linux but I need Windows for photoshop and premiere lol. And I never got Wine to work on Mint, but if I could I would ditch windows today. I think helping people get acquainted with linux is something AI can really help with, and may help more people make the switch.

          • 10TH_OF_SEPTEMBER_CALL [any, any]@hexbear.net · 19 days ago

            yes. It’s a tool that can (and must) be seized and re-appropriated imo. But it’s not magic. Main issue is that capitalists are selling it as some kind of genius in a bottle.

          • Horse {they/them}@lemmygrad.ml · 19 days ago

            I never got Wine to work on Mint, but if I could I would ditch windows today.

            apologies if this is annoying, but have you tried Lutris?
            it’s designed for games, but i use it for everything that needs wine because it makes it easy to manage prefixes etc. with a nice gui

            • CriticalResist8@lemmygrad.ml · 19 days ago

              No worries, I haven’t tried it but I also don’t have my Mint install anymore lol (Windows likes to delete the dual boot file when it updates and I never bothered to get it working again). I might give it another try down the line but I’m not ready to ditch Adobe yet. I’ll keep it in mind for if I make the switch in the future.

  • CoreComrade@lemmygrad.ml · 19 days ago

    For myself, it is the projected environmental impact. The power demand for data centers has already been on the rise due to the growth of the internet. With the addition of AI and the training thereof, the amount of power is rising/will rise at an unsustainable rate. The amount of electricity used creates strain on existing power grids, the amount of water that goes into cooling the hardware for the data centers creates strain on water supply, and this all plays into a larger amount of carbon emissions.

    Here is a good link that speaks to the environmental impact: genAI Environmental Impact

    Beyond the above, the threat of people losing jobs within an already brutal system is a bit terrifying to me. Though others have already written about this at more length here.

    • CriticalResist8@lemmygrad.ml · 19 days ago

      We have to be careful how we wield the environmental arguments. In the first phase, they’re often used to demonize Global South countries that are developing. Many of these countries completely skipped the personal computer step and are heavy consumers of smartphones and 4G data because it came around the time they could begin to afford the infrastructure (it’s why China is developing 6G already). There are a lot of arguments people make against smartphones (how the materials for them are produced, how you have to recharge a battery, how they get disposed of, how much electricity 5G consumes, etc.), but if they didn’t have smartphones, these countries would just not have the internet.

      edit: putting it all under the spoiler dropdown because I ended up writing an essay anyway lol.

      environmental arguments

      In the second phase, in regards to LLM environmental impact, it really depends, and it can already be mitigated. I’ll try not to make a huge comment because I don’t want to write an essay, but the source’s claims need scrutiny. Everything consumes energy - even we as human bodies release GHG. Going to work requires energy, and using a computer for work requires energy too. If AI can do in 10 seconds what takes a human 2 hours, then you are certainly saving energy, if that’s the only metric we’re worried about.

      So it has to be relativized, which most AI environmental articles don’t do. A chatGPT prompt consumes five times more electricity than a google search, sure, but in absolute terms that amount is still close to zero. Watching Youtube also consumes energy; a minute of youtube consumes much more energy than an LLM query does.
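
      Back-of-envelope with the figures in that comparison (the ~0.3 Wh per Google search number is a commonly cited rough estimate; treat all of this as approximate):

      ```python
      google_search_wh = 0.3                 # commonly cited rough estimate per search
      llm_prompt_wh = 5 * google_search_wh   # the "five times" ratio above -> 1.5 Wh

      daily_wh = 20 * llm_prompt_wh          # 20 prompts a day
      print(daily_wh / 1000, "kWh/day")      # 0.03 kWh/day
      # For scale: one hour of a 100 W laptop is 0.1 kWh.
      ```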

      Some people will say that we need to stop watching Youtube, no more treats or fun for workers, which is obviously not something we take seriously (deleting your emails to make room in data centers was a huge thing on linkedin a few years ago too).

      And all of this pales in comparison to the fossil fuel industry that we keep pumping money into in the west or obsolete tech that does have greener alternatives but we keep forcing on people because there’s money to be made.

      edit - and the meat and animal industry… Beef is very water-intensive and polluting, it’s not even close to AI. If that’s the metric then those that can should become vegan.

      Likewise for the water usage, there was that article about texas telling people to take fewer showers because it needs the water for data centers… I don’t know if you saw it at the time, it went viral on social media. It was a satirical article against AI, that people used as a serious argument. Texas never said to take fewer showers, these datacenters don’t use a lot of water at all as a share of total consumption in their respective geographical areas. In the US a bigger problem imo is the damming of the Colorado River so that almost no water reaches Mexico downstream, and the water is given out to farmers for free in arid regions so they can grow water-intensive crops like rice or dates (and US dates don’t even taste good)

      It also has sort of an anti-civ conclusion… Everything consumes energy and emits pollution, so the most logical conclusion is to destroy all technology and go back to living like the 13th century. And if we can keep some technology how do we choose between AI and Youtube?

      Rather I believe investments in research make things better over time, and this is the case for AI too (and we would have much better, safe nuclear power plants too if we kept investing in research instead of giving in to fearmongering and halting progress but I digress). I changed a lot of my point of view on environmentalism when back in 2020 people were protesting against 5G because “microwaves” and “we don’t need it” and I was on board (4G was plenty fast enough) until I saw how in some places they use 5G for remote surgery and that’s a great thing that they couldn’t do with 4G because there was too much latency. A doctor in China with 6G could perform remote surgery on a child in the Congo.

      In China electricity is considered a solved problem; at any time the grid has 2-3x more energy than it needs. The west has decided to stop investing in public projects and instead concentrate all surplus value in the hands of a select few. We have stopped building housing, we stopped building roads and rail, but we find the money to build datacenters that could be much greener, but why would they be when that costs money and there’s no laws that mandate it?

      Speaking of China, they still use a lot of coal (comparatively speaking), but they also see it as just an outdated means of energy production that can be replaced by newer, better alternatives. It’s very different; they’re doing a lot of solar and wind - in the west, btw, chinese solar panels are tariffed to hell and back; if they weren’t, every single building in europe would be equipped with solar panels - and even pioneering new methods of energy production and storage, like the sodium battery or gravity storage. Gravity battery storage (raising and lowering heavy blocks of concrete over the day) is not necessarily Chinese, but in Europe this is still just a prototype; in China they’re already building them as part of their energy strategy. They don’t demonize coal as uniquely evil like liberals might; rather, once they’re able to, they’ll ditch coal because there are better alternatives now.

      In regards to AI in China there’s been a few articles posted on the grad and it’s promising. They are careful about efficiency because they have to be. I don’t know if you saw the article from a few days ago about Alibaba Cloud cutting the number of GPUs needed to host their model farm by 82%. The test was done on NVidia H20 cards which is not a coincidence, it’s the best China can get by US decree. The top of the line model is the H100 (the H20 having only 20% of the capabilities) but the US has an order not to export anything above the H20 to China, so they find creative ways to stretch it. And now they’re developing their own GPU industry and the US shot itself in the foot again.

      Speaking of model farm… it’s totally possible to run models locally. I have a 16GB GPU and I can generate realistic pictures (if that’s the benchmark) in 30 seconds, the model only needs 5GB Vram but the architecture inside the card is also important for speed. For LLM generation I can run 12B models, rarely higher, and with new efficiency algorithms I think over time that will stretch to bigger and bigger models, all on the same card. They run model farms for the cloud service because so many people connect to it at the same time, but it’s not a hard requirement for running LLMs. In another comment I mentioned how Iran is interested in LLMs because like 4G and other modern tech that lags a bit in the west, they see it as a way to stretch their material conditions more (being heavily sanctioned economically).
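
      As a sketch of what “running locally” looks like in practice, here is a minimal example with the llama-cpp-python bindings and some quantized GGUF model file you’ve downloaded; the path and model size are illustrative:

      ```python
      from llama_cpp import Llama

      # A quantized ~12B GGUF fits in 16 GB of VRAM; n_gpu_layers=-1 offloads all layers.
      llm = Llama(model_path="./some-12b-model.Q4_K_M.gguf", n_gpu_layers=-1, n_ctx=4096)

      out = llm("Q: What is gravity battery storage?\nA:", max_tokens=128)
      print(out["choices"][0]["text"])
      ```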

      There’s also stuff being done in the open source community; for example, LORAs are used in image generation and help skew the generation towards a certain result. This means you don’t need to train a whole model: loras are usually trained by people on their own machines with like 100 images, and training one can be done in 30 minutes. So what we see is comparatively few companies/groups making full models (either LLM or image gen, called checkpoints) and most people making finetunes for these models.
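
      For what that looks like on the image side, with the diffusers library: a LoRA is a small weight file loaded on top of a full checkpoint. A minimal sketch; the LoRA path and file name are hypothetical:

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      # The LoRA itself is typically a few dozen MB, trained on ~100 images.
      pipe.load_lora_weights("./loras", weight_name="my_style.safetensors")

      image = pipe("a portrait, my_style").images[0]
      ```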

      Meanwhile in the West there’s a 500 billion $ “plan” to invest in the big tech companies that already have a ton of money, that’s the best they can muster. Give them unlimited money and expect that they won’t act like everything is unlimited. Deepseek actually came out shortly after that plan (called Stargate) and I think pretty much killed it before it even took off lol. It’s the destiny of capitalism to con the government into giving them money, of course they were not going to say “no actually if we put some personal investment we could make a model that uses 5x less energy”, because they would not get 500 billion $ if they did. They also don’t care about the energy grid, that’s an externality for them - the government will take care of it, from their pov.

      Anyway it’s not entirely a direct response to your comment because I’m sure you don’t believe in all the fearmongering, but it’s stuff I think is important to keep in mind and I wanted to add here. And I ended up writing an essay anyway lol.

  • fox [comrade/them]@hexbear.net · 19 days ago

    isn’t providing an alternative where you can get instant feedback when you’re journaling

    ELIZA was written in the 60s. It’s a natural language processor that’s able to have reflective conversations with you. It’s not incredible but there’s been sixty years of improvements on that front and modern ones are pretty nice.
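
    The core trick hasn’t changed much since then: pattern matching plus pronoun reflection. A minimal sketch of the idea:

    ```python
    import random
    import re

    # Pronoun reflection: "i feel alone" -> "you feel alone", etc.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                   "you": "i", "your": "my"}

    # (pattern, responses) pairs, checked in order; {0} is the reflected capture.
    PATTERNS = [
        (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"i (.*)", ["Why do you say you {0}?"]),
        (r"(.*)", ["Tell me more.", "Go on."]),
    ]

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.split())

    def respond(text):
        text = text.lower().strip(".!? ")
        for pattern, responses in PATTERNS:
            match = re.fullmatch(pattern, text)
            if match:
                reply = random.choice(responses)
                return reply.format(*(reflect(g) for g in match.groups()))

    print(respond("I feel like my journal goes into the void"))
    # e.g. "Why do you feel like your journal goes into the void?"
    ```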

    Otherwise, LLMs are a probabilistic tool: the input doesn’t determine the output. This makes them useless at things tools are good at, which is repeatable results based on consistent inputs. They generate text with an authoritative voice, but all domain experts find that they’re wrong more often than they’re right, which makes them unsuitable as automation for white-collar jobs that require any degree of precision.

    Further, LLMs have been demonstrated to degrade thinking skills, memory, and self-confidence. There are published stories about LLMs causing latent psychosis to manifest in vulnerable people, and LLMs have encouraged suicide. They present a social harm which cannot be justified by their limited use cases.

    Sociopolitically, LLMs are being pushed by some of the most evil people alive, and their motives must be questioned. You’ll find oceans of press about all the things LLMs can do that are fascinating or scary, such as the TaskRabbit story (which was fabricated entirely). The media is complicit in the image that LLMs are more capable than they are, or that they may become more capable in the future and thus must be invested in now.

  • Darkcommie@lemmygrad.ml · 19 days ago

    Because we can see what it does without proper regulation, and also it’s very overhyped by tech companies in terms of how much utility it actually has

    • The Free Penguin@lemmygrad.ml (OP) · 19 days ago

      Ye, imo they’re not regulating it in the right places. They’re so uber-focused on making it reject how-to guides for things they don’t like that they don’t see the real problem: technofascist cults like palantir being able to kill random people with the press of a button

  • KalergiPlanner@lemmygrad.ml · 19 days ago

    “And i don’t mean stuff like deepfakes/sora/palantir/anything like that” bro, we don’t live in a world where LLMs are excluded from those uses

    the technology itself isn’t bad, but we live in a shitty capitalist world where every instance of automation, rather than liberating mankind, fucks them over. a thing that can allow one person to do the labor of many is a beautiful thing, but under capitalism increases of productivity only lead to unemployment; though, on the bright side, it consequently also causes a decrease in the rate of profit.

  • infuziSporg [e/em/eir]@hexbear.net · 19 days ago

    Why would you want instant feedback when you’re journaling? The whole point of journaling is to have something that’s entirely your own thoughts.

    • The Free Penguin@lemmygrad.ml (OP) · 19 days ago

      I dont like writing my own thoughts down and just having them go into the void lol and i want a real hoomin to talk to about these things but i dont have one TwT

      • infuziSporg [e/em/eir]@hexbear.net · 19 days ago

        What does “go into the void” mean? The LLM may use them as context for a while or it may not use them as context at all, it may even periodically erase its memory of you.

        I find talking about heavy or personal things way easier with strangers than with people you know. There are no stakes with a stranger; you can literally walk up to someone on the street or in a park who doesn’t look busy and ask them if they want to talk.

        • Fruitbat [she/her]@lemmygrad.ml · 19 days ago

          Is it okay if I push back a bit? Your last comment just feels a little dismissive. I don’t know the Free Penguin, but I will point to other reasons why someone might not be able to easily talk to someone. For example, if someone can’t walk or get around, they won’t be able to just talk to someone like that. I’m mainly speaking about my mom before she died, since she had copd and her health declined after something happened to her at her former work place. She really hurt her spine and couldn’t really get around. I remember her being very upset with how alone she felt.

          Then also, speaking for myself, I have a speech impediment plus anxiety, so it is really difficult for me to just approach someone and talk to them, depending on various factors. Along with that, some strangers can be outright hostile and make things worse, and someone might just have had a lot of bad interactions with strangers. To go back to myself, people do judge how someone speaks and tend to think less of you, like if you have an accent or have trouble speaking.

          • infuziSporg [e/em/eir]@hexbear.net · 19 days ago

            Chronic loneliness and anxiety are a function of societal arrangements that are exacerbated by capitalist solutions, not inherent and unavoidable parts of the human condition until they are cured by a panacea ex machina.

            Believe it or not, before 2022 we did have lots of different approaches around the world to these things. And we are poorer for turning away from all those approaches.

            I am a rather awkward person in many ways, I am instantly recognizable by many people as “weird”, I have my own share of anxiety that I’ve gotten better at masking over the years. If I spent ages 19-25 interacting with a digital yes-man instead of with humans, I would have no social skills.

            Your response sounds closely analogous to when car proponents use the disabled as a shield. We don’t need everyone to drive, we need to minimize the distance between each other, and making driving (or LLM usage) a necessity for getting by in society only creates bigger problems, because the root problem is not being adequately addressed.

            • Fruitbat [she/her]@lemmygrad.ml · 19 days ago

              I feel like you might be taking me at bad faith here or misinterpreting me.

              Chronic loneliness and anxiety are a function of societal arrangements that are exacerbated by capitalist solutions, not inherent and unavoidable parts of the human condition until they are cured by a panacea ex machina.

              I agree? I’m very aware.

              Believe it or not, before 2022 we did have lots of different approaches around the world to these things. And we are poorer for turning away from all those approaches.

              I would argue that depends. Not everywhere has a lot of different approaches to these things. If anything, all LLMs did was take inherent contradictions and bring them to new heights; these things were already there to begin with, maybe smaller in form.

              Your response sounds closely analogous to when car proponents use the disabled as a shield. We don’t need everyone to drive, we need to minimize the distance between each other, and making driving (or LLM usage) a necessity for getting by in society only creates bigger problems, because the root problem is not being adequately addressed.

              Again, where do I say that, besides being taken in bad faith or misread? All I’m trying to point out is that there are usually reasons why someone would turn to something like an LLM or might not easily talk to someone else. As you said, the root problem is not being addressed. To add, it also just leaves a bad taste in my mouth and kind of hurts to be told that what I said sounds closely analogous to using the disabled as a shield, especially when I was talking about myself and my mom.

              For example, when my mom was in the hospital in the last few weeks before she died, she had to communicate on a white board because staff couldn’t understand her. I had to use the same white board too, because staff couldn’t understand what I was saying either. Just to give you an idea of how much trouble I have speaking to others. I’m not saying someone shouldn’t try to interact with others and should just go talk to a chatbot. People should have another person to talk to.

          • infuziSporg [e/em/eir]@hexbear.net · 19 days ago

            The ability to self-actualize and shape the world belongs to those who are willing to potentially cause momentary discomfort.

            Also the default status of many people is lonely and/or anxious; receiving social energy from someone often at least takes their mind off that.

            Advancements in material technology in the past half century have often ended up stunting our social development and well-being.

      • ZWQbpkzl [none/use name]@hexbear.net · 19 days ago

        I would be extremely cautious about that sort of usage of AI. Commercial AIs are psychopathic sycophants and have been known to drive people insane by constantly gassing them up.

        Like you clearly want someone to talk to about your life and such (who doesn’t?) and I understand not having someone to talk to (fewer and fewer do these days). But you’re opting for a corporate machine which certainly has instructions to encourage your dependence on it.

          • ZWQbpkzl [none/use name]@hexbear.net · 19 days ago

            No idea. But I’d say it’s less likely, especially if you’re running a local model with Ollama.

            I think the key here is to prevent the AI from developing a “profile” on you, and self-controlled ollama sessions are the surest bet for that.
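
            A minimal sketch of that setup: Ollama serves models on localhost, so the conversation never leaves your machine (this assumes the daemon is running and a model, e.g. llama3, has been pulled):

            ```python
            import requests

            # Ollama's local REST API listens on port 11434 by default.
            resp = requests.post(
                "http://localhost:11434/api/generate",
                json={"model": "llama3",
                      "prompt": "Reflect this journal entry back to me: ...",
                      "stream": False},
            )
            print(resp.json()["response"])
            ```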

        • The Free Penguin@lemmygrad.ml (OP) · 19 days ago

          Also, i delete my convos about these things after 1 prompt so i dont have a lasting convo on that. But tbh exposure to the raw terms of the topic has let me go from tech allegories to T9 cipher to where i am now, where i can at least prompt a robot using A1Z26 or hex to obscure the raw terms a bit
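
          (For reference, A1Z26 just maps each letter to its position in the alphabet, and hex is the raw byte values; both are trivial to decode, so they obscure rather than protect:)

          ```python
          def a1z26(text: str) -> str:
              # a=1, b=2, ... z=26; non-letters are dropped
              return "-".join(str(ord(c) - 96) for c in text.lower() if c.isalpha())

          print(a1z26("journal"))           # 10-15-21-18-14-1-12
          print("journal".encode().hex())   # 6a6f75726e616c
          ```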

  • big_spoon@lemmygrad.ml · 19 days ago

    there’s the people who hate it bc they have petit-bourgeois leanings and think of the stuff as “stealing content” and “copyrighted material”, like artist people, “code monkeys” or writers

    and there’s the people who hate it because it’s an obvious grift made to siphon resources and “try” to be a big replacement for proles, and a hugely wasteful technology that drains water sources and raises electricity bills with its data centers

    yeah, it’s kinda useful for making a drawing, filling blank space in a document, or being a dumb assistant that hallucinates anything to pretend that it knows stuff

    • LeninWeave [none/use name, any]@hexbear.net · 19 days ago

      there’s the people who hate it bc they have petit-bourgueois leanings and think at the stuff as “stealing content” and “copyrighted material” like artist people

      It’s actually not petty bourgeois for proletarians in already precarious positions to object to the blatant theft of their labor product by massive corporations to feed into a computer program intended to replace them (by producing substandard recycled slop). Even in the cases where these people are so-called “self-employed” (usually not actually petty bourgeois, but rather precarious contract labor), they’re still correct to complain about this - though the framing of “copyrighted material” is flawed (you can’t use the master’s tools to dismantle his house). No offense, but dismissing them like this is a bad take. I agree with the rest of your comment.

      • 10TH_OF_SEPTEMBER_CALL [any, any]@hexbear.net · 19 days ago

        I agree with you; a friend of mine was an illustrator, and she lost all her jobs. Now she does scenery.

        But they lost their jobs because there’s a downward pressure on costs everywhere in society. That money is, once again, stolen.

        Also, you forgot to mention the $2-a-day kenyan labourers who trained chatGPT in the first place. Or the awful water consumption.

        I dont think the tech itself is evil, or even special; it’s just a big array of numbers at the end. The issues are the techbros and their new hype.

        • CriticalResist8@lemmygrad.ml · 19 days ago

          I am/was in visual arts (or rather graphic design) and it’s been a long time coming. Spec work was all the rage to rally against back in the day, and despite the protests it hasn’t gone anywhere. I’m sure that back in the 90s some old-school designers were against Photoshop too. And of course we are the first people to lose our jobs when a crisis hits, because marketing takes a backseat.

          For years Photoshop was the standard in website mockups which is just wild to me as it’s not what it’s meant for. Today we have tools like Figma, which unfortunately exist as SaaS (Software-as-a-service, with a monthly subscription and on the ‘cloud’). The practice still endures despite the fact that you have to code for mobile now and monitors don’t come in the standard 4:3 aspect ratio anymore but in many variations.

          Oh, I could add SaaS too to the difficulties designers face. For example Wordpress has a lot of prefab themes, so you don’t even need to mockup anything in Photoshop or figma anymore. You just pick one and start building your website - it’s how I made all my websites, I don’t need to duplicate elements in image files and I honestly have no idea how I would even start making a modern website on Photoshop. The footer for example which is the same on all pages is easily editable from Wordpress. I feel like I would be wasting time making a footer on Photoshop when I can just edit it on the website directly in a visual editor and it will update itself on every page.

          I don’t see any of the above as a bad thing overall. What we see is that society adapts to changing conditions. The fight is still for socialism.

          • PolandIsAStateOfMind@lemmygrad.ml · 18 days ago

            I’m sure that back in the 90s some old-school designers were against Photoshop too.

            Yes, CGI in general was demonised, and at some point it even came close to the current shitstorm, with the same arguments about the death of art and human creativity etc. In reality it just vastly increased the output of art, especially commercial art.

    • ComradeSalad@lemmygrad.ml · 19 days ago

      Those aren’t petit bourgeois tendencies, those are pre-capitalist artisanal tendencies.

      Except for the copyright aspect, but independent artists rarely clamour about their “copyrights”, as their issue is more about how their work, whether copyrighted or not, is getting fed into a capitalist black hole machine designed to replace workers to benefit no one but a few capitalists.

      Most within the art world couldn’t care less if someone took the time to learn their style one-to-one by studying their work. That’s the entire point of art to a degree, as the end product is still an expression of labour value. Something that can’t be said about GenAI.

  • Catalyst_A@lemmygrad.ml · 19 days ago

    There doesn’t need to be an alternative option to offer. I don’t support genAI because it’s flooded the internet with fake content that has no label to differentiate it. It’s irreversible.

  • ZWQbpkzl [none/use name]@hexbear.net · 19 days ago

    crowd isn’t providing an alternative where you can get instant feedback when you’re journaling

    Side bar: This is a very specific usage of GenAI. Are you like writing your diary into ChatGPT?

  • ZWQbpkzl [none/use name]@hexbear.net · 19 days ago

    GenAI really is taking people’s jobs. It might not do it better. It might be less safe. It might even be less cost efficient. It’s still happening.

    It’s not even a case of “do you think you can be replaced by AI?” Instead it’s “does your employer think you can be replaced with AI?” Any white collar worker would be foolish to think that’s something their employer has not considered. Corporate advertising is pleading with them to reconsider multiple times a day.

    • CriticalResist8@lemmygrad.ml · 19 days ago

      Exactly, and this process keeps happening in capitalism making AI neither unique nor truly new in its social repercussions. Therefore the answer is socialism so that tech frees us instead of creating crises.

      Although the companies that bought in on the promise of replacing labor are now walking it back as they realize it doesn’t replace labor but enhances it. It’s like Zuckerberg not allowing his kids on Facebook: AI companies are not replacing their employees with AI either, but they sell the package because capitalism needs to make money, not social good.

      • ZWQbpkzl [none/use name]@hexbear.net · 19 days ago

        AI companies are not replacing their employees with AI either

        You sure about that? I mean, they’re obviously hiring more because they have the investors at the moment, but that doesn’t mean they aren’t using AI internally.

        • CriticalResist8@lemmygrad.ml · 19 days ago

          They’re rehiring now. For example, Klarna laid off their customer service reps to replace them with AI, but they’re walking it back and rehiring human reps (tbh klarna has other problems right now lol).

            • CriticalResist8@lemmygrad.ml · 19 days ago

              I read too fast lol. I was talking about the engineers that work on the models in this case; tech companies would never replace them with AI because they know it wouldn’t work out.

              But I looked more broadly into it and couldn’t find any source saying that the mass layoffs we are seeing in tech currently are replacing jobs with AI; rather, it seems they’re getting rid of the jobs entirely, as happens routinely in the industry (as they are shifting to another focus, which currently is AI). There’s Amazon, which is building automated warehouses, but YMMV; they also started on these before AI and have been at it for a while.

              For new AI companies like openAI, the jobs they are giving to AI (such as customer service) were never created in the first place, so it’s not replacing a worker, since the job never existed.

  • GreatSquare@lemmygrad.ml · 19 days ago

    It’s not feedback. That’s not what the tool is for. It doesn’t have an opinion. There’s no one on the other side of the screen. The “A” stands for Artificial.

      • GreatSquare@lemmygrad.ml · 19 days ago

        I don’t see that as feedback but it depends on your definition of feedback. Just having something come out of the AI is not feedback to me.

        A writer writes something. An AUDIENCE provides feedback on their writing. An AI can be processing the writing but can’t be an audience because it is just a tool. Just because the AI returned some text back won’t change that fact regardless of the content of the text.

        • CriticalResist8@lemmygrad.ml · 19 days ago

          Data is feedback for example. If you change something on a web page and notice a huge drop in visits then that provides actionable information, i.e. feedback. The visitors didn’t vocalize it, you only see it as numbers on a spreadsheet.

          • GreatSquare@lemmygrad.ml · 19 days ago

            True but OP isn’t using AI to collate or analyse data of the visitors to his website.

            As I said, it’s how you use the tool. Not every use case is valid. In a LOT of cases AI is not useful or efficient, and it’s sometimes doing more harm than good.

    • The Free Penguin@lemmygrad.ml (OP) · 19 days ago

      And tbh im not looking for opinions or human interaction on there, im just looking for something that says my posts in another way, for idek what reason, but a human would get uncomfy reading them so yeah