LeninWeave

  • 0 Posts
  • 73 Comments
Joined 4 years ago
Cake day: July 18th, 2021


  • I agree with you, however they still tend to act mostly after some damage has been done. That’s the purpose of the socialist market economy, to allow development of productive forces but catch it when it goes wrong. That doesn’t mean capitalists are prevented from making harmful decisions to begin with. The difference with Hong Kong is that there’s basically no catching, it just gets worse and worse.

    As I said, a typical recent example was the housing bubble the government had to step in to catch. It was allowed to go on for a while before it was stopped, and it did harm people during that time.



  • In many of those applications, we are seeing that the required labor is not reduced. The translations need significant editing, just like machine translations of the past. The summaries contain mistakes that lead to wasted time and money. The same thing with the code, plus the maintenance burden is increased even in cases where the code is produced faster, which is often not actually the case. Companies lay people off, then hire them back because it doesn’t work. We can see this being proven in real time as the hype slowly collapses.

    I lump these distinct tools together because they are all conflated anyways, and opposition to, say, generative AI and LLMs is often tied together by liberals.

    I’m not a liberal and I’ve been extremely cautious to avoid conflating different types of so-called “AI” in this thread. If you keep doing so, we’re just going to be talking past each other.

    We are experiencing a temporary flood of investment in a tool with far more narrow use-cases than Liberalism will acknowledge, and when this proves divorced from reality and the AI bubble crashes, we will be able to more properly analyze use-cases.

    100% agreed, and it can’t come soon enough. In a few years at most we’ll see where SNLT was actually, meaningfully reduced by these tools, and my prediction is that it will be very narrow (as you say). At that point, I’ll believe in the applications that have been proven. What’s tragic is not only the wasted resources involved, but the opportunity cost of the technologies not explored as a result. Especially other forms of “AI” that are less hyped but more useful.

    I’m against both the idea that AI has no utility, and the idea that AI is anything more than just another tool that needs to be correctly analyzed for possible use-cases. Our job as communists is to develop a correct line on AI and agitate for that line within the broader working class, so that it can be used (in however big or small the capacity) for proletarian liberation. It will not be some epoch-changing tool, and will only be one tool in a much larger toolbox, but it does have utility and already exists whether we like it or not.

    As I said, I agree. It’s not special, it’s just treated as special by the people who hype it, with potentially disastrous consequences. The use cases are unproven, and the mounting evidence indicates that a lot of the use cases aren’t real and AI actually doesn’t reduce SNLT.


  • I think that’s a problem general to capitalism, and the orientation of production for profit rather than utility.

    True, but like I said, companies don’t seem to be able to successfully reduce labor requirements using LLMs, which makes it seem likely that they’re not useful in general. That isn’t an issue of capitalism; the issue with capitalism is that, despite this, they still get a hugely disproportionate amount of resources for development and maintenance.

    I often see opposition to AI become more about the tool than the use of it under capitalism, and the distortions beyond utility that that brings.

    I do oppose the tool (LLMs, not AI) because I have yet to see any use case that justifies the development and maintenance costs. I’ll believe that this technology has useful applications once I actually see those useful applications in practice; I’m no longer giving the benefit of the doubt to a technology we’ve seen fail repeatedly to be implemented in a useful manner. Even the few useful applications I can think of, I don’t see how they could be considered proportional to the costs of producing and maintaining the models.



  • I agree with you here, although I want to make a distinction between “AI” in general (many useful use cases) and LLMs (personally, I have never seen a truly convincing use case, or at least not one that justifies the amount of development going into them). Not even LLM companies seem to be able to significantly reduce SNLT with LLMs without causing major problems for themselves.

    Fundamentally, in my opinion, the mistaken way people treat it is a core part of the issue. No capitalist ever thought a drill press was a human being capable of coming up with its own ideas. The fact that this is a widespread belief about LLMs leads to widespread decision making that produces extremely harmful outcomes for all of society, including the creation of a generation of workers who are much less able to think for themselves because they’re used to relying on the recycled ideas of an LLM, and a body of knowledge contaminated with garbage that’s difficult to separate from genuine information.

    I think any materialist analysis would have to conclude that these things have very dubious use cases (maybe things like customer service chat bots) and therefore that most of the labor and resources put into their development are wasted and would have been better allocated to anything else, including the development of types of “AI” that are more useful, like medical imaging analysis applications.



  • You still have a bunch of workers that used to produce something of value that required a certain amount of labor that is now replaced by slop.

    A lot of the applications of AI specifically minimize worker involvement, meaning the output is 100% slop. That slop is included in the training data for the next model, leading to a cycle of degradation. In the end, the pool of human knowledge is contaminated with plausible-sounding written works that are wrong in various ways, the amount of labor required to learn anything is increased by having to filter through it, and the amount of waste due to people learning incorrect things and acting on them is also increased.





  • And at the end of the day, Americans putting a figure like Polk/Roosevelt as the face of the American space program wouldn’t turn our stomachs as much as when Americans put Paperclip-Nazi Wernher von Braun there in 1960, for example.

    Nobody cared or cares about von Braun either, it’s just completely ignored and excused that he was a fascist. Everyone just automatically believes (without even having to be told) that he was a smol bean apolitical scientist, even though that’s obviously ridiculous and not even close to true.

    And lebensraum mostly matters to liberals in countries directly affected by it; liberals outside of those often care about it as little as they care about manifest destiny. As for the academic differences you point out, I agree with other users that those are just due to racism. One affected Europeans and the other didn’t.



  • Yeah, it isn’t even a unique take; somehow there are many leftists still using “luddite” as an insult in the context of the “AI” “debate”. It’s an incredible self-own: they’re literally arguing against their own point without realizing it.

    I think it’s a form of cope to think that anything China does is automatically different or better. They explicitly have a market economy, meaning that the same things can happen there that happen in the west, in the same ways. Just because the socialist government is there to catch it when it goes wrong, doesn’t mean it can’t go wrong to begin with.



  • All I know is that this new form of luddism will dissipate into history just like the luddism of the past century.

    You’re aware that the luddites were correct, right? They weren’t vulgar technology haters, they had valid concerns about their pay and the quality of the products produced (actually an excellent comparison to many people who oppose LLMs), which turned out to be accurate. The idea of luddites as you use it here is explicitly liberal propaganda used to smear labor movements for expressing valid concerns, and they didn’t dissipate into history, there were and are subsequent similar labor movements.



  • there’s the people who hate it bc they have petit-bourgeois leanings and think of the stuff as “stealing content” and “copyrighted material” like artist people

    It’s actually not petty bourgeois for proletarians in already precarious positions to object to the blatant theft of their labor product by massive corporations to feed into a computer program intended to replace them (by producing substandard recycled slop). Even in the cases where these people are so-called “self-employed” (usually not actually petty bourgeois, but rather precarious contract labor), they’re still correct to complain about this - though the framing of “copyrighted material” is flawed (you can’t use the master’s tools to dismantle his house). No offense, but dismissing them like this is a bad take. I agree with the rest of your comment.