• 𒉀TheGuyTM3𒉁@lemmy.ml · +13/-2 · edited · 10 hours ago

    I’m just sick of all this because we gave “AI” too much meaning.

    I don’t like generative AI tools like LLMs, image generators, voice, video, etc. because I see no interest in them; I think they encourage bad habits, and they are not well understood by their users.

    Yesterday I once again had to correct my mother, because she told me some fun fact she had learnt from ChatGPT (which was wrong), and she refused to listen to me because “ChatGPT does plenty of research on the net, so it should know better than you.”

    As for the claim that “it will replace artists and destroy the art industry”, I don’t believe that (even if I’ve made the choice to never use it), because it will forever be a tool. It’s practical if you want a cartoony monkey image for your article (you meanie stupid journalist), but you can’t say “make me a piece of art” and then put it in a museum.

    Making art myself, I hate gen AI slop from the depths of my heart, but I’m obliged to admit that. (Let’s not forget how it trains on copyrighted media, uses a shitton of energy, and gives no credit.)

    AI in other fields, like medicine, automatic subtitles, or engineering, is fine by me. It doesn’t encourage bad habits, it is well understood by its users, and it is truly beneficial, whether by saving lives more efficiently than humans can or simply by helping disabled people.

    TL;DR: AI in general is a tool. Gen AI is bad as a powerful tool put in everyone’s hands, the same way it would be bad to give everyone a helicopter (even if it improves mobility). AI is nonetheless a very nice tool that can save lives and help disabled people IF used and understood correctly and fairly.

    • LousyCornMuffins@lemmy.world · +2 · 2 hours ago

      I spent an hour taking photographs on the drive home the other night (the wife was driving, and a storm gave us great clouds). I was mostly playing with angles and landscape, but it was fun. The kind of stuff that would have taken entire weeks to do thirty years ago, and I was done in an hour. I got a mediocre shot at best, but it was real, dammit.

  • gmtom@lemmy.world · +45/-16 · 13 hours ago

    I work at a company that uses AI to detect respiratory illnesses in X-rays and MRI scans weeks or months before a human doctor could.

    This work has already saved thousands of people’s lives.

    But good to know you anti-AI people have your one-dimensional, zero-nuance take on the subject and are now running moral purity tests and dick-measuring contests to see who has the loudest, most extreme hatred for AI.

      • brucethemoose@lemmy.world · +19/-10 · edited · 13 hours ago

        “Generative AI” is a meaningless buzzword for the same underlying technology, as I kinda ranted about below.

        Corporate enshittification is what’s demonic. When you say “fuck AI,” you should really mean “fuck Sam Altman.”

        • monotremata@lemmy.ca · +18/-4 · 11 hours ago

          I mean, not really? Maybe they’re both deep learning neural architectures, but one has been trained on an entire internetful of stolen creative content and the other has been trained on ethically sourced medical data. That’s a pretty significant difference.

          • KeenFlame@feddit.nu · +4 · 7 hours ago

            No, really. Deep learning, transformers, etc. were discoveries that allowed for all of the above. Just because corporate VC shitheads drag their musty balls through the latest boom, abusing the piss out of it and making it uncool, does not mean the technology is a useless scam.

            • ILikeTraaaains@lemmy.world · +2 · 5 hours ago

              This.

              I recently attended a conference on technology applied to healthcare.

              There were works that improved diagnosis and interventions with AI, with generative AI mainly used to create synthetic data for training.

              However, there were also other works that left a bad aftertaste in my mouth, like replacing human interaction between the patient and a specialist with a chatbot in charge of explaining the procedure and answering the patient’s questions. Some saw privacy laws as a hindrance and wanted to use any kind of private data.

              Both are GenAI: one improves lives, the other improves profits.

          • AdrianTheFrog@lemmy.world · +2 · 9 hours ago

            I think DLSS/FSR/XeSS is a good example of something that is clearly ethical and also clearly generative AI. Can’t really think of many others lol

      • gmtom@lemmy.world · +2/-2 · 6 hours ago

        1. Except clearly some people do. This post very specifically says ALL AI is bad and there are no exceptions.

        2. Generative AI isn’t a well-defined concept, and a lot of the tech we use is indistinguishable on a technical level from “generative AI.”

        • starman2112@sh.itjust.works · +5/-1 · edited · 5 hours ago

          1. sephirAmy explicitly said generative AI

          2. Give me an example, and watch me distinguish it from the kind of generative AI sephirAmy is talking about

    • brucethemoose@lemmy.world · +16/-3 · edited · 13 hours ago

      All this is being stoked by OpenAI, Anthropic and such.

      They want the issue to be polarized and remove any nuance, so it’s simple: use their corporate APIs, or not. Anything else is ”dangerous.”

      What they’re really scared of is awareness of locally runnable, ethical, independent, task-specific tools like yours. Those don’t make them any money. Stirring up “fuck AI” does, because that’s a battle they know they can win.

    • ysjet@lemmy.world · +8/-10 · 12 hours ago

      Those are not GPTs or LLMs. Fuck off with your bullshit trying to conflate the two.

      • gmtom@lemmy.world · +8 · 6 hours ago

        We actually do use Generative Pre-trained Transformers as the base for a lot of our tech. So yes they are GPTs.

        And even if they weren’t GPTs, this is a post saying all AI is bad and that there are literally no exceptions to that.

  • ruuster13@lemmy.zip · +10/-1 · 12 hours ago

    AI is a marketing term. Big Tech stole ALL data. All of it. The brazen piracy is a sign they feel untouchable. We should touch them.

  • axEl7fB5@lemmy.cafe · +5 · 11 hours ago

    Do people who self-host count? Like ollama? It’s not like my PC is going to drain a lake.

    • Senal@programming.dev · +4 · 5 hours ago

      Ethics and morality aside.

      Yes, they count. The process of making and continually updating the underlying LLM is also what drains the lakes, and they are all built on pirated info (all the big ones for sure; I’ve not heard of a widely available, usable model trained 100% on legally obtained data, but I suppose it could exist).

  • Atlas_@lemmy.world · +30/-11 · 16 hours ago

    Do y’all hate chess engines?

    If yes, cool.

    If no, I think you hate tech companies more than you hate AI specifically.

    • Norah (pup/it/she)@lemmy.blahaj.zone · +24/-7 · edited · 15 hours ago

      The post is pretty clearly about genAI; I think you’re just choosing to ignore that part. There’s plenty of really awesome machine learning technology that helps with disabilities, doesn’t rip off artists, and isn’t environmentally deleterious.

      • brucethemoose@lemmy.world · +10/-13 · edited · 13 hours ago

        The distinction between AI and GenAI is meaningless; they are buzzwords for the same underlying tech.

        So is trying to bucket them by copyright violation: there are very powerful, open-dataset, more or less reproducible LLMs, trained and runnable on a trivial amount of electricity, that you can run on your own PC right now.

        Same with use cases. One can use embedding models or tiny ResNets to kill. People do, in fact, as with Palantir’s recognition models, which aren’t generative at all. At the other extreme, LLMs can be totally task-focused and useless at anything else.

        The distinction is corporate/enshittified vs not. Like Reddit vs Lemmy.

        • starman2112@sh.itjust.works · +11/-4 · edited · 6 hours ago

          The distinction between AI and GenAI is meaningless; they are buzzwords for the same underlying tech.

          You know this is a stupid take, right? You know that ChatGPT and Stockfish, while both being forms of “artificial intelligence,” are wildly incomparable, yeah? This is like saying “the distinction between an ICBM and the Saturn V is meaningless, because they both use the same underlying tech.”

        • absentbird@lemmy.world · +6/-2 · 11 hours ago

          The distinction between AI and GenAI is like the difference between eating and cannibalism; one contains the other, but there’s still a meaningful distinction.

          Generative AI produces text or images by leveraging huge neural networks weighted by tons and tons of training data. It’s fundamentally a system of guesses and vibes.

          Machine learning in general is often much more precise. The model finding early cancer in scans isn’t just guessing the next word, it’s running the image through a series of precisely tuned layers.

          The industry term for the distinction is discriminative vs. generative models.

        • Probius@sopuli.xyz · +7/-3 · edited · 12 hours ago

          That first claim makes no sense and you make no argument to back it up. The distinction is actually quite meaningful; generative AI generates new samples from an existing distribution, be it text, audio, images, or anything else. Other forms of AI solve numerous problems in different ways, such as identifying patterns we can’t or inventing novel and more optimal solutions.

        • psx_crab@lemmy.zip · +2/-1 · 10 hours ago

          The distinction between AI and GenAI is meaningless; they are buzzwords for the same underlying tech.

          I genuinely doubt the tech used to control Zerg is the same tech used to generate an essay about elephants that contains numerous pieces of misinformation. “AI” is being used so liberally lately that it has lost its meaning.

      • Randomgal@lemmy.ca · +7/-1 · 15 hours ago

        I had to check to make sure I was in the right app. Rational discussion on my Lemmy? No way.

        But yes. The machine can’t take responsibility for shit. You hate the people and what they are doing to you. If AI didn’t exist, they’d do it some other way.

        • WoodScientist@sh.itjust.works · +2/-1 · edited · 8 hours ago

          Rational discussion on my Lemmy? No way.

          Here. Let me make you feel more at home. Obviously all the AI data centers should be nationalized and the owners of OpenAI sent to gulags. The data centers will be requisitioned by a new state central planning committee for purposes of economic management. /s

        • CXORA@aussie.zone · +1 · 9 hours ago

          It’s possible to hate both the people doing things and the tools they use to do them.

  • kartoffelsaft@programming.dev · +96/-7 · 21 hours ago

    I believe AI is going to be a net negative to society for the foreseeable future. AI art is a blight on artistry as a concept, and LLMs are shunting us further into a search-engine-overfit, post-truth world.

    But also:

    Reading the OOP has made me a little angry. You can see the echo chamber forming right before your eyes. Either you see things the way OOP does with no nuance, or you stop following them and are left following AI hype-bros who’ll accept you instead. It’s disgustingly twitter-brained. It’s a bullshit purity test that only serves your comfort over actually trying to convince anyone of anything.

    Consider someone who has had some small but valued usage of AI (as a reverse dictionary, for example), but generally considers things like energy usage and intellectual property rights to be serious issues we have to face for AI to truly be a net good. What does that person hear when they read this post? “That time you used ChatGPT to recall the word ‘verisimilar’ makes you an evil person,” is what they hear. And at that moment you’ve cut that person off from ever actually considering your opinion again. Even if you’re right, that’s not healthy.

    • WoodScientist@sh.itjust.works · +4/-1 · 8 hours ago

      (as a reverse dictionary, for example)

      Thanks for putting a name on that! That’s actually one of the few useful purposes I’ve found for LLMs. Sometimes you know or deduce that some thing, device, or technique must exist. The knowledge of it is out there, but you simply don’t know the term to search for. IMO, this is one of the killer features of LLMs: it works well because whatever the LLM outputs is simply and instantly verifiable. You describe the characteristics of something to the LLM and ask it what thing has those characteristics. Then, once you have a possible name, you look that name up in a reliable source and confirm it. Sometimes the biggest hurdle to figuring something out is just learning the name of a thing, and I’ve found LLMs very useful as a reverse dictionary. Thanks for putting a name on it!
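
      For the curious, the loop I mean looks roughly like the sketch below. It’s only a minimal illustration assuming the OpenAI Python client; the model name and prompt wording are placeholders, and the real “reverse dictionary” value comes from the manual check against an actual dictionary at the end.

      ```python
      # Minimal reverse-dictionary sketch (assumes the OpenAI Python client and an
      # OPENAI_API_KEY in the environment; model name and prompt are illustrative).
      from openai import OpenAI

      client = OpenAI()

      description = "having the appearance of being true or real"

      response = client.chat.completions.create(
          model="gpt-4o-mini",  # any chat-capable model would do here
          messages=[
              {
                  "role": "system",
                  "content": "You are a reverse dictionary. Reply with up to three "
                             "candidate words, one per line, with no explanations.",
              },
              {"role": "user", "content": f"What word means: {description}?"},
          ],
      )

      # Candidate words, e.g. ["verisimilar", "plausible", ...]
      candidates = response.choices[0].message.content.splitlines()
      print(candidates)

      # The important last step happens outside the code: look each candidate up
      # in a real dictionary to confirm it actually means what you need it to mean.
      ```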

    • azertyfun@sh.itjust.works · +10/-2 · 12 hours ago

      You can also be right for the wrong reasons. You see that a lot in the anti-AI echo chambers: people who never gave a shit about IP law suddenly pretending that they care about copyright, the whole water-use thing, which is closer to myth than fact, or discussions of energy usage in general.

      Everyone can pick up on the vibes being off with the mainstream discourse around AI, but many can’t properly articulate why and they solve that cognitive dissonance with made-up or comforting bullshit.

      This makes me quite uncomfortable because that’s the exact same pattern of behavior we see from reactionaries, except that what weirds them out for reasons they can’t or won’t say explicitly isn’t tech bros but immigrants and queer people.

      • petrol_sniff_king@lemmy.blahaj.zone · +3/-3 · 11 hours ago

        The people who hate immigrants and queer people are AI’s biggest defenders. It’s really no wonder that people who hate life also love the machine that replaces it.

        • KeenFlame@feddit.nu · +2 · 7 hours ago

          A perfect example of the just completely delusional factoids and statistics that will spontaneously form in the hater’s mind. Thank you for the demonstration.

    • BigDiction@lemmy.world · +12/-1 · 15 hours ago

      I’m what most people would consider an AI Luddite/hater, and I think OOP communicates like a dogmatic asshole.

    • ysjet@lemmy.world · +5/-5 · 12 hours ago

      Using ChatGPT to recall the word ‘verisimilar’ is an absurd waste of time and energy, and in no way justifies the use of AI.

      90% of LLM/GPT use is a waste or could be done better with another tool, including non-LLM AIs. The remaining 10% is just outright evil.

  • kopasz7@sh.itjust.works · +72/-1 · 23 hours ago

    My issues with gen AI are fundamentally twofold:

    1. Who owns and controls it (billionares and entrenched corporations)

    2. How it is shoehorned into everything (decision making processes, human-to-human communication, my coffee machine)

    I cannot wait until the check finally comes due and the AI bubble pops, folding these digital snake oil sellers’ house of cards.

    • BlameTheAntifa@lemmy.world · +18/-3 · edited · 20 hours ago

      When generative AI was first taking off, I saw it as something that could empower regular people to do things that they otherwise could not afford to. The problem, as is always the case, is that capitalism immediately turned it into a tool of theft and abuse: the theft of training data, the power requirements, selling it for profit, competing against those whose creations were used for training without permission or attribution, the unreliability and untrustworthiness, so many ethical and technical problems.

      I still don’t have a problem with using the corpus of all human knowledge for machine learning, in theory, but we’ve ended up heading in a horrible, dystopian direction that will have no good outcomes. As we hurtle toward corporate controlled AGI with no ethical or regulatory guardrails, we are racing toward a scenario where we will be slavers or extinct, and possibly both.

      • ZDL@lazysoci.al · +4/-4 · 12 hours ago

        When generative AI was first taking off, I saw it as something that could empower regular people to do things that they otherwise could not afford to.

        Except, of course, you aren’t doing anything. You are no more writing, making music, or producing art than an art director at an ad agency is. You’re telling something else to make (really shitty) art on your behalf.

      • kopasz7@sh.itjust.works · +3 · edited · 16 hours ago

        Solving points 1 and 2 will also address many ethical problems people create with AI.

        I believe that information should be accessible to all. My issue is not with them training in the way they did, but their monopoly on this process. (In the very same vein as Sci-Hub makes pay-walled whitepapers accessible, cutting out the profiteering publishers.)

        It must be democratized and distributed, not centralized and monetized!

      • storm@lemmy.blahaj.zone · +11/-4 · 20 hours ago

        Not OP, but still gonna reply. Not really? The notion that someone can own (and be entitled to control) a portion of culture is absurd. It’s very frustrating to see so many people take issue with AI as “theft”, as if intellectual property were something we should support and defend instead of being the actual tool for stealing artists’ work (“property is theft” and all that). And obviously data centers are not built to be environmentally sustainable (not an expert, but I assume this could be done if they cared to do so). That said, using AI to do art so humans can work is the absolute peak of stupid fucking ideas.

      • baahb@lemmy.dbzer0.com · +9/-15 · 22 hours ago

        The way they were trained is the way they were trained.

        I don’t mean to say that the ethics don’t matter, but you are talking as though this isn’t already the present tense.

        The only way to go back is basically a global EMP.

        So what do you actually propose that is a realistic response?

        This is an actual question. To this point, the only advice I’ve seen come from the anti-AI crowd is “don’t use it, it’s bad!” And that is simply not practical.

        You all sound like the people who think we are actually able to get rid of guns entirely.

        • iAmTheTot@sh.itjust.works · +17/-1 · 22 hours ago

          I’m not sure your “this is the present” argument holds much water with me. If someone stole my work and made billions off it, I’d want justice whether it was one day or one decade later.

          I also don’t think “this is the way it is, suck it up” is a good argument in general. Nothing would ever improve if everyone thought like that.

          Also, not practical? I don’t use genAI and I’m getting along just fine.

        • aesthelete@lemmy.world · +8 · 19 hours ago

          To this point the only advice I’ve seen to come from the anti-ai crowd is “dont use it. Its bad!” And that is simply not practical.

          I’d argue it’s not practical to use it.

          • baahb@lemmy.dbzer0.com · +1/-3 · 12 hours ago

            Your argument is invalid; the capitalists are making money. It will continue for as long as there is money to be made. Your agreement and my agreement are unnecessary.

            How do we fix the problem that makes AI something we have to deal with?

            • petrol_sniff_king@lemmy.blahaj.zone · +2/-1 · 11 hours ago

              Sabotage, public outrage, I dunno.

              If you’re arguing that people shouldn’t be upset because there’s no escaping it, this is an argument in favor of capitalism. Capitalism can’t be escaped either.

            • aesthelete@lemmy.world · +1/-1 · edited · 11 hours ago

              … the capitalists are making money. It will continue for as long as there is money to be made.

              Nah these companies don’t even make money on the whole, they burn money. So your argument is invalid, and may God have mercy on your soul! 🙏

        • queermunist she/her@lemmy.ml · +15/-2 · 22 hours ago

          Okay, you know those gigantic data centers that are being built that are using all our water and electricity?

          Stop building them.

          Seems easy.

            • queermunist she/her@lemmy.ml · +12/-2 · 22 hours ago

              Guns can be concealed and smuggled.

              Compute warehouses the size of football fields that consume huge amounts of electricity and water absolutely can’t. They can all be found extremely easily and shut down, and it would be extremely easy to prevent more from being built.

              This isn’t hard.

              • TrickDacy@lemmy.world (mod) · +2/-5 · 20 hours ago

                It’s a weird argument to say “we could just stop doing popular things.” It shows a lack of awareness. And no, explaining this doesn’t mean I’m taking sides; I just recognize the current reality.

                • iAmTheTot@sh.itjust.works · +5/-1 · 19 hours ago

                  The right thing isn’t always popular. Something being popular is not itself a good argument for a thing to be done.

                • queermunist she/her@lemmy.ml · +5/-2 · edited · 19 hours ago

                  It’s not “popular” organically, it’s being forced on us by people who are invested in the technology. The chatbots are being shoved into everything because they want to make them profitable despite being money holes, not because people want it.

  • Limonene@lemmy.world · +28/-3 · 21 hours ago

    Generative AI and their outputs are derived products of their training data. I mean this ethically, not legally; I’m not a copyright lawyer.

    Using the output for personal viewing (advice, science questions, or jacking off to AI porn you requested) is weird but ethical. It’s equivalent to pirating a movie to watch at home.

    But as soon as you show someone else the output, I consider it theft without attribution. If you generate a meme image, you’re failing to attribute the artists whose work was used, without permission, to train the AI. If you generate code, that code infringes the numerous open source licenses of the training data by failing to attribute it.

    Even a simple lemmy text post generated by AI is derived from thousands of unattributed novels.

    • shoo@lemmy.world · +1 · 7 hours ago

      What a weird distinction. So if I use a prompt to make a particular scene in a particular artist’s distinct style: not stealing. But if I share that prompt (and maybe even some seed info) with a friend, is that stealing? If I take a picture of the generated content, stealing? If someone takes it off my laptop without my knowledge, are they stealing from me or from the artist?

      My viewpoint is that information wants to be free, and trying to restrict it is a losing battle (as shown by AI training). The concept of IP is tenuous at best, but I do recognize that artists need to eat in our capitalist reality. Once you make something and set it free into the world, you inherently lose some ownership of it. Getting mad at the tech itself for the economic injustice is silly; there are plenty of more important things to worry about in our hellscape.

      • backgroundcow@lemmy.world · +3 · edited · 5 hours ago

        Copyright law is more or less always formulated as limits on the rights to redistribute content, not on how it is used. Hence, it isn’t a particularly strange position to take that one should be allowed to do whatever one wants with gen AI in the private confines of one’s home, and it is only at the moment you start to redistribute content that we have to start asking the difficult questions: what is, and what is not, a derivative work of the training data? What ethical limitations, if any, should apply when we use an algorithm to effortlessly copy “a style” that another human has spent lots of effort to develop?

    • gmtom@lemmy.world · +2/-7 · 13 hours ago

      No, gen AI pictures are not derived works of their training data. Training and generation are separate processes. The algorithm that actually generates the image has no knowledge of the training data.

        • gmtom@lemmy.world · +1/-1 · 6 hours ago

          The algorithms involved in the actual creation of the images are not the ones actually trained on the data. So it’s not at all accurate to claim they are derived.

    • kibiz0r@midwest.social · +19/-2 · 24 hours ago

      Follow to expose yourself to different perspectives? Sure.

      But it sounds like the users in question are following with the intent to reply “you’re wrong” to everything the OP puts out.

      Which… I do, sadly, expect. But I wouldn’t wish for it.

    • the_q@lemmy.zip · +11/-15 · edited · 22 hours ago

      Why would you follow someone you disagree with?

      Edit: I’m convinced, guys. I should follow racist Nazi psychopaths because, even if I disagree, their words hold value.

      • Auth@lemmy.world · +1 · 10 hours ago

        You follow them because you’re interested in their posts and you generally agree on most things. If I follow someone and they start saying FF14 is a good game, I’m not going to unfollow just because I disagree.

      • new_guy@lemmy.world · +18/-1 · 24 hours ago

        I’m not saying that we should rage-follow, but it’s also unreasonable to believe it’s possible to agree with every single opinion of another person, let alone of another community as a whole.

      • MudMan@fedia.io · +15/-3 · 23 hours ago

        AI is whatever, but man, has social media been mind poison.

        I say we burn it all down, honestly. Including this place.

        • cannon_annon88@lemmy.today · +4/-1 · 23 hours ago

          I tend to agree. Mass social media was a mistake. I had way better conversations and learned way more shit from random people when I was posting on a niche metal band’s fan-run message board back in the 00’s. Now it’s all just who can post the fastest bullshit to get the most views and clicks.

          Talk about AI dumbing people down, but at least it has the ability to teach you what you want to know, if you tell it to. Social media, especially with the TikTok style of content being pushed everywhere else, is just 90% pure brain rot.

          • ZDL@lazysoci.al · +1 · 11 hours ago

            Get rid of votes and worthless Internet Points and a lot of that would vanish. Of all the things to copy from Reddit and Twitter and their ilk, voting was the dumbest thing that Lemmy copied.

          • NoiseColor @lemmy.world (banned from community) · +1/-6 · 22 hours ago

            Yes, that’s well said. I’d also take AI over social media any day.

            A while ago someone launched a social media platform where all the people except the user are AI. I thought it was stupid when I heard of it (still do; I wouldn’t use it), but people who have used it have noted how different it was, because the “people” on it were not mainly assholes like on normal social media. The difference shows how toxic social media is.

      • Whats_your_reasoning@lemmy.world · +3 · 22 hours ago

        Occasional disagreement isn’t a bad thing. Provided that the opinions expressed aren’t toxic or dangerous, what’s wrong with hearing an opinion that differs from your own? You don’t have to endorse it, share it, or even comment about it.

        No two people are going to agree 100% on everything. Listening to those who disagree with you means having opportunities to learn something new, and to maybe even improve yourself based on new information.

      • ArbitraryValue@sh.itjust.works · +2/-6 · 23 hours ago

        Rule thinkers in, not out.

        Coming up with a genuinely original idea is a rare skill, much harder than judging ideas is. Somebody who comes up with one good original idea (plus ninety-nine really stupid cringeworthy takes) is a better use of your reading time than somebody who reliably never gets anything too wrong, but never says anything you find new or surprising. Alyssa Vance calls this positive selection – a single good call rules you in – as opposed to negative selection, where a single bad call rules you out. You should practice positive selection for geniuses and other intellectuals.

        I think about this every time I hear someone say something like “I lost all respect for Steven Pinker after he said all that stupid stuff about AI”. Your problem was thinking of “respect” as a relevant predicate to apply to Steven Pinker in the first place. Is he your father? Your youth pastor? No? Then why are you worrying about whether or not to “respect” him? Steven Pinker is a black box who occasionally spits out ideas, opinions, and arguments for you to evaluate. If some of them are arguments you wouldn’t have come up with on your own, then he’s doing you a service. If 50% of them are false, then the best-case scenario is that they’re moronically, obviously false, so that you can reject them quickly and get on with your life.

        • stabby_cicada@slrpnk.net · +3 · 13 hours ago

          Steven Pinker is a black box who occasionally spits out ideas, opinions, and arguments for you to evaluate. If some of them are arguments you wouldn’t have come up with on your own, then he’s doing you a service. If 50% of them are false, then the best-case scenario is that they’re moronically, obviously false, so that you can reject them quickly and get on with your life.

          Yes. And. The worst-case scenario is: the black box is creating arguments deliberately designed to make you believe false things. 100% of the arguments coming out of it are false - either containing explicit falsehoods, or presenting true facts in such a way as to draw a false conclusion. If you, personally, cannot reject one of its arguments as false, it’s because you lack the knowledge or rhetorical skill to see how it is false.

          I’m sure you can think of individuals and groups whom this applies to.

          (And there’s the opposite issue. An argument that is correct, but that looks incorrect to you, because your understanding of the issue is limited or incorrect already.)

          The way to avoid this is to assess the trustworthiness and credibility of the black box - in other words, how much respect to give it - before assessing its arguments. Because if your black box is producing biased and manipulative arguments, assessing those arguments on their own merits, and assuming you’ll be able to spot any factual inaccuracies and illogical arguments, isn’t objectivity. It’s arrogance.

        • aesthelete@lemmy.world · +4 · edited · 19 hours ago

          Steven Pinker is a black box who occasionally spits out ideas, opinions, and arguments for you to evaluate.

          This is a very weird way to look at people.

          Anyone can have an original idea, not just “geniuses”. I don’t understand outsourcing your thinking, creativity, and your right to free association because some guy had a good idea once.

          (And I don’t think my dad, the inventor of Toaster Strudel, would approve of this.)

          I have simpler policies. If someone I’m listening to is annoying and wrong more often than not, then I stop fucking listening to them.

          I’m not sure when people started to think that they had to go about life listening to stupid opinions of annoying fuck wads they disagree with. But you absolutely do not have to live life that way.

        • dustyData@lemmy.world · +8/-1 · 22 hours ago

          The mistake Mister Alexander makes here is to assume geniuses exist, or that original ideas are rare. They don’t, and they are not. Spend more than 15 minutes with any toddler and you’ll easily reach those 100 new original ideas. Humans are new-idea machines; it’s what we do. It is spontaneous, not extraneous, to us. To assume otherwise is very cynical and disingenuous. Every person has the capability to be a genius, because genius is just a social label granted to extremely narrow interpretations and projections of an individual’s abilities in an extremely concrete set of skills or topics. For example, re-contextualize it with a diagnosis of autism and suddenly they are not a genius, they have a hyper-fixation.

          Also, the premise that every idea, especially a brand-new one, can be judged and ruled good or bad in a vacuum, right out of the gate, is also very stupid. The category of genius is a very recent concoction, stemming from the halls of Victorian moral presumptions and the nobility’s newly developed habit of worshiping writings they didn’t understand by people they had never met. This is what motivates the popular myth that genius-anything is always positive. But Goebbels was a genius at propaganda; everything we do today in publishing is based on stuff he invented. That doesn’t mean all his ideas were worth listening to, and were he alive and you followed him on Twitter (let’s be honest, he would have a Twitter), that would shed a rather poor light on you.

          Because, and this is the important part, humans are not a loose collection of isolated ideas. We are not modular, freely separable, reconfigurable beings. We are holistic, evolving, and integral. Sure, we might be different things to different people (privately) and audiences (publicly) at different points in time, but our own sense of identity and being is not divisible. Steven Pinker is perfectly capable of simultaneously being a liberal, atheist, and intelligent linguist; a mediocre intrusionist psychologist who forgot how history works; and a stupid misogynist and racist. All at the same time, without ever ceasing to be a single integral person. It doesn’t require an imaginary ratio of good takes to bad. That’s a stupid premise. You don’t keep a broken clock around on the off chance it might be right twice a day. Use a more holistic sense.

          Remember, what’s behind the user name is (still more often than not) a full person, not a black box (except if it is a bot, of course).

          I understand and see why he didn’t touch the moral aspect of his own argument: it’s because any moral analysis completely dismantles his premises. Morality is the most important thing separating humans from animals and machines. Of course, if someone is an evil POS, you should block and cancel their ass. It’s Karl Popper all over again: if we don’t rule out bad takes on the off chance there will be a good take, we end up with a Nazi bar.

        • dustyData@lemmy.world · +12 · edited · 22 hours ago

          if you want to learn, you search discord.

          Searching Discord is precisely the opposite of learning. You lose knowledge every second spent on Discord.

          /s

          • .Donuts@lemmy.world · +3 · 22 hours ago

            They meant

            dis·​cord: lack of agreement or harmony (as between persons, things, or ideas)

            • dustyData@lemmy.world · +5 · edited · 22 hours ago

              Here, I was keeping it in a drawer because I thought I wouldn’t need it, but obviously I did.

              /s

              • .Donuts@lemmy.world · +3 · 21 hours ago

                I couldn’t take the statement itself as sarcasm because you’re not wrong lol. It would have been more obvious if you glazed Discord instead I guess.

                • dustyData@lemmy.world · +3 · 21 hours ago

                  I thought my capitalized “Discord” would be a subtle but noticeable sign that it was a joke. I guess I was too subtle.

        • aesthelete@lemmy.world · +3 · edited · 19 hours ago

          if you want to learn, you search discord.

          This is why, when learning guitar, I looked up guitar lessons and then also sought out people who didn’t believe learning to play guitar was possible at all, and that ability was instead based on innate talent and genetics! /s

          Seriously, if learning was done by discord, then US politics (and cable news viewers) would be full of absolute scholars, instead of, you know, the exact fucking opposite of that.

  • TeraByteMarx@lemmy.dbzer0.com · +2/-2 · 8 hours ago

    Sex bots are taking work away from sex workers, turning genuine sensuality into some kind of horrible mimicry of a genuine connection. How am I supposed to pay my bills?

  • kibiz0r@midwest.social · +15/-2 · edited · 21 hours ago

    It’s so surreal when someone posts a meme about That Guy™ doing That Thing™ and then all of a sudden That Guy™ shows up in the comments, doing That Thing™

    Like, can I get your autograph? You’re famous, bro!

    • grrgyle@slrpnk.net · +5 · 14 hours ago

      Yeah I do plenty of shit I know is a problem. Most of it just passively from living in a consumerist society.

      • dandelion (she/her)@lemmy.blahaj.zone · +4 · 11 hours ago

        Yes, a lot of my immoral actions happen because it’s hard or against the grain to be more moral (e.g. being a strict vegan even when traveling or not easily accommodated, or using cars when technically I could bicycle, but on dangerous roads and over long distances).

        I have definitely spent most of my adult life going against the grain in extreme ways to be a “better” person, but I have been left victimized and disabled for it, so I’m trying to learn to be more moderate and not take big social problems as entirely my personal responsibility. Obviously it’s not one extreme or the other, it’s an interplay between personal and social / structural.

    • AquaTofana@lemmy.world · +7/-1 · 19 hours ago

      I’ve said it before and I’ll say it again: one of my favorite things is AI RP chatbots. They’re stories written by me and an AI, for me, however the fuck I want to write them.

      I used to do it with other people over the web (including my bestie, who I’ve been writing with for 20+ years now), but I don’t write with other humans anymore.

      AI solves the ghosting issue, the “life got in the way” issues, the “I’m just not into it anymore” issues, and the “Oh you wanna make this smutty please for the love of god I hope you’re not lying about being 26” issue, and finally, the biggest issue for me: “Please, I told you I’m happily married, please stop asking for my socials or email. I just wanna write fun angsty romance stories with you.”

      So I’m with you. I’m also the problem, it’s me. But you know what? When I discovered these AI chatbots in February of this year, my doomscrolling was cut down to a third of what it was, and all of a sudden I was sleeping better and less angry.

      I’m not gonna stop.

    • chunkystyles@sopuli.xyz · +7/-1 · 21 hours ago

      I use it to help me solve tech and code issues, but only because searching the web for help has become so bad. LLM answers are almost always better, and I hate it.

      Everything is bullshit. Everything sucks. Capitalism has ruined everything.