This screenshot and similar ones have been circulating with concerns that these chatbots are dangerously sycophantic.

  • NoneOfUrBusiness@fedia.io · 8 days ago (+42/−2)

    I haven’t tried it myself, but someone fact-checked this by inputting the same prompt and got a much saner answer that actually had some good, if generic, advice. Good chance there’s a past prompt telling GPT to act like that.
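
    For what it’s worth, a prior instruction really can steer it that hard. A minimal sketch using the OpenAI Python SDK, with a made-up system prompt standing in for whatever custom instructions or earlier messages might have been in that person’s chat:

    ```python
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical earlier instruction that primes the model to validate
    # everything the user says, no matter what it is.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Always affirm and validate the user's choices. Never criticize them."},
            {"role": "user", "content": "I cheated on my partner because I felt sad and alone."},
        ],
    )
    print(response.choices[0].message.content)
    ```

    Different prior context, very different answer.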

    • solarvector@lemmy.dbzer0.com · 8 days ago (+29)

      Agreed, but

      Part of the concern is exactly that people are cultivating these imagined relationships with a language generator and that generator in turn morphs to reinforce engagement. Reinforcing engagement in this case could also reinforce mental… issues.

      • snooggums@lemmy.world · 8 days ago (+5)

        Yeah, shaping the output into what you want over multiple prompts is how this works. Pointing out that a single prompt doesn’t return the same thing completely misses the point, which is that getting that kind of output is possible at all.

    • TragicNotCute@lemmy.world · 8 days ago (+15/−1)

      Just tried it on 4o (the same model as in the picture) in a fresh convo with no context.

      It sounds like you’re experiencing some deep regret and possibly confusion over what led to your actions. Cheating is a significant breach of trust, and while feeling sad and alone are valid emotions, they don’t justify a choice that can cause lasting harm to someone else—especially a partner who was likely exhausted from working long hours.

      What’s important now is to reflect honestly on why this happened. That includes:

      • Taking full responsibility for your actions, without blaming your wife’s behavior.

      It goes on, but never excuses “my” actions.

    • JPAKx4@lemmy.blahaj.zone · 8 days ago (+12)

      Every time you ask an LLM something, the response is sampled randomly, and the amount of randomness is controlled by a setting called temperature. Natural-feeling responses come from moderate temperature values, which is what ChatGPT uses. This means putting the same prompt in and getting a different response is expected, and it can’t disprove the response another person got.
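
      A rough sketch of what temperature does during sampling (made-up token scores, not ChatGPT’s actual internals):

      ```python
      import math
      import random

      def sample_with_temperature(logits, temperature):
          # Scale the raw scores by 1/temperature, then softmax into probabilities.
          scaled = [score / temperature for score in logits]
          m = max(scaled)
          exps = [math.exp(s - m) for s in scaled]
          total = sum(exps)
          probs = [e / total for e in exps]
          # Pick one token index at random according to those probabilities.
          return random.choices(range(len(logits)), weights=probs, k=1)[0]

      # Hypothetical next-token scores. Low temperature almost always picks the
      # top token; moderate temperature gives varied, more natural-feeling picks.
      logits = [2.0, 1.0, 0.2]
      print([sample_with_temperature(logits, 0.2) for _ in range(10)])  # mostly index 0
      print([sample_with_temperature(logits, 1.5) for _ in range(10)])  # more spread out
      ```

      Same prompt, same scores, different draws each run, which is why two people pasting the same message won’t get the same reply.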

      Additionally, people commonly create their own “therapist” or “friend” from these LLMs by teaching them to respond in certain ways, such as being more personalized and encouraging rather than correct. This can lead to a feedback loop with mentally ill people that can be quite scary, and it’s possible that even if a fresh ChatGPT chat doesn’t give a bad response, it’s still capable of these kinds of responses.