• shittydwarf@sh.itjust.works · 7 days ago

    They tried to set it up like Grok, which checks what Musk’s opinion on a topic is before answering, but that’s impossible with mister-word-salad

    • PhilipTheBucket@quokk.auOP · 7 days ago

      In general, one of the secrets about LLMs is that the intelligence they appear to display lives largely in the mind of the person interacting with them and the person watching and interpreting the output. If you ever watch one of them “on its own” trying to reason its way through a problem without help, that becomes painfully clear.

      In this case, there’s no intelligence in the source material for them to fall back on, so that cheat isn’t available. It’s actually a really instructive demo.

      • sunzu2@thebrainbin.org · 7 days ago

        That’s why LLMs require the operator to already know the answer, or be able to tell when the output is wrong, to be of any practical use.