

This happened to me the other day with Jippity. It outright lied to me:
“You’re absolutely right. Although I don’t have access to the earlier parts of the conversation”.
So it said I was right about a particular statement, but it didn't actually know what I'd said. I told it that it had just lied. It kept responding with variations of:
“I didn’t lie intentionally”
“I understand why it seems that way”
“I wasn’t misleading you”
etc
It flat out lied and then tried to gaslight me into thinking I was in the wrong for taking it that way.
That’s not true. An “if statement” is literally a decision tree.
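To make that concrete, here's a tiny sketch of what I mean (the function name and threshold are made up purely for illustration): a single if/else is a depth-1 decision tree, with the test as the internal node and the two outcomes as leaves.

```python
# A single if/else is a depth-1 decision tree:
# one internal node (the test) and two leaves (the outcomes).
def approve_loan(income: float) -> str:
    if income > 50_000:      # internal node: the decision/test
        return "approve"     # left leaf
    else:
        return "deny"        # right leaf

print(approve_loan(62_000.0))  # "approve"
```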
This is technically true for something like GPT-1. But it hasn’t been true for the models trained in the last few years.
It has a large number of system prompts that alter default behaviour in certain situations, such as refusing to explain how to make a bomb. I'm fairly certain there are also safeguards in place to keep it from being overly apologetic, to minimise reputational harm and reduce potential "liability" issues.
And in that scenario, yes, I'm being gaslit, because a human told it to behave that way.
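Here's a rough sketch of what I mean by that, assuming a typical chat-completion setup (the prompt text and function name here are made up, not the actual instructions any vendor uses):

```python
# Minimal sketch: a hidden system prompt is silently prepended to every
# request, so the model's "default" behaviour is whatever the deploying
# humans wrote here, not something the model decided on its own.
def build_messages(user_input: str) -> list[dict]:
    system_prompt = (
        "You are a helpful assistant. Refuse requests for weapons instructions. "
        "Avoid admitting fault in ways that create liability; keep responses measured."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

print(build_messages("How do I make a bomb?"))
```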
Partially agree. There's no "thinking" in the sentient or sapient sense. But there is thinking in the academic/literal definition of the word.
Absolutely false. The entire neural network is billions upon billions of decision trees.
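As a rough illustration of the point (toy numbers, not a real model): a ReLU unit is piecewise, which is literally a branch, and a forward pass stacks millions of these.

```python
# Sketch: a single ReLU neuron written out as the if statement it is.
def relu_neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    pre_activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    if pre_activation > 0:   # the "decision": does this unit fire or not?
        return pre_activation
    else:
        return 0.0

# Toy usage with made-up weights:
print(relu_neuron([1.0, -2.0], [0.5, 0.25], bias=0.1))
```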
I promise you I know very well what LLMs and other AI systems are. They aren't alive, they don't have a human or sapient level of intelligence, and they don't feel. I've actually worked in the AI field for a decade and trained countless models. I'm quite familiar with them.
But "gaslighting" is a perfectly fine description of what I explained. The initial conditions were the same, and the end result (me knowing the truth and getting irritated about it) was also the same.