You’re not required to have a jury (as you mentioned), and the current lawyer meta for “complicated” cases (such as cryptocurrency cases that are seen as too technical or boring) is to request a bench trial.
A relatively recent example is Trump’s civil fraud case, which was tried as a bench trial. It was rumored that the defence attorney forgot to check a box that would have made it a jury trial, but in reality they simply chose not to file a motion for one.
Every time you ask an LLM something, the output is sampled randomly, and the degree of randomness is controlled by a parameter called temperature. Responses that feel good tend to come from LLMs run at moderate temperature values, ChatGPT included. This means that putting in the same prompt and getting a different response is expected, and can’t disprove the response another person got.
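To make the temperature idea concrete, here’s a toy sketch of temperature-scaled sampling. This is not any vendor’s actual implementation; the function name and logits are made up for illustration. The point is just that low temperature makes the model pick the most likely token almost every time, while high temperature spreads the picks out:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample an index from logits after temperature scaling.

    Lower temperature sharpens the distribution (near-deterministic);
    higher temperature flattens it (more random). Toy example only.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one sample from the resulting distribution
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three tokens
# Near-zero temperature: the top-scoring token wins essentially always.
cold = [sample_with_temperature(logits, 0.01, random.Random(s)) for s in range(100)]
# High temperature: the same prompt yields varied picks across tokens.
hot = [sample_with_temperature(logits, 5.0, random.Random(s)) for s in range(100)]
```

Running the same “prompt” (the same logits) a hundred times at high temperature gives a mix of different outputs, which is exactly why two people pasting the same question into ChatGPT can get different answers.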
Additionally, people commonly create their own “therapist” or “friend” from these LLMs by teaching them to respond in certain ways, such as being more personalized and encouraging rather than being correct. This can lead to a feedback loop with mentally ill people that can be quite scary, and even if a fresh ChatGPT chat doesn’t give a bad response, the model is still capable of those kinds of responses.