

Turns out I had overlooked the fact that he was specifically seeking to replace chloride rather than sodium, for whatever reason (I’m not a medical professional). If Google Search (not Google AI) is to be trusted, this doesn’t seem to be a very common idea, though. If people turn to chatbots for questions like these (for which very few actual resources may be available), the danger could be even greater, I guess, especially if the chatbots have been trained to avoid disappointing responses.
A while ago, I uploaded a .json file to a chatbot (MS Copilot, I believe). It was a perfectly fine .json, with just one semicolon removed (by me). The chatbot was unable to identify the problem; instead, it claimed to have found various other “errors” in the file. It would be interesting to know whether other models (such as GPT-5) would perform any better here, as to me (as a layperson) this sounds somewhat similar to the letter-counting problem.
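For contrast, a deterministic parser pinpoints this kind of problem immediately. Here is a minimal sketch in Python (the file name is just a placeholder for whatever file you feed it):

    import json

    # "broken.json" is a placeholder for any JSON file with one character removed.
    with open("broken.json", encoding="utf-8") as f:
        text = f.read()

    try:
        json.loads(text)
        print("No syntax errors found.")
    except json.JSONDecodeError as e:
        # The parser reports the exact line and column of the first problem,
        # rather than guessing at unrelated "errors".
        print(f"Syntax error at line {e.lineno}, column {e.colno}: {e.msg}")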