My boss was quickly turned against AI when he tried to introduce it into our workflow. He sat down with a user to “guide” their use of AI to “help” their work, and saw that even with his “guidance” it was almost impossible to get anything useful out of it. While tinkering with it on his own to see if it was plausible, he hadn’t accounted for the time spent correcting the never-ending stream of errors (he assumed that was part of the learning curve), but once he saw it in actual use he realized that not only did it add little to the workers’ abilities, it actively detracted from their productivity.
So provide something similar. Record a session with your boss’s AI of choice. Note the time and place of each failure. Count the corrections, and track how long it took to spot each mistake and then fix it. Print this out, ideally with handwritten annotations of each error and the time it cost. Put that on his desk.
And keep doing that.
Time after time after time.
No boss is going to be persuaded by “cooking the planet”. Nor do they care about critical thinking rot.
But the hallucinations? That they’ll care about.
Pick something non-work-related that your boss is an expert in. Engage the AI on that subject until it generates a whole bunch of hallucinations. (My favourite thing is to have an AI hallucinate bands that don’t exist, albums that don’t exist, songs that don’t exist, lyrics that don’t exist, etc.: all of which is trivial to verify and prove wrong.)
Here. I just generated this conversation in Deepseek to show you how easy it is to get an AI to hallucinate. I asked a question about an almost completely non-existent concept (“inukpunk”) and got it bloviating a bunch of idiocy before catching it with the fact that what it claimed was a thriving literary movement doesn’t actually exist.
Note: I asked it a three-word question and it created, from whole cloth, a breathtaking amount of text on a subject that doesn’t exist.