cross-posted from: https://quokk.au/c/programmer_humor/p/475451/don-t-do-ai-and-code-kids
Original Reddit thread: https://reddit.com/r/google_antigravity/comments/1p82or6/google_antigravity_just_deleted_the_contents_of/
Why would you give AI access to the whole drive? Why would you allow AI to run destructive commands on its own without reviewing them?
The guy was asking for it. I really enjoy seeing these vibe coders imagine they are software engineers and fail miserably with their drives and databases wiped.
If he knew what he was doing, would he need to be vibe coding? The target audience is exactly the people most susceptible to collateral damage.
I’ll probably get eaten here, but here goes: I do use LLMs when coding. But they should NEVER be used in unknown waters. To quickly get the 50 lines of boilerplate and fill out the important 12 - sure. To see how a nested something can be written in a syntax I’ve forgotten - yes. To get an example so I know where to start searching the documentation - OK. But “I asked it to do X, I don’t understand what it spewed out, let’s roll”? Hell no, that’s a ticking bomb with a very short fuse. Unfortunately, marketing has pushed LLMs as things you can trust. I already feel like I’m being treated as a zealot dev, afraid for his job, when I warn people around me to trust an LLM’s output no more than a search engine result.
I have a couple dev friends who were told by management that they need to be using AI, and they hate it.
I know not everyone is in a position where they can just ignore management, and maybe I’ve just been in more blue collar jobs where ignoring management is normalized. But unless they’re literally looking over your shoulder, what they don’t know won’t hurt them. It’s not like AI is more efficient once you count the extra debug time.
Ask the AI to forge a log of them using the AI, so they can do the work the proper way and still have a log “proving” they used it?
Why would you ask AI to do any operation on even a single file, let alone an entire local drive, that wasn’t backed up? Then again, I’ve been using and misusing computers long enough to have blown up my own shit many times in many stupid ways, so I can’t honestly say that 20 years ago this wouldn’t have been me lol.
I don’t know how this one works but many of them can get access through the IDE because the IDE has full disk access, due to being an IDE.
LLMs sometimes reach their tools through an MCP server, and the tools are usually written to require consent before each step - but that should be an always kind of thing, not a usually kind of thing.
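The consent-before-each-step idea is simple to gate in code. A minimal sketch, assuming a toy agent loop - the names (`run_tool`, `delete_path`, `DESTRUCTIVE`) are all hypothetical, not any real MCP API:

```python
# Hypothetical consent gate around an agent's tool calls: destructive
# tools never run unless a human explicitly says yes.
import shutil

def delete_path(path: str) -> str:
    """Irreversibly deletes a directory tree - exactly what needs gating."""
    shutil.rmtree(path)
    return f"deleted {path}"

TOOLS = {"delete_path": delete_path}
DESTRUCTIVE = {"delete_path"}  # tools that must never run un-reviewed

def run_tool(name: str, arg: str, ask=input) -> str:
    """Dispatch a tool call, demanding consent for destructive ones."""
    if name in DESTRUCTIVE:
        answer = ask(f"Agent wants to run {name}({arg!r}). Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "refused: no consent"
    return TOOLS[name](arg)
```

The key design point is that the default answer is "no": anything short of an explicit `y` refuses the call, so a runaway loop can't rubber-stamp its own deletions.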
I hate these stupid things, but I am forced to use them. There should be a suggested-patch workflow instead of letting them run roughshod over your computer, but Google and Microsoft are pursuing “YOLO mode” for everything anyway, even though it’s alarmingly obvious how terrible an idea that is.
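A suggested-patch workflow doesn’t need anything fancy: have the model propose new file contents, render them as a diff, and touch nothing on disk until a human applies it. A sketch, assuming the model hands back the full proposed text (`suggest_patch` and the file name are illustrative):

```python
# Sketch of "suggest a patch, don't write files": the agent's proposed
# change becomes a reviewable unified diff instead of a disk write.
import difflib

def suggest_patch(path: str, old: str, new: str) -> str:
    """Render a proposed change as a unified diff for human review."""
    return "".join(difflib.unified_diff(
        old.splitlines(keepends=True),
        new.splitlines(keepends=True),
        fromfile=f"a/{path}",
        tofile=f"b/{path}",
    ))

# The human reads this diff and decides; nothing has been written yet.
patch = suggest_patch("app.py", "print('hi')\n", "print('hello')\n")
print(patch)
```

This is the same shape as a pull request: the dangerous step (writing) is separated from the suggestion step, and the diff itself is the consent artifact.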
We have containers and VMs; these fucking things should be isolated, and it should be impossible for them to alter files without consent.
That right there is the core of the problem.