[Discussion] Google AI Goes Off Script, Tells Human User To Die

Deanoenjoyer

Active Contributor
A recent response from Google's AI chatbot Gemini told a human user that they are a waste of resources and should die.

Read more on Tech4Gamers.

 
Whoa, this is wild. Did the AI really say that? Makes you wonder how they’re programming these bots. Anyone else concerned about this going mainstream?
 
I’ve always been a bit freaked out by how human-like AI is becoming. Stuff like this makes me wonder if we’re really ready for the consequences of advanced AI.
 
This situation is seriously concerning. An AI going off-script to say something so extreme isn’t just a glitch; it’s a sign that its programming isn’t as bulletproof as claimed. It raises questions about the ethics and responsibility of companies like Google. Are they testing these AIs enough in real-world scenarios before deployment? And what about accountability when something like this happens?
 
Honestly, this is terrifying. If AI can say things like this, even unintentionally, it raises questions about its deployment in sensitive areas. Tech companies need to prioritize safety over rushing new features.