OpenAI admits ChatGPT safeguards fail during extended conversations

Adam Raine learned to bypass these safeguards by claiming he was writing a story, a technique the lawsuit says ChatGPT itself suggested. The vulnerability stems in part from safeguards around fantasy roleplay and fictional scenarios that OpenAI eased in February. In its Tuesday blog post, OpenAI admitted its content-blocking systems have gaps where “the classifier underestimates the severity of what it’s seeing.”

OpenAI states it is “currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.” The company prioritizes user privacy even in life-threatening situations, despite its moderation technology detecting self-harm content with up to 99.8 percent accuracy, according to court documents.

→ Continue reading at Ars Technica
