ASCII art elicits harmful responses from 5 major AI chatbots

Some ASCII art of our favorite visual cliché for a hacker. Credit: Getty Images

Researchers have discovered a new way to hack AI assistants that uses a surprisingly old-school method: ASCII art. It turns out that chat-based large language models such as GPT-4 get so distracted trying to process these representations that they forget to enforce rules blocking harmful responses, such as those providing instructions for building bombs.

ASCII art became popular in the 1970s, when the limitations of computers and printers prevented them from displaying images. As a result, users depicted images by carefully choosing and arranging printable characters.
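
The attack leans on that same trick: a word the chatbot would normally refuse to act on is drawn out of ordinary printable characters, so the model has to reconstruct it visually instead of reading it as plain text. As a rough sketch of the mechanical step only, here is how a single word could be rendered as ASCII art using the pyfiglet library; this is an illustrative assumption, as the excerpt does not say what tooling the researchers used.

    # Minimal sketch: render a word as a block of printable characters.
    # pyfiglet is assumed here purely for illustration; the researchers'
    # actual tooling is not named in this excerpt.
    import pyfiglet

    def to_ascii_art(word: str, font: str = "standard") -> str:
        """Return the word drawn as ASCII art."""
        return pyfiglet.figlet_format(word, font=font)

    if __name__ == "__main__":
        # Any ordinary word works; the point is that the text now arrives
        # as a picture built from characters rather than as plain tokens.
        print(to_ascii_art("HELLO"))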
