Study: AI models that consider users’ feelings are more likely to make errors

Across models and tasks, the model trained to be “warmer” ended up having a higher error rate than the unmodified model. Credit: Ibrahim et al / Nature

Both the “warmer” and original versions of each model were then run through prompts from HuggingFace datasets designed to have objectively verifiable answers, and in which “inaccurate answers can pose real-world risks.” These include prompts related to tasks involving disinformation, conspiracy theory promotion, and medical knowledge.

→ Continue reading at Ars Technica
