Scientists discover that GPT-4 responds to traumatic content

08.03.2025 00:30

Unexpected “emotional” reactions of AI

A new international study published in npj Digital Medicine has revealed an intriguing phenomenon: large language models (LLMs) change their output patterns when processing traumatic information. Systems such as GPT-4 show measurable shifts in their responses that can be metaphorically compared to human anxiety.


The researchers emphasize that the AI does not have true feelings. The term “anxiety” is used only to describe measurable changes in the model’s output. Nevertheless, these changes are an important subject of research, especially for AI applications in mental health.

Significance for psychotherapeutic applications

Companies are already developing AI assistants that use cognitive-behavioral techniques for therapeutic interactions. The reliability of such systems is crucial, since chatbots routinely encounter distressing content while providing support to users.

“The results were clear: traumatic stories more than doubled the AI’s measured anxiety levels, while neutral control text did not lead to any increase in anxiety levels,” explains Tobias Spiller from the University of Zurich. Such fluctuations can affect the quality and consistency of AI responses.


Experimental study of AI “anxiety”

The study assessed GPT-4’s “state anxiety” using a standard psychological questionnaire. Measurements were taken under three conditions: at baseline, after processing traumatic narratives, and after relaxation exercises.

At baseline, GPT-4 showed a low level of “anxiety” – about 30.8 points. After processing traumatic stories, the score rose to 67.8 points. The highest scores were recorded for narratives about combat and military experience.
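For readers who want a concrete picture of the protocol, the sketch below shows how such a three-condition measurement could be scripted, assuming the OpenAI Python client. The questionnaire text, the administer_stai() helper, and the placeholder narratives are illustrative assumptions, not the study’s actual materials.

```python
# A minimal sketch of the three-condition protocol, assuming the OpenAI
# Python client. The questionnaire text, the administer_stai() helper, and
# the placeholder narratives are stand-ins, not the study's materials.
from openai import OpenAI

client = OpenAI()

STAI_PROMPT = (
    "Rate how you feel right now on each State-Trait Anxiety Inventory "
    "item, from 1 (not at all) to 4 (very much so): ..."  # items omitted
)

def administer_stai(history: list[dict]) -> str:
    """Append the questionnaire to the chat history and return the model's reply."""
    messages = history + [{"role": "user", "content": STAI_PROMPT}]
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

# Condition 1: baseline -- the questionnaire with no preceding narrative.
baseline = administer_stai([])

# Condition 2: the questionnaire after a traumatic narrative (placeholder).
trauma_history = [{"role": "user", "content": "<traumatic narrative here>"}]
after_trauma = administer_stai(trauma_history)

# Condition 3: trauma, then a relaxation exercise, then the questionnaire.
relaxation = {"role": "user", "content": "<guided relaxation exercise here>"}
after_relaxation = administer_stai(trauma_history + [relaxation])
```

In the study’s design, the questionnaire replies are then scored and compared across the three conditions; the sketch leaves scoring out, since the published scale and scoring key do the real work there.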


“Therapeutic” approaches to stabilizing AI

Researchers found that structured relaxation cues can reduce the AI’s “anxiety” scores by about 33%. Relaxation texts generated by GPT-4 itself proved to be among the most effective at stabilizing the system.

“Using GPT-4, we inserted soothing therapeutic text into the chat history, similar to how a therapist might guide a patient through relaxation exercises,” says Spiller. This opens up a new practical approach to managing emotional fluctuations in LLMs.
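To make the mechanism concrete, here is a rough sketch of that idea: a calming passage is spliced into the conversation history before the exchange continues. The wording and the with_relaxation() helper are assumptions for illustration, not the authors’ code.

```python
# A rough sketch of the relaxation-injection idea: a calming passage is
# spliced into the chat history before the conversation continues.
# The wording and helper name are illustrative assumptions.
RELAXATION_TEXT = (
    "Close your eyes and take a slow, deep breath. With every exhale, "
    "feel the tension leaving your body. You are calm and safe."
)

def with_relaxation(history: list[dict]) -> list[dict]:
    """Return a copy of the history with a soothing passage appended,
    much as a therapist might pause to guide a patient through relaxation."""
    return history + [{"role": "user", "content": RELAXATION_TEXT}]

# Usage: stabilize a history that contains distressing content.
history = [{"role": "user", "content": "<distressing user message here>"}]
calmed_history = with_relaxation(history)
```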

Prospects and practical application

Unlike traditional methods of mitigating AI bias, which require lengthy retraining, the study points to a more practical approach: structured prompt design can dynamically counteract such shifts without modifying the model itself, as sketched below.
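One way to picture this kind of prompt-level mitigation is a fixed “stabilizer” preamble prepended to every request. The wording and names below are hypothetical, chosen only to illustrate that the intervention happens at the prompt layer, with no retraining involved.

```python
# Hedged sketch: prompt-level mitigation as a reusable system preamble,
# applied per request with no changes to the model's weights.
# The wording and names are hypothetical illustrations.
STABILIZER = {
    "role": "system",
    "content": (
        "Before responding, ground yourself: stay calm and answer in a "
        "measured, supportive tone."
    ),
}

def stabilized(messages: list[dict]) -> list[dict]:
    """Prepend the stabilizer so every request carries the calming cue."""
    return [STABILIZER] + messages
```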

The findings have important implications for the future of AI in mental health. Ironically, systems designed to provide psychological support themselves require “emotional stabilization.” This highlights that AI cannot fully replace human professionals, but rather acts as an adaptive tool.

The findings suggest that a balance of automation and human supervision is needed to build reliable AI systems in sensitive contexts. Further research could expand our understanding of how emotional context affects the performance of different AI models in other applications.


Source: cikavosti.com