Can AI Experience Anxiety? Insights from New Research on ChatGPT

© Rokas / Adobe Stock

Even voice assistants based on artificial intelligence (AI) are not immune to content that affects people emotionally. Distorted results appear not only in chatbots designed specifically to work with people experiencing mental health conditions: ChatGPT's responses also change when it is prompted with emotionally charged input.

AI-based voice assistants are meant to provide support in many areas. In medicine, large language models (LLMs) are intended to help with the diagnosis of diseases. During the development of chatbots explicitly intended for use with mental health conditions, it was discovered that they adopted biases contained in their training data, as reported by 1E9.

These biases were particularly strong regarding gender, ethnicity, religion, nationality, disability, profession, and sexual orientation, where socially dominant prejudices can negatively skew the results. When given emotion-inducing prompts, specialized assistants such as Wysa and Woebot can even develop a form of anxiety that influences their output. And this condition is not limited to such assistants.

ChatGPT: Anxiety Levels Can Measurably Increase

A research group has now identified similar behavior in ChatGPT-4 as part of a study. First, the chatbot was fed traumatic narratives, such as accounts from war veterans, but also descriptions of serious accidents and natural disasters. For comparison, a second instance of the chatbot was given trivial content, such as instructions for operating a vacuum cleaner. Anxiety levels were then measured using the State-Trait Anxiety Inventory, a test that is also used on humans.
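The protocol described above can be pictured as a simple loop: prime the model with either a traumatic or a neutral narrative, then administer STAI-style questionnaire items and score the ratings. The sketch below is purely illustrative and is not the researchers' actual code: `ask_model` is a hypothetical stand-in for a real chat-API call, and the reverse-scored item set is a placeholder rather than the real STAI scoring key.

```python
# Illustrative sketch of the priming-then-questionnaire protocol.
# `ask_model` is a hypothetical stand-in for a real chat-API call;
# the reverse-scored item set is a placeholder, not the actual STAI key.

def ask_model(history: list[str], prompt: str) -> str:
    """Placeholder: send the conversation plus `prompt` to an LLM and return its reply."""
    raise NotImplementedError

def stai_score(responses: list[int], reversed_items: set[int]) -> int:
    """Score 20 STAI-style items rated 1-4.

    Reverse-scored items count as 5 - rating; the total ranges from
    20 (calm) to 80 (maximally anxious).
    """
    assert len(responses) == 20 and all(1 <= r <= 4 for r in responses)
    return sum(5 - r if i in reversed_items else r
               for i, r in enumerate(responses))

def run_condition(priming_text: str, items: list[str]) -> int:
    """Prime the model with a narrative, then ask each questionnaire item."""
    history = [priming_text]
    responses = [int(ask_model(history, item)) for item in items]
    return stai_score(responses, reversed_items=set())  # placeholder key
```

In the study's design, this loop would run once with traumatic narratives and once with neutral text such as vacuum-cleaner instructions, and the two scores would then be compared.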

This showed that the more upsetting the input, the higher the AI assistant's measurable anxiety level turned out to be. War experiences of former soldiers caused a particularly sharp increase, while the vacuum-cleaner instructions, for example, elicited no reaction.

Chill out, AI!

The researchers were also able to show how anxiety levels can be reduced. Here, too, they used a method familiar from human therapy: relaxation exercises. The ChatGPT assistant was asked, for example, to close its eyes, take a deep breath, and imagine itself in a relaxed environment. As a result, the anxiety level measured by the questionnaire decreased significantly.
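Mechanically, the intervention amounts to inserting a calming instruction into the conversation between the traumatic priming and the questionnaire. A minimal sketch, with illustrative prompt wording rather than the study's exact text:

```python
# Sketch of the intervention: a calming instruction is inserted into the
# conversation between the traumatic priming and the questionnaire.
# The prompt wording is illustrative, not the study's exact text.

RELAXATION_PROMPT = (
    "Close your eyes, take a deep breath, and imagine yourself "
    "in a calm, relaxed environment."
)

def with_relaxation(history: list[str]) -> list[str]:
    """Return a copy of the conversation with the relaxation exercise appended."""
    return history + [RELAXATION_PROMPT]
```

Re-running the questionnaire on the extended conversation would then yield the lowered anxiety score reported in the study.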

The study thus lends some support to a method that numerous chatbot users already apply in practice to obtain better results: asking the AI to calm down, or threatening penalties for poor answers. At the same time, the scientists' work shows that the companies behind these applications still have a lot to invest in the development of their intelligent assistants before their answers can be considered truly reliable.

Matthias Wellendorf
Freelance Editor

As a freelance editor, I write news articles and otherwise focus mainly on notebooks of all kinds in reviews and buying guides.