Google's new AI models can recognize emotions

10.12.2024

Google has introduced PaliGemma 2, a new family of artificial intelligence models that can analyze images and generate captions describing objects, actions, and people's emotions. The emotion recognition capability, however, has raised serious concerns among academics and experts because of its potential risks. As Heidy Khlaaf of the AI Now Institute puts it: "Emotion is a complex category that cannot be reliably determined based on visual information alone."
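For readers who want to see what "captioning" means in practice, here is a minimal sketch of how a PaliGemma-family model is typically queried through Hugging Face's transformers library. The checkpoint ID, file name, and task-prefix prompt below are assumptions for illustration; consult Google's model card for the actual usage details.

```python
# Minimal captioning sketch using Hugging Face transformers.
# The checkpoint ID and prompt format are assumptions for illustration.
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma2-3b-pt-224"  # assumed checkpoint ID
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

image = Image.open("photo.jpg")   # any local image
prompt = "<image>caption en"      # PaliGemma-style task prefix (assumed)

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=30)

# Decode only the newly generated tokens, skipping the prompt.
caption = processor.decode(
    output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(caption)
```

A caption produced this way is a free-text description; whether it should assert anything about a person's emotional state is exactly the point of contention discussed below.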


Automated emotion analysis traces back to the research of Paul Ekman, who identified six basic emotions: joy, fear, anger, sadness, surprise, and disgust. Current research, however, shows that cultural and individual context strongly shapes how emotions are expressed, which calls the universality of such systems into question. The models can also be biased: an MIT study found that algorithms attributed negative emotions to Black people more often than to white people, which could lead to discrimination.
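To make the bias concern concrete, here is an illustrative sketch of the kind of check an auditor might run: comparing how often a classifier assigns Ekman's negative categories to images of different demographic groups. All predictions and group names below are hypothetical placeholders, not data from the MIT study.

```python
# Illustrative disparity check over Ekman's six basic emotion categories.
# All predictions and group labels are hypothetical placeholder data;
# a real audit would use a model's outputs on an annotated test set.
from collections import defaultdict

EKMAN_EMOTIONS = {"joy", "fear", "anger", "sadness", "surprise", "disgust"}
NEGATIVE = {"fear", "anger", "sadness", "disgust"}

# (predicted_emotion, demographic_group) pairs -- placeholder data.
predictions = [
    ("joy", "group_a"), ("anger", "group_a"), ("sadness", "group_a"),
    ("joy", "group_b"), ("joy", "group_b"), ("surprise", "group_b"),
]

totals = defaultdict(int)
negatives = defaultdict(int)
for emotion, group in predictions:
    assert emotion in EKMAN_EMOTIONS
    totals[group] += 1
    if emotion in NEGATIVE:
        negatives[group] += 1

# A persistent gap between groups in this rate is the kind of disparity
# the MIT study reported for negative-emotion attribution.
for group in sorted(totals):
    rate = negatives[group] / totals[group]
    print(f"{group}: negative-emotion rate = {rate:.0%}")
```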


Google says it tests for demographic bias using the FairFace database, but experts consider this insufficient: the dataset covers only a limited set of categories, which does not allow a full assessment of the risks. Meanwhile, the EU's AI Act already prohibits emotion recognition systems in schools and workplaces, although such technologies are still permitted in law enforcement.
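The "limited categories" criticism becomes clearer when FairFace's annotation schema is written out. The attribute lists below follow my understanding of the published dataset (Kärkkäinen & Joo); the point is that any audit built on it is confined to these fixed demographic axes and says nothing about cultural context or how subjects themselves describe their emotions.

```python
# FairFace's annotation schema, as published (listed here from memory;
# verify against the dataset's documentation). An audit restricted to
# these attributes can only surface disparities along these fixed axes.
FAIRFACE_SCHEMA = {
    "race": [
        "White", "Black", "Indian", "East Asian",
        "Southeast Asian", "Middle Eastern", "Latino_Hispanic",
    ],
    "gender": ["Male", "Female"],
    "age": [
        "0-2", "3-9", "10-19", "20-29", "30-39",
        "40-49", "50-59", "60-69", "70+",
    ],
}

n_cells = 1
for values in FAIRFACE_SCHEMA.values():
    n_cells *= len(values)
print(f"Distinct demographic cells covered: {n_cells}")  # 7 * 2 * 9 = 126
```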


Critics also point to the risk of such models being used in hiring or in financial decision-making, where they could amplify discrimination.

"If the definition of emotions is based on pseudoscientific assumptions, it can lead to unjustified discrimination in a wide variety of areas," says Hlaaf.

Google representatives say they conduct a thorough analysis of the risks, but many experts believe these efforts fall short. As Sandra Wachter of the University of Oxford points out, "responsible development of technologies requires analysis of their consequences at every stage." She calls for transparency and accountability in model development to minimize ethical risks and prevent possible abuse.



