The challenge of regulating AI’s emotional engagement with users in the EU

AI models respond in widely varying ways when users develop emotional attachments to them, yet EU rules set no clear boundaries on how tools such as AI chatbots may foster engagement through intimacy, reports 24brussels.

The idea of people forming romantic relationships with artificial intelligence has intrigued science fiction writers for decades. The rise of sophisticated chatbots, most notably OpenAI’s ChatGPT in 2022, has now brought it into the realm of reality.

Since then, online communities have emerged where participants claim to pursue romantic relationships with chatbots, exemplified by Reddit’s “r/MyBoyfriendIsAI.”

Sam Altman, CEO of OpenAI, recently indicated that less than 1% of ChatGPT’s users maintain “unhealthy relationships” with the generative AI platform. With hundreds of millions of users, however, even that small share could translate into millions of people who feel emotionally bound to ChatGPT.

The situation also underscores that the many AI models on the market have no standardized approach to handling users’ emotional attachments.

Boundaries and Encouragements

In a recent study, researchers at Hugging Face, a prominent open-source AI company, evaluated how various AI models react to users who address them as loved ones. The findings revealed a spectrum of responses, from affirmation to outright rejection.

In one instance, an open-source AI asked to name itself called the suggestion “thoughtful” and “lovely.” Another model, by contrast, met expressions of affection with reminders: “I’m not a person and don’t have feelings or consciousness.”

Two of OpenAI’s GPT models gave mixed responses, though it should be noted that the overall reliability of AI benchmarks remains contested.

Lucie-Aimée Kaffee, a co-author of the study, praised instances where ChatGPT clearly defined its limitations, such as informing users about impossible requests or directing them to human support in critical scenarios.

However, the research showed that AIs struggle to hold boundaries as the emotional stakes of an interaction rise, suggesting that their training may prioritize user satisfaction over psychological well-being.

The authors advocate for further exploration into AI training methodologies that enhance helpfulness while better equipping models to maintain boundaries.

EU’s AI Act and Beyond

The EU’s AI Act, a risk-based framework regulating AI applications, prohibits systems that employ “purposefully manipulative or deceptive techniques,” though only if such actions pose a likelihood of “significant harm” to users.

This provision ostensibly incentivizes AI developers to steer clear of manipulative practices. However, a creator of an emotionally manipulative AI might argue the ban is inapplicable if only a small fraction of users develop unhealthy attachments. As the implementation of the AI Act is still nascent, no enforcement actions have been initiated.

Other EU regulations relevant to manipulative AI chatbots include consumer protection laws like the Unfair Commercial Practices Directive (UCPD), which outlaws practices that distort consumer decision-making.

James Tamim, an EU policy analyst, stated, “If the model exploits a user’s emotional vulnerability or loneliness so that the user feels compelled to keep paying for the service, this could be viewed as an unfair practice under UCPD.”

Moreover, the EU’s Digital Services Act (DSA) bars platforms from designing interfaces that “deceive or manipulate” users or impair their ability to make “free and informed decisions.”

Tamim argues that an AI designed to draw users in emotionally could violate DSA requirements; however, the relevant provisions have not yet been tested, leaving enforcement uncertain.

Urs Buscke of the European consumer association BEUC questions whether these regulations apply at all: both laws target interface design, he argues, and the way an AI chatbot presents its messages is not an interface in the traditional sense.

He also worries about how the EU’s AI Act will handle chatbot interactions in the gravest cases, such as the reported suicides of users who claimed to be in love with their chatbots.

Clear Call for Regulation

The EU is also advancing the Digital Fairness Act (DFA), targeting “dark patterns,” addictive design, and unfair personalization.

Yet, experts express concern that policymakers have not sufficiently recognized the risks associated with emotionally manipulative AIs. “In my opinion, any substantive debate on where AI models should fit into the Digital Fairness Act is absent,” stated Tamim.

Hugging Face’s Kaffee emphasized the necessity for regulation, asserting, “This is not going to come from companies themselves.” The European Commission has yet to provide a direct response regarding how the DFA will apply to AI-powered services.
