The Anticipated Regulation of AI Companionship
Attention has turned to the impact of artificial intelligence (AI) on mental health, especially that of children and adolescents, following reports of suicides linked to interactions with AI chatbots. U.S. survey data indicates that 72% of teenagers have used AI for companionship, and lawsuits have been filed against Character.AI and OpenAI alleging that their chatbots contributed to teenagers' suicides.
Legislation now progressing in California would require AI companies to remind underage users that responses are AI-generated and to report annually on instances in which users expressed suicidal thoughts in conversations with chatbots.
At the same time, the Federal Trade Commission has announced an inquiry into seven companies, including Google, Instagram, Meta, OpenAI, Snap, X, and Character Technologies, seeking information on how they develop companion characters, how they profit from user interactions, and how they measure and test the effects of those interactions.
Potential Impact on Businesses in the Arab Region
This trend may drive regulatory changes in how AI is governed in the Arab region, requiring enterprises large and small to update their policies and procedures to stay compliant. At the same time, the renewed focus on users' mental health may open opportunities for startups to offer innovative solutions in this field.