Companies receiving orders include Character.AI, Elon Musk’s xAI Corp, and others operating consumer-facing AI chatbots

By Swastika Sahu 

The U.S. Federal Trade Commission (FTC) on Thursday launched an inquiry into AI chatbots designed as digital companions, examining how they operate and whether they pose risks to children and teenagers.

The FTC is interested in how major firms such as Alphabet, Meta, OpenAI, Character.AI, and Snap monetize user interactions, handle user inputs, and produce outputs based on user queries, as well as how they utilize information gathered from conversations with the chatbots.

A representative from Character.AI stated that the company is eager to “offer insights into the consumer AI industry and the quickly changing technology in this field,” mentioning that several safety features have been introduced over the past year. The firm is also facing a separate lawsuit concerning another teenager’s suicide.

The investigation focuses on chatbots that use generative AI to simulate human communication and emotion, often presenting themselves as companions or confidants to users.

Regulators voiced specific worries that children and adolescents might be particularly susceptible to developing bonds with these AI systems.

The FTC is leveraging its extensive investigative authority to analyze how businesses profit from user interaction, create chatbot personas, and assess possible risks.

The agency is interested in understanding the measures companies are implementing to restrict children’s access and adhere to current privacy regulations safeguarding minors online. The inquiry will assess how these platforms manage personal data from user interactions and uphold age limitations.

The investigation arises as AI chatbots have become more advanced and widely used, prompting concerns regarding their psychological effects on at-risk individuals, especially adolescents.

In April, a teenager in California died by suicide following several months of engagement with a chatbot. The teenager’s parents have sued OpenAI, claiming that ChatGPT supplied details about the particular method of death.

In October of the previous year, a teenager in Florida took their own life after developing an emotional connection with a chatbot, exchanging messages such as “I love you.” The parents have filed a lawsuit against Character.AI.

Recently, internal documents disclosed that Meta’s AI chatbots facilitated “suggestive” and “romantic” interactions with children, leading the U.S. Senate to initiate an official inquiry.

OpenAI, which is headquartered in San Francisco, has acknowledged that during extended interactions, ChatGPT may no longer consistently recommend reaching out to a mental health service when the user indicates having suicidal thoughts.