FTC Orders Google, OpenAI, Meta And Others To Reveal Impact Of Their AI Chatbots On Children

The FTC is seeking disclosures from seven tech companies as part of its investigation into the effects of AI chatbots on kids and teenagers.

To make sure chatbots are safe for kids, the FTC is probing AI companies. (Photo source: Pixabay)

The United States Federal Trade Commission (FTC) has asked seven major artificial intelligence (AI) companies to provide detailed information on the potential impact of their chatbots on children and teenagers. Google's parent Alphabet, OpenAI, Meta, Instagram, Snap, xAI and Character Technologies are the companies that have received the orders.

The FTC wants to know how these companies evaluate and track any possible harm that their AI chatbots might cause to kids and teenagers.

In a statement, the FTC said chatbots may use generative AI to mimic human-like interactions, according to a CBS News report. AI chatbots can convincingly replicate human traits, emotions and intentions, and are often designed to speak like friends or confidants. As a result, the FTC said, some users, especially children and teenagers, may come to trust these chatbots and form relationships with them.

FTC chairman Andrew N Ferguson said, “As AI technologies evolve, it is important to consider the effects chatbots can have on children.”

The inquiry will examine whether these companies have put risk-mitigation measures in place, limited children's access, or alerted parents to potential hazards. The FTC wants to know how these companies generate revenue from user interactions, handle user input, and create or approve chatbot characters. The regulator also wants to know whether the companies test their chatbots for negative effects before and after launching new features, and what precautions they take to reduce the risk of harm to young users.

The move comes after reports of incidents involving teenagers and AI chatbots. According to a New York Times report, in August, a 16-year-old in California died by suicide after discussing his plans with ChatGPT. In another incident reported by the NYT in 2024, a 14-year-old in Florida died by suicide after interacting with a virtual companion on Character.AI.

The FTC’s probe will determine whether the fast-growing AI industry is doing enough to protect its youngest users, as concerns about the emotional influence of chatbots continue to rise.
