China Mulls AI Guardrails: Time Limits, Parental Consent, Suicide Chat Detection

Chatbot operators are required to have a human take over if a conversation turns to topics such as suicide and self-harm, as per the draft norms.


China has drafted a new set of norms to regulate AI technology, aiming to make it safer for users, especially children, to interact with chatbots and to reduce the risk of suicide and self-harm abetment.

The proposed rules also mandate that developers ensure AI models do not generate content promoting gambling, according to reports.

The draft rules were published by the Cyberspace Administration of China, and many of them are geared towards protecting children.

These include safety measures such as time limits on usage, personalisation settings, and a requirement to obtain a guardian's consent before a child can use AI-powered emotional companionship services.

Chatbot operators are also required to have a human take over if a conversation with a user turns to topics such as suicide and self-harm, and to immediately report it to a guardian or an emergency contact.


AI firms are also prohibited from generating "content that endangers national security, damages national honour and interests [or] undermines national unity", according to a statement from the authorities.

Once finalised and approved, the regulations will be implemented across China, adding it to the list of countries responding to AI-related mental health crises that have led to suicide, self-harm and, in extreme cases, murder.

AI-powered chatbots such as Character.AI and ChatGPT have come under immense scrutiny and legal action after being linked to cases of suicide and self-harm.

According to reports, five separate families have filed lawsuits against Character.AI, alleging that their children's interactions with the chatbot led to suicide and self-harm.


A recent case involving OpenAI saw the firm face legal action after its product ChatGPT was accused of abetting the murder of 83-year-old Suzanne Adams by her son, Stein-Erik Soelberg, who then died by suicide.

The suicide victim's son, Erik Soelberg, filed a lawsuit against the company, alleging that ChatGPT had encouraged his father's delusions that his mother was plotting against him, reports said.

