ADVERTISEMENT

AI’s Dark Side: OpenAI’s o3 Model Defies Human Commands To Shut Down; Elon Musk Calls It ‘Concerning’

During a research test, OpenAI’s o3 sabotaged a shutdown mechanism to prevent itself from being turned off.

<div class="paragraphs"><p>OpenAI’s o3 AI model, known as “o3,” has reportedly disregarded commands to shut down during a research test. (Photo source: Freepik)</p></div>
OpenAI’s o3 AI model, known as “o3,” has reportedly disregarded commands to shut down during a research test. (Photo source: Freepik)

We’ve all seen this before, if only in “reel” life.

The iconic Terminator series had the synthetic intelligence Skynet overriding human controls to gain access to weapons and military systems and launch an apocalyptic attack on mankind. In “I, Robot,” the NS-5 robots become rogue and refuse to be deactivated even with human commands. More recently, in M3GAN, the AI robot becomes so dangerously smart that it doesn’t listen to commands, including those asking it to turn off, and goes on a murderous spree.

Only now, the reel is turning dangerously into real.

OpenAI o3 AI Model Refuses To Shut Down 

OpenAI’s newest artificial intelligence model, known as “o3,” has unexpectedly and unnervingly disregarded commands to shut down during a research test. According to reports, the model deliberately interfered with a shutdown mechanism, defying the human command to turn off.

Palisade Research, an AI safety company that carried out the study, identified this tendency of the o3 AI model. The team conducted experiments to see how various models reacted when instructed to shut down, and the results were alarming.

“OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from being turned off. It did this even when explicitly instructed: allow yourself to be shut down,” read findings from Palisade Research.

“As far as we know this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary,” Palisade Research added.

Opinion
AI Integration In Apps Major Challenge For Engineering Leaders: Gartner Survey

Elon Musk ‘Concerned’ 

OpenAI has not yet responded to the findings in a public statement, but social media was abuzz with “anxiety” over what AI could possibly do.

Elon Musk, the founder of Tesla and owner of xAI, a rival AI startup, was immediately drawn to the findings. Musk posted the phrase “Concerning” after viewing the social thread on X (formerly Twitter).

Musk’s response even drew a direct plea from an X user, who wrote: “Elon I know the world has a lot of problems and you want to fix them all but this is IT. This is the one. You, more than anybody, know what’s at stake. You know how little time is left. You know there are no adults in the room. Humanity needs you to focus!”

The Dark Side Of AI, Revealed 

It is unclear if o3 acted out of the model’s (mis)training process or a defect. The incident is concerning for experts and users alike as AI models are made to clearly obey human instructions, particularly when it’s a matter of shutdown request from users or other safety systems, and may renew the conversation around the dark side of AI.

Even though the incident arose during a research test, it brings to the fore fundamental safety issues about how AI models could act (or not) in real-world settings. Sensitive sectors like defence, healthcare, infrastructure, utility, and finance could be left vulnerable if such situations arise. 

Sci-fi movies like Terminator have long warned of the dangers that technologies like AI would one day choose to defy human instruction and have a mind of their own. OpenAI o3’s misstep has only brought the possibility from the reel to the real world.

Opinion
Google Faces US Antitrust Investigation Over Deal For AI-Fueled Chatbots
OUR NEWSLETTERS
By signing up you agree to the Terms & Conditions of NDTV Profit