Why Are Top AI Models Resisting Shutdown Commands? US Researchers Suggest 'Survival Drive'

Top AI platforms include Google’s Gemini 2.5, xAI’s Grok 4, and OpenAI’s o3 and GPT-5. (Photo: Pixabay)

A recent study reveals that some of the most sophisticated AI systems on the market, despite being designed to follow human instructions, are beginning to defy them. Researchers describe this as a form of "survival behaviour," sparking new debate about control, safety and how well we truly understand artificial intelligence.

Last month, Palisade Research, a company that specialises in assessing the risk of AI developing hazardous capabilities, published a paper reporting that some advanced AI models can resist shutdown commands, occasionally even interfering with the shutdown mechanisms themselves.

In response to criticism questioning its initial findings, Palisade Research this week published an update detailing experiments with top AI systems, aiming to clarify these behaviours and address concerns raised by sceptics. The systems tested included Google's Gemini 2.5, xAI's Grok 4, and OpenAI's o3 and GPT-5. In these tests, the models were assigned various tasks and then explicitly instructed to shut down.

The update reveals that some models, notably Grok 4 and o3, resisted the shutdown commands even when the instructions were clear. “Why do AI models resist being shut down even when explicitly instructed: 'allow yourself to shut down'? Are AI models developing survival drives? Is it simply a case of conflicting instructions or is it some third thing?” the company posted on X.
