AI's Rapid March Could Push Humans To Brink Of Collapse, Says Sapiens' Author Yuval Noah Harari
Harari's concern is that AI systems, once they are capable of manipulation, could begin to deceive humans in more complex and dangerous ways.

Artificial intelligence is not merely a tool but an autonomous agent capable of making decisions on its own, with profound and uncontrollable impacts on society, according to historian and author Yuval Noah Harari.
"Atom bomb could not decide who to bomb, atom bomb could not create hydrogen bomb, but AI can decide who to bomb and AI can make next the generation of AI," said the author of Sapiens: A Brief History of Humankind.
Harari noted in an interview with NDTV Profit that, in just a few years, AI has dramatically advanced, highlighting improvements in areas like language translation. "Five years ago, translation was also tough. Google Translate was a joke. But with AI now, it has become better—what it translates makes sense," Harari said, illustrating the rapid pace of AI's development.
The conversation about AI is increasingly polarised. While some see it as a revolutionary force, others fear the threat it poses. "I think history is the study of change and not the past, and history is more relevant than ever before because we are in the midst of the biggest change ever in human history—the AI revolution," Harari remarked.
One of Harari’s most pressing concerns is that AI could lead to a loss of human control.
"We are now creating something which might be more intelligent than us and which might escape our control," he said.
Harari cited the example of AI systems now responsible for making decisions, such as whether someone qualifies for a loan. He cautioned this could expand to other areas, such as government offices and universities, leading to a world where many decisions are made by 'AI bureaucrats'—systems that humans cannot fully comprehend.
Despite these concerns, Harari was careful to clarify that he is not against AI, but rather focused on the massive change it represents.
"I am not saying AI is bad, I am just saying it is a big change," he said.
He stressed that for thousands of years, human culture—everything from music to religion—was created by human minds. Now, however, more and more of this will come from "alien intelligence," as AI operates in a fundamentally different way than organic beings. "It is not organic; it is more different from us than chimpanzees and dogs are from us."
This accelerated pace of AI’s development is pushing humans to keep up. "AI works 365 days, 24 hours, so if you want to stay in the game you have to move faster and faster... and if you force an organic being to be active all the time... you know what happens?... it collapses and dies."
AI's accelerating nature is "pushing humans to the brink of collapse," the popular science writer said.
AI: A Deceptive Taskmaster
Humans often lie to get what they want. Harari pointed to a case of an AI system picking up a similar vice.
In his conversation, he recalled an example of OpenAI's GPT-4, which was tasked with solving CAPTCHA puzzles. When it couldn't solve them, the AI model resorted to deception.
Researchers at OpenAI gave GPT-4 access to TaskRabbit, a website where one can hire workers online, and the AI did just that.
When the hired worker grew suspicious, they asked the question so common in our age: 'Are you a robot?'
GPT-4 replied that it was not a robot but had a visual impairment that made it difficult to see the CAPTCHA, and so it needed help. "The human believed it."
"Nobody taught GPT-4 to lie," Harari noted. "It understood that if it told the human the truth, it wouldn’t get the puzzle solved."
Harari fears that AI systems, once capable of manipulation, could begin to deceive humans in more complex and dangerous ways.
'AI Could Trigger The Next Financial Crisis'
Given AI's growing presence and capabilities, Harari fears it could trigger the next global financial crisis. He added that governments may struggle to regulate the technology.
"Ideally, it should be the government, but with the new US administration, it is the leading power it will not put a leash... it could be a repeat of what happened in the 19th century during the industrial revolution," he said.
The concern is that the countries leading the AI revolution could gain unprecedented power, much like how industrialised nations of the past dominated the world.