'Anti-Goal': AI Superintelligence Can Be Too Powerful To Control, Warns Microsoft AI Chief
The company is building its own 'humanist' version of superintelligence, which will be 'aligned to our interests, on our team, in our corner backing us up', says Mustafa Suleyman.

Microsoft AI chief executive officer Mustafa Suleyman has termed AI superintelligence an 'anti-goal', warning against the endeavour in a recent podcast appearance.
Speaking on an episode of the 'Silicon Valley Girl Podcast', Suleyman defined AI superintelligence as a system that can improve upon itself, set its own goals and act independently of humans.
"It would be very hard to contain something like that or align it to our values. And so that should be the anti-goal," Suleyman said.
He also made a distinction between artificial general intelligence and superintelligence, saying that AGI is a step before AI superintelligence.
"You can think of AGI as a step before maybe super intelligence, but roughly speaking, they’re used fairly interchangeably," he said.
The Microsoft AI CEO said that the company is building its own "humanist" version of superintelligence, which, according to him, would be "aligned to our interests, on our team, in our corner backing us up".
Suleyman also said that he agreed with Google DeepMind CEO Demis Hassabis, with whom he co-founded DeepMind, in projecting that AGI would be achieved within five years.
He stated that AI would be able to reach human levels of performance in a majority of tasks within that time frame.
Suleyman claimed that AI can already perform tasks such as summarisation, translation, transcription, research, document writing and poetry better than humans.
He added that AI models are "taking steps towards being as good as a human at being a project manager or being a marketing person or an HR person", saying this would "fundamentally change work in a profound way".
"It's going to change the work that we do," he said.
Suleyman also tried to make the case for AI "democratising intelligence." He argued that it would lead to an "unbelievable amount of competition because the distance between an idea and the realisation of that idea is going to collapse".
“People are just going to be thinking new companies into existence, new products, new pieces of poetry," he said.
He further stated that there would also have to be safeguards, rules and regulations to keep AI in check and ensure that these systems work with humans.
