
Google Is Making Breakthroughs Much Bigger Than AI

Generative AI is great, but what’s more impressive is the continued march toward quantum supremacy.

Sundar Pichai, chief executive officer of Alphabet Inc., speaks about quantum computing during the virtual Google I/O Developers Conference on a laptop computer/tablet computer in Tiskilwa, Illinois, U.S., on Tuesday, May 18, 2021. The event showcases the company's latest software developments for developers, and provides them with the means to create the next generation of apps.

Hype surrounding the rise of ChatGPT and the supposed ground Google is losing to Microsoft Corp. and OpenAI in the search wars has overshadowed more important developments in computing, progress that will have far greater implications than which website serves up better tax advice. 

Quantum computing is a holy grail for scientists and researchers, but it’s still decades away from reality. However, Google’s parent company, Alphabet Inc., moved the ball down the field last month with news that it has found ways to mitigate one of the biggest problems facing the nascent field: accuracy.

To date, all computing is done on a binary scale. A piece of information is stored as either 1 or 0, and these binary units (bits) are clumped together for further calculation. We need 4 bits to store the number 8 (1000 in binary), for example. It’s slow and clunky, but at least it’s simple and accurate. Silicon chips have been holding and processing bits for almost seven decades. 
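
For readers who want to see that arithmetic, here is a minimal Python sketch (an illustration only, not taken from Google’s work) confirming that the number 8 occupies four bits:

    # Minimal sketch: how a classical computer represents the number 8 in bits.
    n = 8
    bits = format(n, "b")        # binary string for 8
    print(bits, len(bits))       # prints "1000 4" -- four bits, each either 0 or 1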

Quantum bits — qubits — can store data in more than two forms (a qubit can be both 1 and 0 at the same time). That means larger chunks of information can be processed in a given amount of time. Among the many downsides is that the physical manifestation of a qubit requires super-cold temperatures — just above absolute zero — and is susceptible to even the minutest amount of interference, such as light. Qubits are also error-prone, which is a big problem in computing.
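
A rough way to picture that “both 1 and 0 at once” idea is the standard textbook model of a qubit as a pair of amplitudes. The toy Python sketch below (a simplification using NumPy, nothing to do with Google’s hardware) shows an equal superposition that only settles on 0 or 1 when it is measured:

    import numpy as np

    # Toy model of one qubit: amplitudes over the states |0> and |1>.
    state = np.array([1.0, 1.0]) / np.sqrt(2)   # equal superposition of 0 and 1
    probs = np.abs(state) ** 2                  # measurement probabilities: [0.5, 0.5]

    # "Measuring" the qubit forces it to pick 0 or 1; repeat to see both outcomes.
    outcomes = np.random.choice([0, 1], size=10, p=probs)
    print(probs, outcomes)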

In a paper published last month, Google claims to have made a huge breakthrough in an important subfield called quantum error correction. The approach is conceptually simple. Instead of relying on individual physical qubits, scientists spread information across many physical qubits and then treat the collection as a single unit (called a logical qubit).

Google had theorized that clumping together a larger number of physical qubits to form a single logical qubit would reduce the error rate. In its research paper, outlined in a blog post by Chief Executive Officer Sundar Pichai, the team found that a logical qubit formed from 49 physical qubits did indeed outperform one made up of 17.
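
Google’s experiment used a so-called surface code, which is far more involved than anything that fits here, but the basic intuition that redundancy can suppress errors shows up even in a toy repetition scheme. The Python sketch below (an illustration of the principle only, not of Google’s method, with a made-up 5% error rate) stores one bit on several noisy carriers and recovers it by majority vote:

    import random

    def logical_error_rate(n_copies, p_flip=0.05, trials=100_000):
        """Store one bit on n_copies noisy carriers, flip each independently
        with probability p_flip, then recover the bit by majority vote.
        Returns the fraction of trials where the recovered bit is wrong."""
        failures = 0
        for _ in range(trials):
            flips = sum(random.random() < p_flip for _ in range(n_copies))
            if flips * 2 > n_copies:   # a majority of carriers were corrupted
                failures += 1
        return failures / trials

    for n in (1, 17, 49):
        print(n, "carriers ->", logical_error_rate(n))

The real experiment is much harder, and the benefit of bigger codes only appears once physical error rates are pushed low enough, which is part of what made the 49-versus-17 result noteworthy.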

In reality, dedicating 49 physical qubits to the handling of just a single logical one sounds inefficient, even excessive. Imagine storing your photos on 49 hard drives just to ensure that, collectively, they behave like a single error-free drive. But given the vast potential of quantum computing, even such baby steps amount to significant progress.

More important, it gives the broader scientific community a basis from which to build, helping advance related fields including materials science, mathematics and electrical engineering, all of which will be needed to make an actual quantum computer a reality. The hope of building a system that can solve a problem no current machine could feasibly manage is called quantum supremacy.(1)

Four years ago, Google said it completed a test in 200 seconds for a task that would take a conventional supercomputer thousands of years, proof that we’re on the path to quantum supremacy. 

But as with artificial intelligence tools such as ChatGPT, proving the technology works is only one part of the puzzle. High accuracy and low error rates remain elusive, and errors are exactly what recent chatbots are prone to. Improvement on this front is a major goal for developers of both technologies, with OpenAI saying this week that its new GPT-4 is 40% more likely to produce factual results than its predecessor.

Unfortunately, a supercooled computer crunching data isn’t as fun as a digital assistant that can write limericks or draft a school essay. But in the future, comparing these breakthroughs will be like weighing the entertainment value of television against the world-changing feat of landing a human on the moon.

More From Bloomberg Opinion:

  • The Race Is On to Fight a Threat That Doesn't Exist: Tim Culpan
  • Google Faces a Serious Threat From ChatGPT: Parmy Olson
  • US Chip Curbs Highlight Cracks in China AI Strategy: Tim Culpan

(1) Feasible is a nebulous term, but it generally means completion in a reasonable amount of time, such as minutes or days, rather than years.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Tim Culpan is a Bloomberg Opinion columnist covering technology in Asia. Previously, he was a technology reporter for Bloomberg News.

More stories like this are available on bloomberg.com/opinion

©2023 Bloomberg L.P.