Facebook's AI Chief Researching New Breed of Semiconductor

Yann LeCun says company leaving “no stone unturned” in chip effort.  

(Bloomberg) -- Facebook Inc.’s chief AI researcher has suggested the company is working on a new class of semiconductor that would work very differently from most existing designs.

Yann LeCun said that future chips used for training deep learning algorithms, which underpin most of the recent progress in artificial intelligence, would need to be able to manipulate data without having to break it up into multiple batches. To handle the volume of data these machine learning systems need in order to learn, most existing computer chips divide it into batches and process each one in sequence.
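As a rough illustration of the batching the article describes, a typical training loop today slices the dataset into fixed-size mini-batches and steps through them one after another; the dataset shape, batch size, and placeholder computation below are hypothetical, not drawn from Facebook's work.

```python
import numpy as np

# Hypothetical sketch: the data is split into fixed-size mini-batches
# and each batch is processed in sequence, which is the pattern LeCun
# suggests future chips should not have to rely on.
rng = np.random.default_rng(0)
dataset = rng.standard_normal((10_000, 128))   # 10,000 examples, 128 features
batch_size = 256

for start in range(0, len(dataset), batch_size):
    batch = dataset[start:start + batch_size]  # one chunk of the data
    # A real training step would compute gradients on this batch only,
    # then move on to the next chunk.
    _ = batch.mean()                           # placeholder per-batch computation
```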

"We don’t want to leave any stone unturned, particularly if no one else is turning them over," he said in an interview ahead of the release Monday of a research paper he authored on the history and future of computer hardware designed to handle artificial intelligence.

Intel Corp. and Facebook have previously said they are working together on a new class of chip designed specifically for artificial intelligence applications. In January, Intel said it planned to have the new chip ready by the second half of this year.

Facebook is part of an increasingly heated race to create semiconductors better suited to the most promising forms of machine learning. Alphabet Inc.’s Google has created a chip, called a Tensor Processing Unit, that helps power AI applications in its cloud-computing datacenters. In 2016, Intel bought San Diego-based startup Nervana Systems, which was working on an AI-specific chip.

In April, Bloomberg reported that Facebook was hiring a hardware team to build its own chips for a variety of applications, including artificial intelligence as well as managing the complex workloads of the company’s vast datacenters.

For the moment, the most commonly used chips for training neural networks -- a kind of software loosely based on the way the human brain works -- are graphics processing units from companies such as Nvidia Corp., originally designed to handle the compute-intensive workloads of rendering images for video games.

LeCun said that for the moment, GPUs would remain important for deep learning research, but the chips were ill-suited for running the AI algorithms once they were trained, whether that was in datacenters or on devices like mobile phones or home digital assistants.

Instead, LeCun said that future AI chip designs would have to handle information more efficiently. In a learning system -- such as the human brain -- most neurons don’t need to be activated at any given moment. But current chips process information from every neuron in the network at each step of a computation, whether or not that neuron is used, which makes the process less efficient.
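A minimal sketch of the inefficiency described above, using hypothetical sizes: a dense matrix multiply touches every neuron's weights even when most activations are zero, while a sparsity-aware path only touches the weights of active neurons and reaches the same answer with far less arithmetic.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((4096, 4096))    # hypothetical layer weights
activations = rng.standard_normal(4096)
activations[rng.random(4096) < 0.9] = 0.0      # roughly 90% of neurons inactive

# Dense path: every weight is multiplied, even by zero activations.
dense_out = weights @ activations

# Sparsity-aware path: only the columns for active neurons are touched.
active = np.nonzero(activations)[0]
sparse_out = weights[:, active] @ activations[active]

assert np.allclose(dense_out, sparse_out)      # same result, far less work
```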

Several startups have tried to create chips to more efficiently handle sparse information. Former NASA Administrator Daniel Goldin founded a company called KnuEdge that was working on one such chip, but the company struggled to gain traction and in May announced it was laying off most of its workforce.

LeCun, who is also a professor of computer science at New York University, is considered one of the pioneers of a class of machine learning techniques known as deep learning. The method depends on the use of large neural networks. He is especially known for applying these deep learning techniques to computer vision tasks, such as identifying letters and numbers or tagging people and objects in images.

©2019 Bloomberg L.P.
