
The DeepSeek AI Revolution Has A Security Problem

The model that shocked Silicon Valley by doing more with less might be doing too little on safety. That could hurt its business prospects.

DeepSeek’s models were especially vulnerable to “goal hijacking” and prompt leakage, LatticeFlow said. (Andrey Rudakov/Bloomberg)
(Bloomberg Opinion) -- Last week, DeepSeek sent Silicon Valley into a panic by proving you could build powerful AI on a shoestring budget. In some respects, it was too good to be true. Recent testing has shown that DeepSeek’s AI models are more vulnerable to manipulation than those of its more expensive competitors from Silicon Valley. That challenges the entire David-vs-Goliath narrative on “democratized” AI that has emerged from th...