DeepSeek’s models were especially vulnerable to “goal hijacking” and prompt leakage, LatticeFlow said. (Andrey Rudakov/Bloomberg)
(Bloomberg Opinion) -- Last week, DeepSeek sent Silicon Valley into a panic by proving you could build powerful AI on a shoestring budget. In some respects, it was too good to be true. Recent testing has shown that DeepSeek’s AI models are more vulnerable to manipulation than those of its more expensive Silicon Valley competitors. That challenges the entire David-versus-Goliath narrative on “democratized” AI that has emerged from th...