Danger Of AI: US Soldier Used ChatGPT To Plan Tesla Cybertruck Attack In Las Vegas, Police Say
The Las Vegas attack also adds to the concerns of experts and critics who have already cautioned that AI could be used for malicious ends.

While generative artificial intelligence has until now been blamed mainly for "killing" jobs once done by humans, the recent Tesla Cybertruck attack outside the Trump International Hotel in Las Vegas hints at the physical harm that misuse of the technology can cause as well.
According to Las Vegas police, Matthew Livelsberger, the decorated US soldier from Colorado Springs who detonated the Tesla Cybertruck outside the hotel, utilised ChatGPT and other generative AI tools to help plan the blast.
An analysis of his ChatGPT queries shows Livelsberger sought information about explosive targets, the velocity of specific rounds of ammunition, and whether fireworks were permitted in Arizona.
"This is the first incident that I’m aware of on US soil where ChatGPT is utilised to help an individual build a particular device. It's a concerning moment," said Kevin McMahill, sheriff of the Las Vegas Metropolitan Police Department, as quoted in a report by the Associated Press.
The 37-year-old fatally shot himself shortly before the truck exploded. Seven people sustained minor injuries in the blast, while the Trump International Hotel was left largely undamaged. According to the authorities, Livelsberger acted alone.
The disclosure that Livelsberger drew on information from ChatGPT to plan the attack, however, raises a red flag about the potential misuse of readily available generative AI tools in future attacks. It also deepens the concerns of experts and critics who have long cautioned that AI could be turned to malicious ends.
OpenAI, the maker of ChatGPT, responded to the incident in an emailed statement, according to AP, saying it was "committed to seeing AI tools used responsibly" and that its "models are designed to refuse harmful instructions".
"In this case, ChatGPT responded with information already publicly available on the internet and provided warnings against harmful or illegal activities," OpenAI said in a statement cited by Axios.