OpenAI has partnered with two defense technology companies that the Pentagon has selected to compete to develop voice-controlled, drone swarming software for the US military, according to multiple people familiar with the matter.
OpenAI's technology would only be used to translate voice commands from battlefield commanders to digital instructions for the drones, according to two of the people. It wouldn't be used for the operation of the drone swarm, weapons integration or targeting authority, the two people said. All of the people asked not to be named to discuss sensitive matters that aren't public.
The effort is part of a $100 million Pentagon prize challenge announced in January that's intended to deliver prototypes for technology that can command swarms of drones capable of making decisions and executing missions without human intervention. The six-month competition will progress in phases, depending on the success and interest of the participants, the people said.
OpenAI's logo appears on at least two of the successful contest submissions, according to some of the people. OpenAI's involvement hasn't previously been reported, and the companies selected haven't been publicly named.
Special Operations Command, which runs the Defense Autonomous Warfare Group, or DAWG, declined to comment. The Defense Innovation Unit didn't respond to a request for comment. DAWG and DIU jointly launched the prize challenge for drone swarm technology.
The company hasn't decided how far it will proceed or firmed up arrangements with the defense-tech companies involved, according to some of the people. Only the open-source version of OpenAI's model would be provided, rather than the company's most advanced models, according to one of the people, who added that the company may also provide installation support.
OpenAI didn't submit its own bid for the prize, and its involvement in the challenge will be limited, according to a spokesperson. Two of OpenAI's existing partners chose to incorporate the company's open-source model in their bids, the spokesperson added. If one of the partners is selected, OpenAI would ensure any use of its tools is consistent with its usage policies, the spokesperson said.
Other AI companies have directly submitted their own bids to participate in the drone swarm contest, according to the spokesperson.
One of the successful submissions that included OpenAI was led by Applied Intuition Inc., a defense contractor and strategic partner of OpenAI, and co-lists two other companies: Sierra Nevada Corporation and Noda AI, according to the document dated January 25, which was reviewed by Bloomberg.
Applied Intuition will provide the swarm interface and some digital commands, Sierra Nevada Corporation will provide integration and venture-backed Noda AI will provide the so-called “orchestration” software that controls the drones, according to the document.
OpenAI will provide command-and-control for “Mission Control,” according to a graphic in the document, which displays OpenAI's software inside a section titled “Orchestrator,” between the human operator and the machine.
Applied Intuition, SNC and Noda AI didn't immediately respond to requests for comment.
The company's involvement in the drone swarm challenge shows that its defense work is set to expand beyond the military's current use of its tools.
This week, the Pentagon announced a partnership with OpenAI that would make ChatGPT available to 3 million Defense Department personnel. Chief Executive Officer Sam Altman last year downplayed the prospect of helping the Pentagon develop an AI-enabled weapons platform.
“I don't think most of the world wants AI making weapons decisions,” he said in April at a conference dedicated to modern conflict, adding that he didn't expect the company would do so “in the foreseeable future.”
Altman still left the possibility open, however. “I will never say never, because the world could get really weird,” he said.
While it is already possible to fly multiple drones at once, developing the software to direct multiple drones at sea and in the air as a swarm — able to move autonomously in pursuit of a target — remains a challenge.
Those selected for the Pentagon's competition must show their technology can translate a battlefield commander's voice commands into action, allowing drones to carry out tasks en masse during combat operations.
A defense official quoted in the announcement made clear the effort would be for offensive purposes, saying the human-machine interaction “will directly impact the lethality and effectiveness of these systems.”
Commands might include instructions such as “Move all USV pods 5 kilometers east,” according to an example provided by the Pentagon, referring to unmanned surface vessels.
The prospect of integrating chatbots and voice-to-text commands in weapons platforms has alarmed even some defense officials, despite the Pentagon's eagerness to accelerate the adoption of AI and autonomy, according to several of the people. They said it would be important to limit generative AI to translation and not allow it to control drone behavior.
Several of the people familiar with the matter expressed concerns about the risks if generative AI were used to translate voice into operational decisions without a human in the loop.
The move comes as employees at major labs have departed after voicing a range of other ethical concerns about the AI industry, even as leading generative AI companies push for revenue to support ongoing research and development. They include an OpenAI researcher who said she's concerned about ads in ChatGPT, and a researcher at Anthropic who publicly resigned, raising broader concerns about AI development.
Large language models, which underpin chatbots such as OpenAI's ChatGPT, are prone to bias and so-called hallucinations — meaning they can generate outputs that aren't anchored in reality but which the AI can present as reliable.
The Pentagon's new AI Acceleration Strategy, released in January, seeks to “unleash” AI agents for the battlefield, from planning military campaigns to targeting, potentially involving lethal strikes.
Defense contracts have historically been controversial inside consumer tech companies, including significant protests at Google in 2018 over the Pentagon effort named Project Maven that intended to use AI to analyze drone footage.
More recently, the AI industry has shown more openness to such deals. OpenAI revised its policy on working in national security in 2024, and later announced a strategic tie-up with defense-technology company Anduril Industries Inc. to work on anti-drone technology. At the time, a spokesperson for OpenAI said the company's partnership with Anduril was specifically for using its technology in a defensive capacity against unmanned drones.
The Pentagon's explanation of its voice-controlled drone competition refers to several competition stages that companies will only participate in if they succeed at earlier tests.
The first phase would focus only on software development, before moving to live platforms later. The software is intended to coordinate drone movements across multiple domains, such as air and sea, according to a Pentagon description of the task. Later stages call for developing “target-related awareness and sharing” and ultimately “launch to termination.”