Generative AI for images is not as harmless as it seems. Such models can produce deepfakes, but they can also help train combat control systems and even identify targets for strikes. To that end, OpenAI's DALL-E was pitched to the US Army. Sam Altman's company had previously opposed military use of its AI, but its policy changed after Microsoft's investment.

In October 2023, at a presentation to the Pentagon, Microsoft outlined possible military applications of DALL-E. The company works closely with the American military: it previously offered them its HoloLens product, for example, and it is now looking for ways to integrate artificial intelligence into military technology.
Battle management systems (BMS) provide Army leadership with data for planning military operations, including troop movements and targeting for artillery and aircraft. Microsoft has proposed using DALL-E to generate synthetic images that would improve visualization of the combat environment and help these systems identify targets more accurately.
The US Air Force is developing the JADC2 (Joint All-Domain Command and Control) system, which is meant to integrate data from all branches of the armed forces, including drones, radars, and tanks, to coordinate military operations. Microsoft sees potential for DALL-E as a training tool for this system.
Despite presenting the technology's capabilities, Microsoft emphasizes that training with DALL-E has not yet begun; it is framed as a potential direction of development. OpenAI notes that it did not participate in the presentation and was not informed about this use of its technologies. Any agreements with the military would be governed by Microsoft's policies.
Experts and analysts emphasize that the decision to use a technology for military purposes is made at the political level, not by the company that develops it. There are also concerns about the reliability and accuracy of generative AI models in military software. For example, Heidy Khlaaf, a machine learning safety engineer who previously worked with OpenAI, says: “These generative image models cannot even generate the correct number of limbs or fingers. How can we rely on their accuracy regarding events on the battlefield?”