OpenAI removes ban on military use of AI tools like ChatGPT

OpenAI, a leading artificial intelligence (AI) research company, has updated its usage policies to permit the use of its language technologies, such as ChatGPT, for military purposes. Previously, the company’s policies explicitly banned the use of its AI tools and services for “weapons development” and “military and warfare”. The revised usage policy instead imposes a general prohibition on harmful activities without specifically addressing military applications. OpenAI says the goal is to simplify the policy into “universal principles” such as “Don’t harm others,” though what the change means for military use of its technology remains unclear.

Niko Felix of OpenAI emphasized that the aim is to create universal principles that are easy to remember and apply, especially as the company’s tools come into wide use. He described “Don’t harm others” as broad yet easily understood, and relevant in numerous contexts.

Implications of military use of generative AI

Experts speculate that the change in language signals a relaxation of OpenAI’s previous stance against military use, potentially allowing the company’s AI technology to be used indirectly in combat scenarios, for example by supporting operational infrastructure. Concerns have also been raised about OpenAI’s partnership with Microsoft, a major defense contractor and significant investor in the company. While OpenAI’s technologies may have limited practical military applications in their current form, the policy shift comes as defense departments worldwide show growing interest in leveraging advanced AI for intelligence and operational purposes. It remains to be seen how OpenAI will interpret or enforce the revised guidelines as military demand for AI continues to grow.

Featured Image: Dall-E
