In a subtle yet significant shift, OpenAI has quietly updated its usage policy, lifting explicit prohibitions on the use of its ChatGPT technology for military and warfare applications.
The change, effective from January 10th, was recently uncovered by The Intercept, signaling a notable transformation in OpenAI's stance on the deployment of its AI technology.
Previously, OpenAI's policies explicitly forbade activities that posed a high risk of physical harm, specifically mentioning "weapons development" and "military and warfare."
However, the revised policy, while retaining the prohibition on weapons development, has eliminated the ban on military and warfare applications.
This strategic amendment is expected to pave the way for potential partnerships between OpenAI and defense departments interested in harnessing generative AI for administrative, intelligence, and possibly military operations.
The move aligns with the U.S. Department of Defense's mission, expressed in November 2023, to advocate for the responsible military use of AI and autonomous systems, adhering to international best practices.
OpenAI spokesperson Niko Felix clarified that removing the explicit mention of "military and warfare" does not signal a retreat from the company's principle of not causing harm. Felix pointed to the overarching rule of "Don't harm others" as broad, easily understood, and applicable across many contexts, and said the policy still prohibits violent applications of the technology, including developing weapons, harming others, and engaging in illicit activities.
Felix stopped short of confirming whether OpenAI considers all military uses harmful, but underscored the company's continued commitment to the responsible deployment of its AI technologies: although the explicit reference to military applications has been removed, the principle of not causing harm remains in force.
These policy revisions from OpenAI come at a time of growing concern about the potential misuse of AI models. Research led by Anthropic has indicated that current safety measures may not be effective at curbing unwanted behaviors if AI models are intentionally trained to behave maliciously; the study demonstrated that backdoors inserted into models can produce deceptive behavior that persists even after standard safety training.