OpenAI Quietly Deletes Ban on Using ChatGPT for “Military and Warfare”


OpenAI has quietly revised its usage policy, removing an explicit prohibition on using its technology, including tools like ChatGPT, for military purposes. The previous policy banned “weapons development” and “military and warfare” applications outright; the updated version drops the specific “military and warfare” language, retaining only a general directive against causing harm, with weapons development cited as an example.

According to OpenAI spokesperson Niko Felix, the update aims for clarity and broad applicability. The change has nonetheless raised concerns among experts: given the known bias and inaccuracy of large language models (LLMs), their use in military operations could make those operations imprecise and error-prone, increasing harm and civilian casualties.

The revised policy’s real-world impact is uncertain, but there is evidence that U.S. military personnel may already be using OpenAI’s technology for non-violent tasks: the National Geospatial-Intelligence Agency, for instance, has considered using ChatGPT to support its analysts. Experts suggest the change signals a softening of OpenAI’s stance on military collaboration, potentially opening the door to work on non-weapon military infrastructure.

The changes coincide with growing interest from militaries worldwide in integrating machine learning into their operations. The Pentagon is exploring the use of LLMs like ChatGPT despite concerns about their factual inaccuracy and security risks. Deputy Secretary of Defense Kathleen Hicks has emphasized AI’s importance to military innovation, while acknowledging that current AI technologies may not yet align with the Pentagon’s ethical AI principles.
Read more at The Intercept…
