Mixing AI And The Military Can Be Dangerous - This Proposed Bill Might Help
There's no denying the influence that AI has already had on the technology we use every day. Much of the opposition to its rise centers on where and how it is used. You might not know of the many different ways the U.S. Air Force is already using AI, but its efficiency-boosting and logistics-simplifying possibilities are huge. However, it's vital to tread carefully when integrating AI into military matters. The AI Guardrails Act, introduced by Michigan Senator Elissa Slotkin in March 2026, sets out to provide exactly what the name suggests: an overriding human influence and a final human decision behind AI's work.
As detailed in a government press release, Senator Slotkin explained that her proposed bill would focus on three key areas: "Ensur[ing] a human is involved when deadly autonomous weapons are fired, AI cannot be used to spy on the American people, and that a human is on the switch to launch nuclear weapons." These measures aren't intended to curtail the advancement of the U.S. AI industry, but rather to protect the nation's dominance in the area ("we must win the AI race against China," the Michigan lawmaker added) while ensuring that it develops in a safe and practical way.
Malfunctions, glitches, and mistakes, after all, are far from unheard of in the AI sphere. Human judgment and decision-making certainly aren't infallible either, of course, but the best way to get the benefit of both is to use them in tandem. Here's how the bill could help the United States do that.
More details about the AI Guardrails Act
In the press release detailing the measures included in the AI Guardrails Act, Senator Slotkin declares it "just common sense" to restrict the technology's ability to strike with autonomous weapons without oversight, to ban it from launching nuclear weapons, and to prevent its use in widespread surveillance of the population. Quite understandably, these are not new concepts. For instance, Department of Defense Directive 3000.09 notes that weaponry should be created with the concept of "allow[ing] commanders and operators to exercise appropriate levels of human judgment over the use of force." This is partly why some weapon systems with automated capabilities, such as the Navy's Phalanx CIWS, include modes that 'indicate' targets but require authorization before firing, alongside automatic modes that engage only when enabled.
This bill's intent, in short, is to make these three potential use cases of AI illegal. As for why, the document itself simply explains, "Some military command decisions are too risky and too consequential for machines to decide." This also helps ensure responsibility and accountability for each military decision made, waters that can become rather muddied when an AI system acts more independently.
The move can be seen as an interesting advancement of the five principles of ethical artificial intelligence, which were adopted as part of the Department of Defense's AI development strategy in February 2020. They state that the department's use of AI is to be equitable, governable, reliable, responsible, and traceable. With the bill newly on the agenda at the time of writing, it's not yet known how it will fare with the Michigan senator's fellow lawmakers, but it could be a significant step toward steering AI's development safely in one of its most potentially dangerous areas.