Here's Why Congress Banned Microsoft's Copilot AI

Late in 2023, U.S. President Joe Biden signed an executive order that set new standards for AI safety and guardrails around its development. On March 28, 2024, after completing the initial 150-day actions outlined in the order, the White House Office of Management and Budget (OMB) issued a government-wide policy to counter the risks of AI, build transparency, enhance oversight, and spell out how federal agencies can use the technology. But some quarters are taking an even harder line, Congress chief among them.

According to an internal notice obtained by Axios, Congressional staffers have been banned from using Microsoft Copilot, a suite of AI tools available across the company's ecosystem of products and on the web. Catherine Szpindor, the Chief Administrative Officer of the U.S. House of Representatives, has reportedly communicated that Copilot is "unauthorized for House use." The restrictions apply to the commercial version of Copilot, which is available both as a free tool and via subscription.

Microsoft is reportedly developing a version of Copilot as part of an Azure cloud and Microsoft 365 bundle designed to meet the stricter security and compliance requirements of government agencies. Even so, Szpindor's office has confirmed that these government-focused AI tools will still have to be vetted before staffers are permitted to use them.

A shaky history

"The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services," Szpindor's office said. Data leaks have been a problem for generative AI tools ever since OpenAI's ChatGPT arrived. In May last year, The Wall Street Journal reported that Apple warned employees against using tools like GitHub's Copilot assistant for coding and ChatGPT. After uncovering data leak trails, Samsung also prohibited staffers from using ChatGPT.

The following month, researchers at San Francisco-based Robust Intelligence demonstrated how Nvidia's NeMo Framework AI software could be tricked into revealing private information and bypassing its safety measures. The U.S. Federal Trade Commission also opened an investigation into ChatGPT maker OpenAI over, among other concerns, the data security risks the chatbot poses. In November last year, experts at Northwestern University uncovered methods for tricking custom GPTs into revealing confidential information.

A month later, leaked documents obtained by Platformer revealed that Amazon's Q chatbot suffered from severe hallucinations and was leaking confidential data, including the locations of data centers, unreleased features, and details of discount programs. In another incident, ChatGPT exposed personal details, including users' payment information, forcing OpenAI to briefly take the service offline. With such a shaky history, it's no surprise that Microsoft's Copilot has been banned from official machines, even if Congressional staffers can't be stopped from using these tools on their personal devices.