Congress Reportedly Puts Strict Rules On Staff Use Of ChatGPT
House lawmakers and staff members have reportedly been given strict rules on using OpenAI's ChatGPT, according to an internal memo sent to members and obtained by Axios. Rather than banning the chatbot outright, Chief Administrative Officer Catherine L. Szpindor has apparently directed all House members to use ChatGPT only when dealing with non-sensitive data.
Now, that's a legitimate concern. A bug recently forced OpenAI to take ChatGPT offline briefly after it exposed sensitive user data, including email addresses and financial information, alongside users' conversations with the chatty AI. If the memo is genuine, the U.S. is not the only country establishing guardrails around the use of AI chatbots, especially by government employees.
In April, Italy banned ChatGPT over data security risks, after the country's data protection watchdog ordered OpenAI to stop processing Italian citizens' data in the wake of a breach. China, Russia, Iran, and Cuba are among the other countries that have also banned ChatGPT over similar concerns.
While the directive allegedly sent to House members is a first of its kind for U.S. government employees, multiple U.S.-based tech companies have already taken similar precautions. Apple has reportedly banned employees from using ChatGPT and other AI tools over concerns that they could leak sensitive information. JPMorgan Chase, Verizon, and Amazon are also on the list of ChatGPT-averse companies.
AI risks reach the top echelon
As mentioned above, the alleged House of Representatives memo doesn't impose a blanket ban on ChatGPT, but it makes clear that the tool should be used only for experimentation in "research and evaluation." House members are evidently strictly prohibited from incorporating ChatGPT into their day-to-day work.
House staff have also reportedly been asked to use the AI bot only after enabling its privacy settings, so that none of their data is used to train the LLM powering ChatGPT. As a reminder, OpenAI recently rolled out a control that lets users disable their chat history, giving them a choice over whether their conversation data is used to further refine ChatGPT.
This human-machine conversation data is a precious commodity. Reddit, for instance, overhauled its API policy rather than hand such data to AI labs for free, earning the wrath of much of its community in the process. Notably, the internal House memo makes it clear that members and staff are allowed to use only the paid ChatGPT Plus tier, which offers extra safety features, and have been explicitly told to avoid the free version.
The risks are very much real, especially in the absence of a proper regulatory framework. Congressional committees, with the backing of the White House, are already brainstorming AI policy to contain potential harms, while the European Union has already drafted its own rules and will likely implement them in the coming months.