Center For AI And Digital Policy Files Complaint To Slow Down AI Development

OpenAI's latest language model, GPT-4, is currently making waves in the tech sphere with its advanced capabilities, and it has already been eagerly adopted by well-known companies, including Microsoft. However, a prominent tech ethics group has now filed a complaint with the U.S. Federal Trade Commission (FTC) alleging that the latest version of OpenAI's language model violates consumer protection rules.

The complaint has been filed by the Center for AI and Digital Policy, and it squarely targets GPT-4, which is not only faster and smarter than ChatGPT but is also multi-modal. The primary argument is that GPT-4 is "biased, deceptive, and a risk to privacy and public safety." The criticism also cites the recent privacy incident with ChatGPT, which forced OpenAI to temporarily take the system offline after some users were able to see others' personal and financial details.

The complaint also criticizes the absence of any independent vetting before the system was deployed and pushed into the public spotlight. Earlier this week, a group of tech industry luminaries such as Elon Musk and Apple co-founder Steve Wozniak — alongside leaders of top AI labs (except OpenAI and Meta) and field experts — signed an open letter calling for a six-month pause on the further development of AI and demanding independent auditing of AI systems as advanced as GPT-4. Interestingly, the fresh complaint also takes aim at OpenAI's own disclosure, which acknowledges the risks originating from tech like GPT-4 while disclaiming liability for them.

An effort to rein in generative AI

Going a step further, the ethics oversight body cites an FTC rule and claims that GPT-4 does not satisfy the conditions for an AI program to be "transparent, explainable, fair, and empirically sound." On that basis, the Center for AI and Digital Policy is asking the FTC to launch a formal investigation into how OpenAI is developing and marketing GPT-4 to clients across the world, and to take a look at its most well-known product, ChatGPT. The independent non-profit organization also recommends that further commercialization of GPT-4 be halted in a bid to protect the interests of consumers and businesses.

The AI ethics body says that generative AI technology like GPT-4 is not "human-centric and trustworthy," citing instances where such models have dispensed harmful information. It also points to the potential for weaponization, such as enabling surveillance, and to the lack of transparency around the data sets used to train ChatGPT and GPT-4.

The complaint also flags technical issues like hallucination associated with such AI models, which causes them to make up facts — something we experienced ourselves when reviewing Paragraph AI in early 2023. As a concluding note, the complaint urges "the FTC to 'hit the pause button' so that there is an opportunity for our institutions, our laws, and our society to catch up."