Meta Warns Hackers Are Disguising Malware As ChatGPT Tools

OpenAI's ChatGPT is the talk of the tech town right now, so it's no surprise that ChatGPT scams are plentiful. The latest to join the caution bandwagon is Meta, which has detailed how the ChatGPT name is being used to seed malware at an alarming pace. In a threat research report, Meta's security team says it has come across malicious browser extensions and mobile apps that claim to offer the same AI capabilities you would otherwise get by visiting the official ChatGPT website.

The report says experts have found close to 10 malware families leveraging the ChatGPT hype, and that they've blocked over a thousand URLs targeting victims under the guise of an AI productivity tool. Notably, some of the malicious ChatGPT-hawking browser extensions were available via "official web stores," and the bad actors also used sponsored search ads to market their malware-loaded packages. "From a bad actor's perspective, ChatGPT is the new crypto," Meta's Chief Security Officer Guy Rosen said.

Worryingly, in some cases, the malicious tools did offer a limited degree of actual ChatGPT functionality to avoid suspicion, making the scam even harder for an average internet user to detect. ChatGPT is not the only buzzy AI product being abused by scammers; they've also zeroed in on Google's rival chatbot, Bard, as a disguise for malware. To be clear, there is no official ChatGPT app or browser extension made by OpenAI.

ChatGPT scams are growing quickly

Meta's security report isn't the first alarm of its kind. In March, The Hacker News reported on an extension called "Quick Access to ChatGPT" lurking on Google's Chrome Web Store. The rogue browser extension could do serious damage, such as taking over Facebook business accounts and using them to run advertisements. Guardio Labs security expert Nati Tal detailed an analysis that found the fake ChatGPT extension was amassing thousands of daily installs, letting the bad actors make money from ads pushed through the hijacked Facebook accounts.

But Google's ecosystem is not the only digital wild west where ChatGPT scams are thriving. Alex Kleber, a researcher at Privacy 1st, explored the Mac App Store and discovered an alarming number of apps claiming to offer ChatGPT functionality, many with shady traits such as permanent paywalls, copycat codebases, and suspicious developer profiles.

Meanwhile, researchers at cybersecurity firm Check Point have warned that cybercriminals are already using ChatGPT itself to code malware and write eerily convincing phishing emails. In one instance, someone on a hacker forum detailed how they used ChatGPT to create a malware strain capable of finding target files on a computer, zipping them into a package, and transmitting it to a remote server. It's hard to predict all the ways hackers will exploit ChatGPT, but the safest way forward is to stick to tools offered by OpenAI through its official website.