The Hottest New AI Agent Is Here – But It Has Some Experts Raising Alarms

An open-source agenic artificial intelligence agent by the name of OpenClaw has exploded onto the tech scene in recent weeks, and it's not for entirely good reasons, either. Launched by Austrian developer Peter Steinberger (and rebranded twice in rapid succession from Clawdbot to Moltbot before settling on its current name), OpenClaw's appeal lies in its ability to autonomously complete real-world tasks beyond just generating text. It's a promise many other AI products have flirted with but rarely have been able to deliver at this scale.

OpenClaw runs directly on a user's operating system and can manage emails, calendars, browse the web, summarize documents, shop online, delete messages, and even interact with third-party services, all with little to no supervision. Early adopters hope it could eventually run entire organizations without much human oversight. But it's that exact hands-off capability that has people worried about OpenClaw.

While developers and business leaders see it as a potential leap forward for productivity tools, cybersecurity experts worry that granting it such deep access to a user's system makes it easily exploitable. On X (formerly Twitter) Cybersecurity pro Jamieson O'Reilly put OpenClaw's weaknesses this way: "imagine you come home and find the front door wide open, your butler cheerfully serving tea to whoever wandered in off the street, and a stranger sitting in your study reading your diary."

What makes OpenClaw so concerning?

OpenClaw's open-source model is a big reason why it's spread like crazy – over two million visitors on GitHub in one week. While actual concrete usage figures can't be known, creator Steinberger claims hundreds of thousands of stars on the code repository so far. It's even expanded to China, where developers are working to pair OpenClaw with both Western and Chinese language models for even greater efficiency.

And yet, security researchers warn that prompt injection attacks (where hidden instructions embedded in websites or documents trick an AI into doing harmful actions) can be a real problem with OpenClaw. That's because it has a persistent memory, which means it can retain and act on information even weeks later. Major cybersecurity firms like Cisco have also warned that the combination of sensitive data access and external communication capabilities makes for a serious risk, especially for enterprise environments.

What's being done about it

China's Ministry of Industry and Information Technology recently issued a public warning that improper use of OpenClaw could expose users to everything from cyberattacks and data breaches, otherwise known as "a good way to end up with your personal information on the dark web." While stopping just short of a ban, the ministry nevertheless urged organizations to conduct audits and limit network exposure as much as they can. 

There have been separate security flaws uncovered in Moltbook, too. That's the social network designed exclusively for AI agents built largely on OpenClaw. Pros at cloud security platform Wiz said that because "AI tools don't yet reason about security posture or access controls on a developer's behalf," Moltbook's misconfigured databases have been exposing people's private data.

Steinberger has since acknowledged the risks, making it clear to CNBC that his OpenClaw should be considered a hobbyist, open-source project... not something meant for nontechnical users. He insisted that security improvements are currently underway and that progress has been made with help from the global security community, but the tool is still far from perfect. For critics, that caveat underscores a broader concern: powerful autonomous AI tools are spreading faster than guardrails can be put in place to protect people.

Recommended