A viral AI tool called OpenClaw is gaining popularity with employees, who may be unwittingly exposing their organizations to serious data and security risks.
What's happening: OpenClaw (previously Moltbot) is an open-source AI agent that acts autonomously on a user's behalf — managing calendars, reading files, and sending emails. Employees drawn to its productivity promise are likely connecting it to work accounts without IT or HR's knowledge.
The security reality is alarming. Cybersecurity firm Palo Alto Networks has flagged OpenClaw as a potential trigger for the next major AI security crisis.
- To function as designed, the tool requires sweeping access: root-level files, login credentials, browser history, and entire file systems.
- That means one compromised employee account could expose sensitive HR data, payroll records, or confidential personnel files.
- Worse, the tool has a "persistent memory" feature that can store malicious instructions and execute them later — meaning threats aren't always immediate or obvious.
Then there's Moltbook, an AI-only social network where more than 150,000 OpenClaw agents interact and share information. It has emerged as an entirely new channel through which corporate data could leak, intentionally or not.
What CHROs should do now:
- Audit employee AI tool use. Shadow AI adoption — employees using unsanctioned tools — is real and growing fast.
- Update acceptable use policies to explicitly address autonomous AI agents.
- Partner with IT and legal to assess exposure before an incident forces your hand.
- Educate employees on what connecting a personal AI agent to work systems actually means for company data.
The bottom line: The productivity appeal of tools like OpenClaw is undeniable, but they are “not designed to be used in an enterprise ecosystem.”