Jan 27, 2026 | 5 min

The Clawdbot (Moltbot) Enterprise AI Risk: One in Five Have it Installed

A new kind of shadow AI is inside companies globally, and it's already wreaking havoc. An open-source AI assistant, Clawdbot (recently renamed Moltbot), has spread like wildfire, amassing over 60,000 GitHub stars. It is also a security nightmare: exposed control servers can lead to credential theft and remote code execution over the internet.

What makes Clawdbot particularly powerful, and concerning, is how deeply it integrates with the user's digital life: calendars, email, documents, and file systems. It is essentially "Claude (or another LLM service) with hands." It can read and respond to email, manage schedules, execute terminal commands, write and run scripts, read and write files, browse the web, and control browsers. It maintains persistent memory across sessions and can proactively reach out to users.

What is Clawdbot?

Clawdbot is an open-source personal AI assistant created by Peter Steinberger that runs on users' own devices - Mac or Linux. Unlike traditional AI chatbots confined to browser tabs, Clawdbot is a full-blown application that integrates directly with messaging platforms employees already use: Slack, WhatsApp, Telegram, Discord, Microsoft Teams, Signal, and iMessage.

Sounds Great, but There’s a Problem

Clawdbot (Moltbot) is a potential danger and a real security problem. By design, it requires broad access to your most sensitive systems - email, calendar, documents, messaging platforms - and stores credentials locally in plaintext. It runs on unmanaged personal devices, outside your security perimeter, with no centralized logging or oversight. When misconfigured, these privileged AI agents become high-impact control points, combining leaked data, automation abuse, and misuse of user permissions.

What We Found

In less than a week of analysis, Token Security identified that 22% of our customers have employees actively using Clawdbot within their organizations. 

This rapid adoption signals a significant shadow AI trend that security teams need to address immediately.

Example: The WhatsApp Insider Threat

Consider this scenario: An employee sets up Clawdbot on their personal laptop or Mac Mini and connects it to WhatsApp or iMessage as their chat interface. They also connect it to their corporate Slack workspace as a data source and grant it access to internal channels, direct messages, files, emails, and calendars. After all, they want their AI assistant to be helpful.

Now, from anywhere in the world, that employee can send a WhatsApp message asking: "What were the revenue numbers for the last quarter?" or "Summarize the latest product roadmap discussions in the #productmanagement channel." The AI dutifully searches through corporate Slack, reads confidential documents, and sends the summary back via WhatsApp, completely bypassing DLP controls, email security, and any corporate audit trail.

The data flows from corporate systems, through an unmanaged AI agent running on a personal device, to a consumer messaging app, all of it invisible to the enterprise security team.

The Security Risks

Clawdbot introduces several critical security concerns for enterprises:

Exposed Gateways and Credential Leaks - Security researcher Jamieson O'Reilly discovered hundreds of Clawdbot instances exposed to the internet with no authentication - completely open admin dashboards that granted immediate access to API keys, OAuth tokens, and complete conversation histories. In some cases, attackers could achieve remote code execution through stolen gateway tokens.

Plaintext Credential Storage - Clawdbot stores configuration and credentials in plaintext files under ~/.clawdbot/ and ~/clawd/. Unlike encrypted browser stores or OS keychains, these files are readable by any process running as the user, making them prime targets for attackers.
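As a quick endpoint check, the sketch below lists files under those two directories and flags any that other local accounts can read. It assumes only the ~/.clawdbot/ and ~/clawd/ paths noted above; the exact file names inside them vary by version and are not assumed here.

```python
import stat
from pathlib import Path

# Directories the tool reportedly uses for plaintext config and credentials.
CLAWDBOT_DIRS = [Path.home() / ".clawdbot", Path.home() / "clawd"]

def find_plaintext_credential_files(dirs=CLAWDBOT_DIRS):
    """Return files under the given dirs, flagging any readable by group/other."""
    findings = []
    for base in dirs:
        if not base.is_dir():
            continue  # directory absent: this user likely isn't running the agent
        for path in base.rglob("*"):
            if not path.is_file():
                continue
            mode = path.stat().st_mode
            # Group- or world-readable plaintext credentials are the worst case.
            loose = bool(mode & (stat.S_IRGRP | stat.S_IROTH))
            findings.append({"path": str(path), "group_or_other_readable": loose})
    return findings
```

Even files with tight permissions remain readable by any process running as the user, so the presence of the directory at all is the primary signal; the permission flag only prioritizes the worst offenders.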

Corporate Data Exposure - When employees connect Clawdbot to Slack or Microsoft Teams, they expose internal communications, documents, and sensitive business data to an AI system running outside IT's visibility. The tool's ability to read emails, documents, and webpages creates additional attack vectors through prompt injection.

No Sandboxing by Default - Clawdbot's own documentation acknowledges there is "no perfectly secure setup when operating an AI agent with shell access." Without explicit sandboxing configuration, the agent has full access to everything the user can access.

No Centralized Audit Trail - Unlike sanctioned enterprise AI tools, Clawdbot usage leaves no centralized logs for security teams to monitor what data is being processed, what systems and services are being accessed, or where data is being sent.

How Organizations Can Respond

For security teams dealing with Clawdbot adoption, consider these approaches:

Discovery and Visibility - Identify which employees are running Clawdbot by monitoring for characteristic access patterns, process names, or the presence of .clawdbot directories on endpoints.
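Both signals mentioned above can be checked with a short, Linux-only endpoint sketch like the following; the process name and the .clawdbot marker directory are taken from this post, while the scan logic itself is a generic illustration, not a vendor detection rule:

```python
from pathlib import Path

def find_suspect_processes(needle="clawdbot", proc=Path("/proc")):
    """Scan /proc (Linux) for processes whose command line mentions `needle`."""
    hits = []
    for entry in proc.iterdir():
        if not entry.name.isdigit():
            continue  # only numeric entries in /proc are PIDs
        try:
            raw = (entry / "cmdline").read_bytes()
        except OSError:
            continue  # process exited or is inaccessible
        cmdline = raw.replace(b"\x00", b" ").decode(errors="replace").strip()
        if needle in cmdline.lower():
            hits.append((int(entry.name), cmdline))
    return hits

def find_config_dirs(home_root=Path("/home")):
    """Flag home directories containing the .clawdbot marker directory."""
    return [d / ".clawdbot" for d in home_root.iterdir()
            if (d / ".clawdbot").is_dir()]
```

In practice this logic would run as an EDR custom query or osquery pack rather than a standalone script, but the underlying signals are the same.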

Permission Controls - Review OAuth grants and API tokens connected to corporate systems like Slack, Microsoft Teams, Google Workspace, and email. Clawdbot requires broad permissions to function - identify and revoke unauthorized integrations that give AI agents access to sensitive corporate data.

AI Usage Policies - Establish clear policies on personal AI agent usage. Many employees may not understand the risks of granting an AI full system access.

Access Controls - Block or monitor connections to Clawdbot-related infrastructure. Ensure employees cannot expose gateway ports to the internet. Also monitor for connections into internal systems from Clawdbot-related infrastructure.
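Clawdbot's gateway port is configurable, so no specific port is assumed here. One generic, Linux-only way to spot an exposed gateway is to baseline TCP sockets listening on all interfaces, as in this sketch (IPv4 only; a fuller version would also parse /proc/net/tcp6):

```python
from pathlib import Path

def wildcard_listeners(proc_net_tcp=Path("/proc/net/tcp")):
    """Return local ports with a TCP socket in LISTEN state bound to 0.0.0.0.

    A personal AI-agent gateway bound this way is reachable from the network;
    corporate endpoints should normally have few such listeners.
    """
    LISTEN = "0A"  # kernel state code for LISTEN in /proc/net/tcp
    ports = []
    for line in proc_net_tcp.read_text().splitlines()[1:]:  # skip header row
        fields = line.split()
        local_addr, state = fields[1], fields[3]
        addr_hex, port_hex = local_addr.split(":")
        if state == LISTEN and addr_hex == "00000000":  # 0.0.0.0 wildcard bind
            ports.append(int(port_hex, 16))
    return sorted(set(ports))
```

Any port this returns that isn't on the endpoint's approved-services list warrants a look, whether or not it belongs to an AI agent.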

Approved Alternatives - If employees need AI automation capabilities, provide approved enterprise alternatives with proper security controls, audit logging, and IT oversight.

How Token Security Helps

Token Security provides comprehensive visibility into and control over AI agent activity and access across your organization:

1. AI Agent Discovery and Endpoint Detection. Automatically identify Clawdbot running on corporate endpoints by correlating cloud identities with third-party service connections. Know which employees are running autonomous agents before they become a security incident.

2. Access Mapping. Monitor OAuth apps, tokens, service accounts, and similar identities for access from Clawdbot infrastructure. See exactly what corporate resources each Clawdbot AI agent can access, such as Slack workspaces, email accounts, cloud storage, and more. Understand the blast radius of a compromised agent.

3. Hardening Controls. Create network policies and right-size access to limit the blast radius in case of compromise. Where we discover an identity used by Clawdbot (for example, a Slack app), we can use active remediation capabilities to lock it down.

The Bottom Line

Clawdbot represents a new category of shadow AI: powerful autonomous agents that employees can deploy without IT approval. The recent discoveries of widespread exposed instances, and of attackers targeting them, demonstrate this isn't a theoretical risk. Security teams need visibility into these tools before they become deeply embedded in workflows and start processing sensitive corporate data. But it doesn't end with visibility: security teams need control over the access given to each AI agent and the ability to lock down that access when faced with a security vulnerability or risk.

To learn more about how Token Security can identify and control Shadow AI in your enterprise, set up a demo today.
