Obsidian Security researchers are monitoring an emerging risk associated with Moltbot (formerly known as Clawdbot), a personal AI assistant that is increasingly being connected into corporate SaaS environments.
Moltbot is designed to act as a human-language agent that users can configure with integrations (into apps like Slack, Gmail, Notion, Github), and then instruct to perform automated actions. However, security researchers have demonstrated that Moltbot is highly susceptible to prompt injection attacks, meaning it may be unable to reliably distinguish legitimate user commands from malicious instructions. For example, malicious instructions delivered through an email could cause the agent to execute unintended actions, such as deleting emails, forwarding sensitive data outside company addresses, or triggering unauthorized workflows.
In addition, Moltbot deployments are frequently misconfigured. In multiple observed cases, users have inadvertently exposed Moltbot’s administrative configuration consoles to the internet (e.g. via websites like Shodan). In the case of vulnerability, attackers may be able to publicly access these configuration consoles and exfiltrate API keys to whatever downstream apps Moltbot is integrated with.
This risk highlights that organizations should proactively discourage the use of untested consumer-grade AI agents in enterprise environments, exercise caution around high-permission automation tools, and prioritize solutions that offer stronger security controls, governance, and visibility.
What customers should immediately consider:
Start in minutes and secure your critical SaaS applications with continuous monitoring and data-driven insights.