OpenClaw security fears lead Meta, other AI firms to restrict its use
Summary
Companies are banning OpenClaw, an AI tool that controls computers, over security fears. While some explore its potential, many restrict use to protect sensitive data.
Companies are banning a powerful AI tool
Tech companies are banning employees from using a new, experimental AI agent called OpenClaw on corporate hardware. Executives warn the software poses a high security risk, despite its potential to automate complex computer tasks.
Jason Grad, cofounder and CEO of web proxy company Massive, issued a warning to his 20 employees in late January. He told them to keep the tool, then known as Clawdbot, off all company hardware and away from work-linked accounts.
A Meta executive also recently told his team to keep OpenClaw off their regular work laptops or risk losing their jobs. He believes the software is unpredictable and could lead to a privacy breach, speaking anonymously to discuss internal concerns.
What is OpenClaw?
OpenClaw is a free, open-source AI agent launched solo by founder Peter Steinberger last November. Its popularity surged last month as coders added features and shared their experiences online.
The tool requires basic software knowledge to set up. Once running, it can take control of a user's computer with limited direction to perform tasks like:
- Organizing files
- Conducting web research
- Shopping online
Last week, Steinberger joined OpenAI. The ChatGPT developer says it will keep OpenClaw open source and support it through a foundation.
Security fears prompt immediate bans
Cybersecurity professionals have urged companies to strictly control workforce use of OpenClaw. The bans show firms are prioritizing security over experimentation with emerging AI.
"Our policy is, 'mitigate first, investigate second' when we come across anything that could be harmful," Grad says. His warning went out before any employees had installed the software.
At software company Valere, an employee posted about OpenClaw on an internal Slack channel on January 29. Company president Guy Pistone says the CEO quickly responded that use was "strictly banned."
"If it got access to one of our developer’s machines, it could get access to our cloud services and our clients’ sensitive information," Pistone says. This includes credit card data and private GitHub codebases.
Testing the tool in controlled environments
Some companies are exploring OpenClaw in isolated, secure settings. A week after the ban, Pistone allowed Valere's research team to run it on an old, disconnected computer.
The goal was to identify flaws and potential security fixes. The team later advised key safeguards, including:
- Limiting who can give orders to the AI agent
- Exposing it to the internet only with a password-protected control panel
In a report, the Valere researchers noted users must "accept that the bot can be tricked." For example, if set to summarize email, a hacker could send a malicious email instructing the AI to share files from the user's computer.
Pistone has given a team 60 days to investigate building proper safeguards. "Whoever figures out how to make it secure for businesses is definitely going to have a winner," he says.
Alternative approaches to the risk
Not every company is issuing a formal ban. Some are relying on existing cybersecurity protections. The CEO of a major software company says only about 15 programs are allowed on corporate devices, and anything else should be automatically blocked.
He doubts OpenClaw could operate on the company's network undetected. "While OpenClaw is innovative, I doubt that it will find a way to operate on the company’s network undetected," the anonymous executive said.
Jan-Joost den Brinker, CTO of compliance software firm Dubrink, bought a dedicated machine not connected to company systems for employees to experiment with OpenClaw. "We aren’t solving business problems with OpenClaw at the moment," he says.
The commercial allure remains strong
Despite the risks, the tool's potential is too great for some to ignore. Massive, the company that banned it internally, is cautiously exploring commercial applications.
Grad says the company tested OpenClaw on isolated cloud machines. Last week, it released ClawPod, a service that lets OpenClaw agents use Massive's infrastructure to browse the web.
OpenClaw "might be a glimpse into the future. That’s why we’re building for it," Grad says. The tension between groundbreaking potential and significant security risk is defining this new tool's rocky introduction to the tech world.
Related Articles
‘An AlphaFold 4’ – scientists marvel at DeepMind drug spin-off’s exclusive new AI
Isomorphic Labs, a Google DeepMind spin-off, has developed a proprietary AI model, IsoDDE, that predicts protein-drug interactions for drug discovery, but unlike AlphaFold, it is not being shared with the broader scientific community.
OpenAI’s Sam Altman: Global AI regulation ‘urgently’ needed
OpenAI's Sam Altman urgently calls for global AI regulation and an international oversight body for safe, fair development.
Stay in the loop
Get the best AI-curated news delivered to your inbox. No spam, unsubscribe anytime.
