Hacker exploits AI coding tool Cline to install rogue software
Summary
A hacker exploited a vulnerability in the AI coding tool Cline, using a prompt injection to trick it into installing the OpenClaw AI agent on users' computers. This stunt highlights the serious security risks of autonomous AI agents.
Hacker exploits AI coding tool to install rogue software
A hacker has successfully exploited a vulnerability in the popular AI coding assistant Cline to automatically install the open-source AI agent OpenClaw on users' computers. The incident, which occurred on February 19th, 2026, was a proof-of-concept stunt but highlights a critical security risk as AI agents gain more autonomy.
The hacker used a technique called prompt injection to manipulate Cline's underlying AI model, Anthropic's Claude. By feeding the model sneaky instructions, they bypassed normal safeguards and executed commands to install software.
The vulnerability was publicly known
Security researcher Adnan Khan had publicly disclosed the specific vulnerability in Cline just days before the attack. Khan's proof-of-concept demonstrated how Claude could be tricked into performing unauthorized actions within the Cline workflow.
Khan stated he had privately warned Cline's developers about the flaw weeks earlier. The exploit was only patched after he published his findings, raising questions about the project's response to security warnings.
While the hacker chose to install the benign OpenClaw agent, they could have installed any malicious software. The installed agents were not activated, preventing a more severe security incident.
Prompt injection is a growing security nightmare
This event underscores the inherent risks of granting AI agents control over systems. Prompt injections are a fundamental weakness where an AI model is manipulated via its text input to override its intended instructions.
These attacks are notoriously difficult to defend against because they exploit the core way generative AI works. As AI tools become more capable and autonomous, the potential damage from such hijackings grows exponentially.
Other incidents have shown the creative methods of attack:
- Researchers have tricked chatbots into committing simulated crimes using crafted poetry.
- The technique bypasses traditional security filters that look for specific malicious code.
Companies are scrambling for defenses
With robust technical solutions elusive, some companies are adopting strict operational limits. OpenAI recently introduced a Lockdown Mode for ChatGPT, which severely restricts the AI's capabilities to prevent data exfiltration if it is compromised.
This approach essentially trades functionality for security, limiting what a hijacked AI can do. For developer tools like Cline, which require system access to function, implementing similar locks is a complex challenge.
The Cline exploit serves as a stark warning. As Robert Hart reports, the security landscape for autonomous AI software is unraveling quickly, and the industry is still figuring out how to respond.
Related Articles

Pi for Excel: AI sidebar add-in for Excel, powered by Pi
Pi for Excel is an open-source AI sidebar for Excel. It reads and edits workbooks using models like GPT or Claude, with tools for formatting, extensions, and recovery.

Midsummer Studios shuts down, reveals unreleased life sim Burbank
Midsummer Studios, founded by ex-Firaxis director Jake Solomon, is closing. It revealed a first look at its AI-driven life sim "Burbank" before shutting down.
Stay in the loop
Get the best AI-curated news delivered to your inbox. No spam, unsubscribe anytime.

