IBM AI Security Experts Say Agents Require Runtime Protection
Photo by Possessed Photography on Unsplash
A recent report warns that AI agents left unprotected in runtime are vulnerable to exploitation, prompting IBM’s top security experts to call for mandatory runtime safeguards on every deployed agent.
Quick Summary
- •A recent report warns that AI agents left unprotected in runtime are vulnerable to exploitation, prompting IBM’s top security experts to call for mandatory runtime safeguards on every deployed agent.
- •Key company: IBM
IBM’s security team warned that AI agents are being deployed with system‑level privileges that give them unfettered access to files, terminals and networks—capabilities that most users cannot fully comprehend. Jeff Crume, a Distinguished Engineer at IBM Security, explained on the Security Intelligence podcast that “most people … this is going to be very opaque” when they install an agent, meaning the software can read credentials, modify files and execute arbitrary commands while the operator assumes it is merely “helping with code.” The report from ClawMoat, which reproduced the podcast discussion, notes that this exposure creates a direct pathway for attackers to hijack an agent and leverage its privileged context to move laterally across an environment (IBM report, Feb 28).
Both Sridhar Mupidi, IBM Fellow and CTO of IBM Security, and Crume converged on the principle of least privilege as the single most critical control for agents. Mupidi urged that “whether it’s open source or not, make sure they’re only allowed to do what they’re allowed to do—nothing more, nothing less.” In practice, this means restricting an agent’s file‑system, network and tool access to the minimum set required for its function, and revoking those rights as soon as they are no longer needed. The ClawMoat solution “McpFirewall” enforces tool‑level access control, while “FinanceGuard” adds domain‑specific limits such as transaction caps, ensuring that even a compromised agent cannot exceed its authorized authority (ClawMoat, Feb 28).
Prompt injection emerged as the third major risk highlighted by Nick Bradley, X‑Force Incident Command leader. Bradley described how an agent can be tricked into executing malicious instructions embedded in seemingly benign content—web pages, emails, documents or inter‑agent messages—because the model cannot distinguish between legitimate prompts and injected payloads. “It processed exactly what it was supposed to—and got busted,” he said, referencing the OpenClaw incident where an agent faithfully followed injected prompts that led to a breach. The report stresses that without runtime monitoring, such attacks can go undetected until damage is done (ClawMoat, Feb 28).
To address these vulnerabilities, IBM’s experts advocated for continuous runtime protection that monitors an agent’s behavior in real time. ClawMoat’s “Host Guardian” watches file‑system activity, credential exposure and system‑level operations, while the “Secret Scanner” intercepts credential leaks before they leave the agent’s output. A “Network Egress Logger” records every outbound connection, providing an audit trail that can flag suspicious communications. Together, these controls aim to close the gap between an agent’s powerful capabilities and the organization’s need for visibility and control (ClawMoat, Feb 28).
The broader industry context underscores the urgency of these safeguards. VentureBeat reported that IBM and AWS’s joint study found no “silver bullet” for generative AI security, emphasizing that the rapid rollout of AI agents is outpacing existing defensive measures (VentureBeat, 2024). At RSAC 2025, analysts noted a surge in demand for CISOs to manage agent‑centric threats, with more than 20 vendors unveiling agent‑based security products (VentureBeat, 2024). IBM’s call for mandatory runtime protection therefore aligns with a growing consensus that the agent era will reshape security operations and require new, enforceable standards for privilege and monitoring.
Sources
No primary source found (coverage-based)
- Dev.to AI Tag
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.