How the OpenClaw case exposes the hidden risks in Agentic AI

The recent case of OpenClaw, which allowed AI agents to move across different systems using a centralized set of credentials, has exposed the hidden risks that can come with the use of experimental AI tools. While it experienced a surge in popularity - at one point recording 2 million users - investigators found serious security issues, including potential vulnerabilities across 50,000 devices.

Jonathan Armstrong’s recent interview with the Information Security Media Group (ISMG) goes into more detail on the specifics of the case and the wider implications for Agentic AI.

In the interview, Jonathan touches on the following points:

  • How OpenClaw allows AI agents to move across systems using centralized credentials;

  • How shadow AI experimentation can expose organizations to hidden risks;

  • Why companies may need new governance and due diligence models for AI tools.

Make sure to check out the full video at: https://www.govinfosecurity.com/openclaw-exposes-hidden-risks-in-agentic-ai-a-31035
