Keeping Agentic AI in Line Through Integration of Governance and Security
IBM’s Vishal Kamat and Heather Gentile on the recently announced integration of watsonx.governance and Guardium AI Security

As enterprises scramble to find ways that AI can increase efficiency, innovate business models and automate complex processes, one solution they are exploring is agentic AI. These AI agents can act autonomously as they handle tasks, make decisions, communicate with humans and solve complex problems.
But as organizations move from experimenting with AI agents to deploying them, they face the twin challenges of governance and security. “Unfortunately, that’s where it’s kind of hitting a little bit of a bump,” Vishal Kamat, vice president of data security at IBM, tells TechChannel.
Core to the challenge is the inherent reality that adding AI agents to an IT environment leads to less direct human involvement, complicating the challenge of oversight. “With greater automation, you have less human in the loop,” Heather Gentile, director of product for watsonx.governance, tells TechChannel.
Risks of Agentic AI
Here’s a list of a few risks associated with agentic AI:
- Bias introduction – Even if bias is not introduced during model training, it may still materialize through data transformation that occurs as the models interact with agents, Gentile explains.
- Jailbreaking and prompt poisoning – Hackers may attempt to undermine AI by manipulating it to perform restricted actions, including divulging sensitive information such as AI API keys.
- Hallucinations – AI agents may invent incorrect answers or make faulty decisions. These hallucinations are at risk of spreading among agents as they communicate with one another.
- Multi-agent dependencies – There is risk of malfunction and system-wide failure as multiple agents, based on the same foundation models and sharing the same pitfalls, are tasked with solving complex problems.
- Shadow AI – The introduction of unapproved AI tools into an environment, akin to shadow IT.
Enhanced Visibility Through Integration
The above risks highlight the need for visibility into the workings of agentic AI, and the ability to quickly address issues as they arise. To adequately achieve that level of control, relationships between governance teams and security teams ought to be tightened, Kamat notes.
With that in mind, IBM recently announced new integrations between the Guardium AI Security tool and watsonx.governance. This integration, Gentile says, “brings those two views together to provide the best and most comprehensive information to the people doing the oversight.”
The capabilities of Guardium AI Security include continuously monitoring AI models, discovering shadow AI, automating penetration tests and detecting security vulnerabilities. “As this discovery happens, you want to be able to share that with your governance team to say, ‘Hey, look, here’s all the inventory,’” Kamat says.
Automation can make that process seamless. When Guardium AI Security detects new or anomalous use of AI, it can automatically trigger watsonx.governance, which brings the detected AI into inventory, aligns it with its use case and assesses its purpose to determine if it should be running within the organization.
“Unifying AI governance with AI security gives organizations the necessary context to find and prioritize risks, as well as the information to clearly communicate the consequences of not addressing them,” says Jennifer Glenn, research director for International Data Corporation’s (IDC) security and trust group.
Governance, Compliance and Uncertainty
A big factor in determining whether a use case is appropriate is compliance, a moving target in the rapidly shifting AI space. Historically, AI regulations have been more “aspirational” than specific, establishing the importance of issues like data privacy, transparency and responsible AI adoption, Gentile says. The EU AI Act, established in 2024, was the first regulation in which “you are actually able to break down requirements that can then be mapped to controls,” she explains.
watsonx.governance covers the EU AI Act and 11 other regulatory frameworks, including the EU Digital Services Act and the ISO/IEC 42001 and ISO/IEC 42002 international standards. As the technology develops, new regulations are inevitable, creating the need to be proactive rather than reactive.
“Clients who have the structure in place will be able to respond to any of these new requirements more agilely than someone who’s trying to organize it in support of a major regulatory directive,” Gentile says.
That kind of anticipation can help organizations avoid missteps as they enter the fledgling, potentially perilous world of agentic AI. “This being a very new domain, everything is evolving,” Kamat says.