Security and Governance Lag Behind AI Adoption, Says IBM’s ‘Cost of a Data Breach Report’

Governance and security are playing catch-up as enterprises rush to leverage AI, a technology that is generating new attack vectors and changing what organizations must do to protect their data, according to IBM’s recently published “Cost of a Data Breach Report 2025.”
The report is the result of research conducted by Ponemon Institute, which studied 6,485 breaches and interviewed 34,652 leaders in technology, security and business between March 2024 and February 2025. Now in its 20th year, IBM’s annual data breach report has become an industry benchmark.
Costs Decline, but Oversight Gap Emerges
The survey found that the global average cost of a data breach declined to $4.4 million, a 9% drop from the 2024 report and the first decline the annual study has recorded in five years. That global decline, the report notes, would have been even greater if not for the trend in the U.S., where the average cost of a breach rose 9% to reach $10.2 million.
Looking deeper into the data, IBM’s new report centers on “The AI Oversight Gap,” in which governance and security measures are taking a back seat to AI implementation. During an online presentation of the report Aug. 13, Ron Bennatan, CEO of the security platform AllTrue.ai, described the pressure that decision makers are under as their boards push for AI implementation.
“They’re saying, ‘Look, if we don’t do this, our competition’s going to outpace us.’ So there’s huge pressure,” Bennatan said. “It started two or three years ago, and the mandate was, ‘Go quickly, go experiment, go build, forget everything. Forget all the things that slow you down.’ And the things that slow you down are very often the things that make you responsible.”
Rapid AI Progression = Security Upheaval
AI is advancing faster than any previous technology, Bennatan observed. And that advance is causing upheaval in data security as hackers find new marks. Thirteen percent of organizations surveyed reported breaches of AI models and AI applications, with 60% of those breaches resulting in compromised data and 31% leading to operational disruption. The most common vectors for these breaches could be found in the “AI supply chain,” including compromised applications, APIs and plugins, IBM’s report noted.
Amid the rapid progression of AI, new vulnerabilities are emerging. That includes agentic AI, which can act autonomously and make decisions independently, mimicking human roles within an organization—and it’s the human that is typically the weakest link in data security. “If we’re now creating agents which mimic people, that means that very naturally, the attacks are going to go after the agents,” Bennatan said.
That’s going to amount to an immense attack surface, according to the predictions of Nvidia CEO Jensen Huang. During his keynote at Nvidia’s GPU Technology Conference (GTC) in March 2025, Huang said he expects there will eventually be 10 billion “digital workers” conducting tasks alongside humans. For reference, according to commonly cited estimates, there are currently about 1 billion knowledge workers in the world.
With the mass deployment of AI agents potentially looming, organizations will have to update their security controls. Of the organizations that suffered a breach of their AI systems, 97% didn’t have proper access controls in place, according to IBM’s report. That’s despite the fact that most companies now have AI governance committees, Bennatan said. “That still doesn’t mean that there’s a clear policy,” he added.
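As an illustration of what such controls might look like in code, here is a minimal role-based access control sketch in Python. The roles, the policy table and the `query_model` function are hypothetical, invented for this example rather than drawn from IBM’s report.

```python
from dataclasses import dataclass
from functools import wraps

# Hypothetical policy table: which roles may invoke which AI actions.
ROLE_POLICY = {
    "analyst": {"query_model"},
    "ml_engineer": {"query_model", "fine_tune_model"},
}

@dataclass
class User:
    name: str
    role: str

def requires_permission(action: str):
    """Reject callers whose role has not been granted the named action."""
    def decorator(func):
        @wraps(func)
        def wrapper(user: User, *args, **kwargs):
            if action not in ROLE_POLICY.get(user.role, set()):
                raise PermissionError(f"{user.name} ({user.role}) may not {action}")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@requires_permission("query_model")
def query_model(user: User, prompt: str) -> str:
    return f"model response to {prompt!r}"  # stand-in for a real model call

print(query_model(User("ana", "analyst"), "summarize the incident log"))
# query_model(User("pat", "intern"), "...") would raise PermissionError
```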
The Emergence of Shadow AI
Highly familiar with the concept of shadow IT, those governance teams must now reckon with shadow AI, or the unsanctioned, uncataloged use of AI within an organization.
Part of the danger of shadow AI is the unknown lineage of the models, which can be downloaded from repositories such as Hugging Face, said Sam Hector, product manager for data security at IBM. Among those dangers is data poisoning, which occurs when the data sets on which a model was trained have been maliciously corrupted, Hector said.
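One basic defense against unknown model lineage is to verify a downloaded artifact against a pinned checksum before loading it. The sketch below shows the idea in Python; the `model.bin` file is a placeholder, and the pinned value is simply the SHA-256 digest of an empty file so the example runs end to end.

```python
import hashlib
from pathlib import Path

# Placeholder pinned digest: the SHA-256 of an empty file.
PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def verify_artifact(path: Path, expected: str) -> None:
    """Refuse to proceed if the file's digest doesn't match the pinned value."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected:
        raise RuntimeError(f"checksum mismatch for {path}: refusing to load")

artifact = Path("model.bin")
artifact.write_bytes(b"")  # stand-in for a model downloaded from a repository
verify_artifact(artifact, PINNED_SHA256)
print("artifact verified; safe to hand to the model loader")
```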
Another concern, he added, is staff uploading sensitive data in the course of using unsanctioned AI tools. The problem of shadow AI is compounded by the fact that it increases mean time to identify (MTTI), since the off-the-books AI is inherently an unknown quantity in efforts to monitor incursions, Hector said. Add it all up, and shadow AI is “really an avenue for something bad to happen,” Bennatan said.
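MTTI itself is a simple metric: the average gap between when an incident begins and when it is identified. The short Python sketch below, with invented timestamps, makes it concrete.

```python
from datetime import datetime
from statistics import mean

# Invented incidents: (when the breach began, when it was identified).
incidents = [
    (datetime(2025, 1, 3), datetime(2025, 7, 1)),
    (datetime(2025, 2, 10), datetime(2025, 8, 20)),
]
mtti_days = mean((found - began).days for began, found in incidents)
print(f"MTTI: {mtti_days:.0f} days")  # unmonitored shadow AI stretches this gap
```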
A Double-Edged Sword
The good news is that AI is also helping to defend against attacks. Organizations that used AI and automation in security operations saved an average of $1.9 million on data breach costs, IBM’s report stated. The survey results showed that nearly a third of organizations were using AI extensively for security, but that share rose only slightly over last year, suggesting that such AI uptake may have stagnated.
In addition to being a tool of defense, AI is also a weapon for bad actors. Hackers used AI in 16% of breaches, often for phishing and deepfake attacks, according to IBM’s report. “AI has been implemented for years to improve security use cases,” Hector said, “but what we’re seeing now with generative AI is that it’s really increasing the size of the organization’s attack surface, especially with the rapid adoption that we’ve seen.”
AI Security Tips
Hector and Bennatan’s recommendations for security in the AI age included the following:
- Fortify identities – “Identity is this new perimeter to organizations, but often what we’re seeing with our clients is that it’s done in an incredibly fragmented way,” Hector said. To address this, he advocates an identity fabric approach in which multiple identity access systems are integrated. “It means you can use really useful techniques like AI across all environments to detect anomalies in behavior without ripping and replacing the entire identity stack,” Hector added. (A rough sketch of this kind of anomaly detection follows this list.)
- Return to the basics – Use encryption at rest and in transit for AI data flows, and implement strict access controls around AI, especially when it is interacting with APIs and humans, Hector advised. “I think organizations need to treat AI data flows like regulated data flows,” he said. (A minimal encryption sketch also follows this list.)
- Connect security and governance – Governance and security are usually the responsibilities of different stakeholders, but the two disciplines share so much in common that keeping them separate is a disadvantage, Bennatan noted. “There’s so much overlap between AI security and AI governance that you really need to take an approach where you don’t duplicate efforts,” he said.
- Appreciate the potential of AI to improve security – “Whether it’s triage and whether it’s just going over findings/issues/alerts, all these things can become way more effective with AI,” Bennatan said.
- Don’t be scared into inaction – While embracing AI opens up new vulnerabilities for an organization, that’s no reason not to do it, Bennatan stressed. “To be very clear, the fact that it’s dangerous is not going to mean we’re not going to do it, because we are going to do it,” he said. “Because from a business perspective, it holds so much promise.”
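To make the identity fabric bullet concrete, here is a rough Python sketch of behavioral anomaly detection over identity events. The login features, the synthetic data and the use of scikit-learn’s IsolationForest are all illustrative assumptions, not a description of IBM’s tooling.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Baseline identity events: working-hours logins with modest data transfer.
normal = np.column_stack([
    rng.normal(10, 2, 500),   # hour of day
    rng.normal(50, 15, 500),  # MB transferred
])
# A few suspicious events: small-hours logins moving far more data.
suspect = np.array([[3.0, 400.0], [2.0, 350.0]])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(suspect))  # -1 flags an anomaly, 1 a normal event
```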
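And for the back-to-basics advice, a minimal sketch of encrypting an AI data flow at rest might look like the following, assuming the third-party `cryptography` package. The key handling is deliberately simplified (a production system would pull keys from a managed key store), and TLS would cover the in-transit leg.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, fetched from a managed key store
fernet = Fernet(key)

prompt_log = b"user prompt: summarize Q3 incident reports"
ciphertext = fernet.encrypt(prompt_log)  # what would land on disk
assert fernet.decrypt(ciphertext) == prompt_log
print("round trip succeeded; plaintext never stored")
```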