Governing Agentic AI: The Mainframe DBA as the Architect of Autonomous Data Workflows

Craig Mullins explains how the mainframe DBA is becoming the link between autonomous systems and data integrity

TechChannel AI

In my recent columns, I’ve discussed the transformation of the mainframe DBA from a mere “custodian” of data to a strategic overseer. But as we progress further into the realm of the AI-assisted organization, the stakes have shifted yet again. We are moving past simple generative AI, which mostly answers questions, into the era of agentic AI: autonomous systems that can execute multi-step workflows. Using APIs, these workflows can modify database state without direct human intervention.

For the mainframe world, where “integrity” isn’t just a buzzword but a way of life, this presents a unique challenge. If an AI agent has the power to autonomously adjust a Db2 parameter or setting, or even trigger a CICS transaction based on its own “reasoning,” who is actually in charge?

The Agentic Shift: From Retrieval to Action

Traditional AI was largely about retrieval-augmented generation (RAG). It read your data and gave you a summary. Agentic AI is different; it uses “tools.” In the context of the IBM Z ecosystem, these tools are often SQL statements, stored procedures or administrative commands.

While the efficiency gains are tempting, the risks are non-trivial. An autonomous agent doesn’t understand the tribal knowledge of a 40-year-old schema. It doesn’t know that updating Table A without checking Table B violates a business rule that isn’t codified in a declarative database constraint (referential integrity).

Technical Guardrails: Hardening Db2 13 for AI Agents

In a world where AI “agents” are generating their own SQL, we can no longer rely on the assumption that the application code was vetted during a code review or sprint. Instead, the DBA must bake security into the database layer itself. Db2 13 for z/OS provides several sophisticated capabilities that can act as sanity checks for autonomous workloads.

  • Trusted contexts and roles: We should never allow an AI agent to connect with high-level administrative privileges. By using trusted contexts, a DBA can define a specific security perimeter (based on IP address, job name or encryption requirements). When an AI agent connects, it is restricted to a specific role. This ensures that, for example, if an agent “hallucinates” a DROP TABLE command, the database engine will reject it before it ever executes.
  • Row and column access control (RCAC): Not all data is for AI eyes. RCAC allows us to define “masks” and “permissions” that are transparent to the application. If an agent is tasked with analyzing “customer trends,” RCAC can ensure that personally identifiable information (PII) is masked automatically. For example, the agent can see the trend but not the Social Security Number.
  • Label-based access control (LBAC): While traditional security controls who can access a table, LBAC (also known as multi-level security) allows the DBA to control access at the cell level based on data sensitivity labels. For an AI agent, this is a critical safety net. You can assign security labels (e.g., “Top Secret,” “Internal Only,” or “Regional-US”) to both the data rows and the AI agent’s ID. If an agent tasked with “General Research” tries to scan a table, Db2 will automatically filter out any rows with a “Highly Sensitive” label. Unlike a WHERE clause that an AI might forget, LBAC is enforced by the database engine, ensuring the agent literally cannot “see” data it isn’t cleared for.
  • SQL Data Insights (SQL DI): Introduced in Db2 13, SQL Data Insights allows for “semantic” queries. Instead of an agent trying to join five tables to find similar customers, the DBA can expose built-in functions like AI_SIMILARITY or AI_COMMONALITY. By encouraging agents to use these governed, built-in AI functions rather than generating complex, raw SQL, we can reduce the risk of runaway queries that consume excessive CPU capacity.
  • Profile tables for monitoring: Use Db2 profile tables to set strict thresholds on AI-driven connections. If an agent begins an infinite loop of queries, the DBA can ensure that the connection is automatically throttled or terminated based on CPU or monitor warnings, protecting the rest of the subsystem.
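The trusted-context approach can be sketched in DDL. This is illustrative only; the role, auth ID and table names (AGENT_READER, AIAGENT, CUST.TRENDS) are placeholders, and the address attribute should reflect where your agent framework actually runs.

```sql
-- A deliberately low-privilege role: read-only on one table.
CREATE ROLE AGENT_READER;
GRANT SELECT ON CUST.TRENDS TO ROLE AGENT_READER;

-- Bind the agent's auth ID to that role, and only when it connects
-- from a known address. Anything the role cannot do, the agent cannot do.
CREATE TRUSTED CONTEXT AI_AGENT_CTX
  BASED UPON CONNECTION USING SYSTEM AUTHID AIAGENT
  ATTRIBUTES (ADDRESS '10.1.2.3')
  DEFAULT ROLE AGENT_READER
  ENABLE;
```

Because the role holds no DDL or administrative privileges, a hallucinated DROP TABLE fails with an authorization error rather than destroying data.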
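A hedged sketch of the RCAC masking idea: here CUST.CUSTOMER, its SSN column and the RACF group AIAGENTS are all hypothetical, and the SSN layout (“nnn-nn-nnnn”) is assumed.

```sql
-- Agents (members of the AIAGENTS group) see only the last four digits;
-- vetted human users see the full value. The mask is transparent to the
-- application -- no query rewrite required.
CREATE MASK CUST.SSN_MASK ON CUST.CUSTOMER
  FOR COLUMN SSN RETURN
    CASE WHEN VERIFY_GROUP_FOR_USER(SESSION_USER, 'AIAGENTS') = 1
         THEN 'XXX-XX-' || SUBSTR(SSN, 8, 4)
         ELSE SSN
    END
  ENABLE;

ALTER TABLE CUST.CUSTOMER ACTIVATE COLUMN ACCESS CONTROL;
```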
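As a sketch of the SQL Data Insights approach, a governed similarity query might look like the following. This assumes CUST.CUSTOMER has already been enabled for AI query (i.e., a model has been trained on it through SQL DI); the column and key value are illustrative.

```sql
-- Find the ten customers most similar to customer 0034591,
-- without the agent generating a five-way join on its own.
SELECT C.CUSTNO, C.NAME,
       AI_SIMILARITY(C.CUSTNO, '0034591') AS SCORE
  FROM CUST.CUSTOMER C
 ORDER BY SCORE DESC
 FETCH FIRST 10 ROWS ONLY;
```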
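A profile-table sketch for throttling agent connections follows. The profile ID, auth ID and threshold are hypothetical; verify the exact column layout and supported keywords against your Db2 13 function level before using anything like this.

```sql
-- Filter: any thread arriving under the agent's authorization ID.
INSERT INTO SYSIBM.DSN_PROFILE_TABLE
       (PROFILEID, AUTHID, PROFILE_ENABLED)
VALUES (101, 'AIAGENT', 'Y');

-- Action: cap the agent at 20 concurrent threads; excess connections
-- are queued or rejected rather than swamping the subsystem.
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
       (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2)
VALUES (101, 'MONITOR THREADS', 'EXCEPTION', 20);

-- Then activate the profiles with the -START PROFILE command.
```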

Why the ‘Human-in-the-Loop’ is Non-Negotiable

We cannot simply “set and forget” these agents. The DBA must act as the ultimate architect of the guardrails that keep these agents from causing digital havoc. This “Human-in-the-Loop” requirement isn’t about slowing things down; it’s about governance.

  1. Semantic guardrails: The DBA must define the sandbox where an agent can play. This means using statement-level invalidation and fine-grained access control to ensure an agent can’t accidentally drop a tablespace while trying to optimize it.
  2. The audit of intent: Standard database auditing tells us what happened, but in an agentic world, we also need to know why. The DBA must ensure that agent logs are correlated with database logs so we can trace an erroneous update back to the specific AI thought process that triggered it.
  3. Validation of SQL generation: We’ve all seen GenAI “hallucinations.” If an agent generates dynamic SQL, the DBA’s role is to implement automated validation layers that act as a “pre-flight check” before that SQL hits the production subsystem.
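The “pre-flight check” in point 3 can be as simple as a policy filter that runs before agent-generated SQL ever reaches Db2. This is a minimal sketch, not an IBM tool; the blocked-verb list and the WHERE-clause rule are illustrative policies a DBA would tailor.

```python
import re

# Statement types an autonomous agent should never be allowed to emit.
BLOCKED = re.compile(
    r"^\s*(DROP|TRUNCATE|ALTER|GRANT|REVOKE|CREATE)\b", re.IGNORECASE
)
# DML is allowed only with a WHERE clause, to stop table-wide changes.
DML = re.compile(r"^\s*(UPDATE|DELETE)\b", re.IGNORECASE)

def preflight(sql: str) -> tuple[bool, str]:
    """Return (allowed, reason) for one agent-generated statement."""
    if BLOCKED.match(sql):
        return False, "DDL/authorization statements are reserved for humans"
    if DML.match(sql) and not re.search(r"\bWHERE\b", sql, re.IGNORECASE):
        return False, "UPDATE/DELETE without a WHERE clause rejected"
    return True, "ok"
```

In practice this layer sits between the agent framework and the database driver, so a rejected statement never consumes a thread on the subsystem.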
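The “audit of intent” in point 2 reduces to a join: every agent connection is stamped with a correlation ID, and audit records are matched back to the agent’s logged goal. A sketch, with hypothetical record shapes:

```python
def trace_intent(audit_rows: list[dict], agent_log: list[dict]) -> list[dict]:
    """Attach the agent 'goal' that caused each database change, using a
    correlation ID shared by the agent log and the audit trail."""
    goals = {entry["corr_id"]: entry["goal"] for entry in agent_log}
    return [
        {**row, "goal": goals.get(row["corr_id"], "<unattributed>")}
        for row in audit_rows
    ]
```

Any change that comes back “unattributed” is itself a governance finding: a write hit the database with no recorded AI reasoning behind it.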

As I’ve said many times before: data is the lifeblood of the organization, and the DBA is the heart that keeps it pumping correctly. If we allow autonomous agents to operate without a human architect at the helm, we’re ripping the heart out of the system. In that case, we aren’t just modernizing; we’re gambling with the most valuable asset the enterprise owns.

The New DBA Skillset

To govern agentic AI, the modern DBA needs to move beyond EXPLAIN paths and buffer pool tuning. You must evolve into a data architect for AI, mastering the specific disciplines that allow autonomous agents to operate safely within the enterprise.

  • Prompt engineering for data: Unlike general-purpose prompt engineering, this is a technical discipline focused on grounding AI in factual database structures. The DBA must learn to craft system prompts that act as a SQL Specification for the agent. This means things like defining mandatory join conditions, enforcing index usage and prohibiting high-impact operations like Cartesian products. By treating the prompt as a schema-level constraint, you ensure the AI’s thought process is aligned with the physical reality of your Db2 environment.
  • AI orchestration: As agents begin to link multiple tasks together (such as fetching data from a VSAM file, processing it through a watsonx model and then updating a Db2 table), the DBA must become the conductor of these workflows. This involves managing the state of these long-running autonomous processes and implementing circuit breakers that can pause an agentic chain if it deviates from expected performance or security baselines.
  • Metadata management: Metadata is the compass by which AI navigates. Without rich, governed metadata, an agent cannot distinguish test data from production data. The modern DBA must curate a centralized data catalog that includes data lineage, business definitions and sensitivity tags. By providing the AI with high-fidelity metadata, you move from simply managing rows and columns to managing the context and meaning that prevents the agent from making costly, uneducated guesses.
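The circuit-breaker idea from the orchestration bullet can be sketched as follows. The thresholds and the CPU-based trigger are illustrative assumptions, not from any particular product.

```python
class AgentCircuitBreaker:
    """Pause an agentic chain after repeated deviations from a baseline."""

    def __init__(self, max_violations: int = 3):
        self.max_violations = max_violations
        self.violations = 0
        self.open = False          # an open breaker means the chain is paused

    def record(self, cpu_seconds: float, baseline: float) -> bool:
        """Record one workflow step; return True if the chain may continue."""
        if self.open:
            return False
        if cpu_seconds > baseline:
            self.violations += 1
            if self.violations >= self.max_violations:
                self.open = True   # trip: hand the chain to a human reviewer
        else:
            self.violations = 0    # a healthy step resets the count
        return not self.open
```

The same pattern works for security baselines (unexpected objects touched, unexpected row counts); the point is that the chain halts mechanically instead of relying on the agent to notice its own drift.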
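The metadata bullet implies a simple enforcement rule: no catalog entry, no access. A sketch with a hypothetical in-memory catalog (real deployments would query a governed data catalog instead):

```python
# Hypothetical catalog entries: table name -> governance tags.
CATALOG = {
    "CUST.CUSTOMER": {"env": "production", "sensitivity": "pii"},
    "TEST.CUSTOMER": {"env": "test", "sensitivity": "synthetic"},
}

def agent_may_read(table: str, clearance: set[str]) -> bool:
    """Permit access only when the table is cataloged AND its sensitivity
    tag is within the agent's clearance. Uncataloged tables are denied,
    so an agent cannot stumble into data nobody has classified."""
    meta = CATALOG.get(table)
    return meta is not None and meta["sensitivity"] in clearance
```

Denying uncataloged objects by default is the key design choice: it turns missing metadata from a silent gap into a visible blocker that forces the catalog to be kept current.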

From Doing to Overseeing

We are at a crossroads in the history of database administration. The arrival of agentic AI does not signal the end of the DBA; rather, it marks the end of the DBA as a manual laborer. For decades, we have spent our days in the trenches of REORG, RUNSTATS, and manual performance monitoring. Those days are waning, replaced by a mandate for high-level architectural oversight.

If we embrace this shift, we become the essential link between autonomous intelligence and enterprise integrity. But if we ignore it, or worse, if we allow these agents to operate in a vacuum, we risk the very stability that has made the mainframe the system of record for over 50 years.

The future belongs to the DBA who can architect the guardrails, manage the metadata and orchestrate the workflows of tomorrow. It’s time to step out of the curmudgeon’s corner and take your place as the chief auditor of the autonomous enterprise.

Key Takeaways for the Agentic Era

1. Shift from “code review” to “rule review.” Since agents generate code on the fly, DBAs must focus on governing the rules and metadata that guide the agent, rather than individual SQL statements.

2. Leverage Db2 13 Native AI. Use the built-in SQL Data Insights functions to provide agents with pre-validated, high-performance AI tools rather than letting them wing it with dynamic SQL.

3. Enforce least privilege via roles. Never use a generic service ID for AI. Use Trusted Contexts to bind agents to specific, low-privilege roles that can only execute predefined stored procedures or limited DML.

4. Audit the why, not just the what. Enhance your auditing strategy to capture the context of AI decisions. If an agent modifies data, your logs should link that change back to the specific AI “thought” or “goal” ID.

5. Safeguard the system. Use Db2 profile tables to prevent autonomous agents from triggering a CPU spike that impacts the entire enterprise’s mainframe bill.
