What Banking Leaders Say About AI Governance

April 13, 2026
By Galina Korshunova

AI Governance within Banks Is Becoming a System Capability Rather Than Just a Compliance Task

If you listen closely to how banking leaders are talking about AI in 2026, the tone has changed. The conversation has moved away from high-level principles and into something much more grounded—how AI is actually being used, supervised, and controlled inside the business today.

A year or two ago, AI governance was mostly framed as a policy problem. Banks focused on responsible AI frameworks, model validation standards, and governance committees. All of that still exists, but it’s no longer where the real tension is.

What leaders are grappling with now is far more practical: what does governance look like when AI is embedded in everyday decisions?

Today, AI no longer sits in isolated systems; its reach spills over into everyday work. It's drafting analysis, supporting compliance workflows, informing risk decisions, and increasingly acting as a first layer of work across teams. And once it's operating at that level, governance begins to change shape, and at times even its definition. Below are a few different approaches to AI governance from current banking leaders.

It’s a systems design challenge

In a recent discussion on governing AI in banking, TD Bank’s AI leadership emphasized something that sounds obvious, but is surprisingly difficult to operationalize: governance starts with knowing exactly what data is used, who accesses it, and how models interact with systems.

That distinction matters because most governance frameworks still rely heavily on documentation—model inventories, validation reports, policy attestations. Useful, but static. What leaders are describing instead is something more dynamic: lineage that’s captured as work happens.

If an AI system generates a compliance summary, flags a risk, or informs a credit decision, the increasing expectation is that you can trace that output back:

  • to the underlying data sources;
  • to the transformations applied;
  • to the logic or model behavior involved.

And all of this should happen in real time, not after the fact. It’s a systems design challenge.
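The traceability requirement above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any bank's actual system: the idea is that an AI output and its lineage record are produced together, at generation time, rather than reconstructed afterward. All names here (`LineageRecord`, `generate_summary`, the source and model labels) are invented for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    """Provenance attached to a single AI output, captured as the work happens."""
    output_id: str
    data_sources: list[str]     # the underlying data sources the output drew on
    transformations: list[str]  # the transformations applied to that data
    model_version: str          # the model or logic that produced the output
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def generate_summary(sources: list[str], model_version: str) -> tuple[str, LineageRecord]:
    """Produce an output together with its lineage, not as an afterthought."""
    # Stand-in for the real model call.
    summary = f"Compliance summary drawn from {len(sources)} sources"
    record = LineageRecord(
        output_id="out-001",
        data_sources=sources,
        transformations=["join", "redact-pii", "summarize"],
        model_version=model_version,
    )
    return summary, record

summary, record = generate_summary(["kyc_db", "txn_log"], "risk-llm-v3")
# Any output can now be traced back to its data, transformations, and model.
```

The design choice the executives are describing is exactly this coupling: the lineage record is a return value of the generation step itself, so there is no separate documentation process that can fall out of date.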

When you get down to the details, governance looks a lot less like policy and a lot more like guardrails. Naveen Kumar, Head of Insider Risk, Analytics, and Detection at TD Bank, puts this shift into practical terms when describing role-based AI:

“I think role-based AI is like a polite bouncer. It only provides information based on role—if there’s an insider investigation going on, finance has nothing to know about it. Putting it into the AI shouldn’t return anything. Guardrails are an invisible force, period. These are rules AI simply cannot break, no matter what prompt it receives. That stops people from gathering information by asking a series of questions and revealing things an attacker shouldn’t know.”

AI is not just a tool; it’s part of the workforce

Another interesting example comes from BNY Mellon, where CEO Robin Vince has been unusually explicit about how the bank is operationalizing AI. In a recent interview, Vince described how BNY has deployed more than 140 AI systems internally—not as background tools, but as active contributors to work. 

He framed it simply: “All digital employees report to a human manager.”

That line reframes the entire governance model. These systems aren’t treated as abstract models that require periodic validation, but rather like individuals whose work needs to be reviewed, supervised, and improved over time.

And in a longer interview with TIME, he emphasized the visibility this creates: “They’ve got very good audit trails…you can see everything that they’re doing.” 

BNY has gone further than most by formalizing this structure. Roughly 100 human managers are assigned to oversee these AI “employees,” reviewing outputs and ensuring accountability.

“[They’re] held accountable for their work—with performance reviews.”

That’s governance, but not in the traditional sense. It’s governance as management. And once you see it that way, it becomes clear why older governance models are starting to break down. Static validation processes don’t map cleanly to systems that are continuously producing outputs and influencing decisions.

“You can’t layer governance on top of bad architecture”

Another theme that comes up repeatedly in executive commentary is that governance isn’t failing because of models. It’s failing because of systems. Leaders are increasingly pointing to data quality, infrastructure, and workflow integration as the real bottlenecks.

Deutsche Bank’s Private Bank CIO, Christian Rhino, was direct about this in a recent interview: the challenge isn’t just defining governance, it’s enabling it through the right infrastructure.

You can’t explain outputs if your data is fragmented.
You can’t enforce controls if your systems aren’t connected.

Or as he essentially put it, you can’t just “add governance” on top of complexity and expect it to work. That’s why a lot of the real work happening right now isn’t flashy AI deployment—it’s foundational: cleaning up data, modernizing architecture, simplifying how systems interact.

Not because it’s nice to have, but because without it, governance doesn’t scale.

Governance is moving into the decision loop

AI governance used to sit outside the workflow: models were developed, validated, approved, and then monitored. There was a clear separation between the system and the decision.

That separation is disappearing.

Executives across institutions are describing a shift where AI outputs are treated as a first draft—something that needs to be reviewed and challenged before being acted on. In practice, that means governance is happening at the point of decision, not before or after it.

You can see this broader trend reflected in how banks are deploying AI across hundreds of use cases. JPMorgan CEO Jamie Dimon, for example, has noted that the bank already has over 600 AI use cases in production, many of which are directly tied to operational and risk processes. 

“We use AI for risk, fraud, bargaining, underwriting, note taking, idea generation, error reporting, reducing errors…”

The scale matters because once AI is embedded that deeply, governance can’t rely on centralized review alone. It has to be distributed across the people actually using it.

The recent leadership changes at JPMorgan have focused heavily on aligning AI strategy with data and operations. The bank appointed a new COO, Guy Halamish, for its commercial and investment bank, specifically to oversee data and AI together, with a mandate to improve data quality and infrastructure readiness. 

That move reflects a broader realization across the industry: you can’t govern what you can’t see.

If data is fragmented, if systems don’t connect, if lineage isn’t captured as information flows through workflows, then governance becomes guesswork. You might have policies in place, but you don’t have control in practice.

This is why so much of the real effort right now is happening below the surface. Not in launching new AI tools, but in restructuring the underlying architecture so that AI outputs can actually be traced, explained, and audited.

Leadership is getting more directly involved

Another signal that AI governance is becoming operational is how banks are organizing around it.

HSBC recently appointed its first Chief AI Officer, David Rice—a move that reflects how central AI and its governance have become to the bank’s strategy. 

But what’s notable isn’t just the creation of the role. It’s how leadership is framing AI more broadly. CEO Georges Elhedery has emphasized that AI will be deployed across the organization to drive efficiency and personalization, but with human oversight remaining a core requirement. 

That balance—scale with control—is exactly where governance is being redefined.

At the same time, responsibility isn’t staying centralized. It’s being pushed into the business. Teams using AI are expected to understand its outputs, challenge them, and take accountability for decisions. So governance is becoming both more structured at the top and more embedded across the organization.

In a recent WSJ CIO interview, KeyBank CRO Mo Ramani describes the rise of the “digital CRO,” a role where AI is embedded directly into how risk decisions are made.

As Ramani puts it, “the future of risk management is one where human judgment is augmented by AI and data-driven insights.” 

The implication is subtle but important. Governance isn’t about standing outside the system and reviewing outputs after the fact. It’s about operating alongside AI, where risk leaders are continuously interpreting, validating, and acting on AI-driven insights as part of the workflow itself. 

That’s a very different posture from traditional model governance, and it reinforces the broader pattern emerging across banks: governance is moving closer to where decisions actually happen, not further away from them. 

You can see all of this converging in how boards are engaging with AI. The questions leaders are being asked have changed. They’re no longer about whether policies exist but about whether governance actually works in practice.

Can you explain how a decision was made?
Can you trace it back to its source?
Can you intervene if something goes wrong?

These are system-level questions. And they’re forcing organizations to move beyond governance as documentation toward governance as something that’s embedded in how work happens.

When you step back, the direction is clear. AI governance in banking is no longer sitting in policy frameworks or model documentation. It’s moving directly into the flow of work: into how decisions are made, outputs are reviewed, and accountability is enforced.

The Bottom Line

What’s striking across these leadership conversations is not just that banks are investing in AI—it’s how quickly they’ve had to rethink what governance actually means.

Robin Vince at BNY is effectively describing a model where AI is managed like a workforce, with oversight, accountability, and continuous supervision built into how work gets done. Jamie Dimon has made it clear that AI is already deeply embedded across hundreds of use cases at JPMorgan, touching core operational and risk processes. And at HSBC, the creation of a Chief AI Officer role signals that governance is no longer a side function—it’s becoming central to how the organization runs.

Taken together, these point to a broader shift: AI governance is moving away from being something defined in policies and enforced at checkpoints. It’s becoming something that has to operate inside the system itself, where outputs are generated, decisions are made, and risk actually materializes.

That’s why so many of these leaders are converging on the same underlying idea, even if they describe it differently: governance is no longer about controlling models in isolation. It’s about maintaining visibility, accountability, and oversight across how AI is used in practice.

And that’s a much higher bar.

Because once AI becomes part of the workflow—producing analysis, shaping decisions, acting as a first layer of work—governance can’t rely on periodic validation or static documentation. It has to be continuous. It has to be traceable. And it has to be embedded in how people interact with AI outputs every day.

That’s the transition banks are in right now. Not figuring out whether to govern AI, but figuring out how to do it in a way that actually holds up under real-world conditions. And the institutions that get there won’t be the ones with the most comprehensive frameworks on paper. They’ll be the ones that can demonstrate, clearly and consistently, how AI-driven decisions are produced, reviewed, and acted on—end to end.

Read more on AI Governance:

Practical Guide for Building an AI Governance Framework
