Healthcare organizations are moving past the stage where AI is limited to isolated experiments. As adoption spreads into clinical operations, utilization management, revenue cycle, care management, member services, and other administrative support functions, AI is reshaping work that sits inside regulated processes. At the same time, the regulatory environment is not standing still. Federal attention is clearly increasing, with HHS actively seeking input on how existing regulations affect AI adoption in clinical care, while CMS is already testing models that combine enhanced technologies with human clinical review in care and payment-related workflows.
So the big question for healthcare leaders today is how an AI governance framework gets translated into operating reality. Organizations that govern AI in name only will publish a policy and call it progress. More advanced organizations that govern it in practice will build ownership, turn principles into department-level rules, and create standing structures that can keep pace as use cases and regulatory expectations evolve.
In the first article, we made the case for why AI governance, especially in healthcare, cannot wait for perfect clarity. We argued that the strongest frameworks are built on existing principles, such as patient safety, privacy, security, transparency, and health equity, rather than on any single tool or moment in time. That foundation remains essential, and once those principles are in place, the next step is to translate them into the operational structures, roles, and workflow decisions that make governance a reality across the organization. That translation is what tells a frontline team what should happen when AI enters a real process that affects patients, members, documentation, or reimbursement.
Below, we discuss how to move from framework to function.
One of the fastest ways an AI governance effort loses momentum is when everyone is involved in theory, but no one clearly owns it in practice.
In healthcare, that ambiguity creates real risk. AI touches legal, compliance, IT, privacy, security, clinical leadership, operations and, often, outside vendors or delegated entities. If no single role is accountable for connecting those dots, governance becomes fragmented almost immediately.
That is why many organizations will need an AI Officer, or at least an equivalent leader with clear authority, even if the title varies. The point is less about ‘org chart branding’ and more about giving governance a nerve center and a communications hub. Someone needs to maintain a living inventory of AI tools already in use, tools being piloted, and tools being proposed. That inventory should not stop at direct internal deployments. It should extend to awareness of AI activity within vendors, partners, and delegated entities, because in healthcare, the operational footprint of AI often extends well beyond the four walls of the organization.
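To make the idea of a living inventory concrete, here is a minimal sketch in Python of what a single inventory record might capture. The field names, status values, and example categories are illustrative assumptions, not a prescribed schema:

from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PROPOSED = "proposed"
    PILOT = "pilot"
    IN_USE = "in_use"
    RETIRED = "retired"

@dataclass
class AIInventoryRecord:
    # One entry in the living inventory the AI Officer maintains.
    tool_name: str
    owner_department: str                 # the team accountable for this use
    status: Status                        # proposed, pilot, in use, or retired
    deployed_by_third_party: bool         # True if the AI runs inside a vendor or delegated entity
    data_categories: list[str] = field(default_factory=list)    # e.g., "PHI", "claims"
    reviews_completed: list[str] = field(default_factory=list)  # e.g., "privacy", "security"
    last_reviewed: str = ""               # date of the most recent governance review

The third-party flag matters precisely because, as noted above, so much of the AI footprint sits with vendors and delegated entities rather than inside the organization itself.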
This role should also coordinate information flow across the enterprise. New use cases should not drift into production because one department found a promising tool and moved quickly. There should be a methodical intake process that routes proposed use cases through the appropriate reviews, whether that means privacy, security, legal, clinical validation, or compliance. And at regular intervals, executive leadership should be able to see where the organization stands: what is in use, what is under review, where risks cluster, and where operational alignment is still immature.
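Building on the hypothetical record above, the executive reporting this role owns can start as a simple roll-up of the inventory by status; here is one assumed shape for that summary:

from collections import Counter

def executive_summary(records: list[AIInventoryRecord]) -> dict[str, int]:
    # Count inventory entries by lifecycle status so leadership can see,
    # at a glance, what is in use, what is piloting, and what is proposed.
    return dict(Counter(record.status.value for record in records))

In practice the same roll-up would likely slice by department and by risk cluster as well, but the point stands: the reporting falls directly out of a well-maintained inventory.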
This is the difference between governance as a document and governance as a function. Once one role or office is clearly responsible for maintaining the ledger, coordinating the intake process, and reporting upward, the organization has a real operating center for AI oversight instead of a loose set of good intentions.
A common mistake in governance is assuming that a strong enterprise policy is enough. While necessary, it is usually only a starting point. Broad principles create consistency, but they do not answer the questions that determine whether AI is being used safely inside a specific workflow. In healthcare, those questions vary dramatically by department.
This is important because future regulatory expectations are likely to become more operational, not less. It is easy to imagine a governance program that sounds strong at the enterprise level but breaks down the moment it reaches clinical documentation, member communications, risk coding, utilization review, or care management. The issue is not whether the organization believes in privacy, safety, or accuracy. The issue is whether those principles have been translated into concrete boundaries that match how each team actually works.
The next step is to start from the organization's governance first principles and move downstream toward department-level translations of how those values are realized in practice: ensuring patient safety, protecting health information, maintaining clinical accuracy, preserving member confidentiality. From there, each department must define how AI may or may not be used within its own workflows. The standard for helping a member understand a benefits explanation is not the same as the standard for assisting with coding, summarizing clinical notes, or surfacing recommendations tied to medical necessity. Treating those uses as if they carry the same operational risk is where governance starts to become too vague to be useful.
Clinical departments (e.g., hospitalist medicine, the ED, nursing) are a good example. If AI is used in documentation workflows, questions such as who authenticates AI-drafted notes, who verifies their accuracy before they enter the record, and who is accountable for the final entry cannot stay abstract.
CMS Conditions of Participation already require medical records to be accurately written, promptly completed, and authenticated by the responsible person, which means governance cannot treat AI-generated documentation as a purely technical efficiency tool, detached from clinical accountability.
Member services and care management present a different but equally important set of boundaries. AI tools that surface or synthesize member-sensitive information may operate in areas where confidentiality expectations are especially heightened. Substance use disorder records, for example, are subject to protections under 42 CFR Part 2, which exists precisely because of the sensitivity of those records and the need to tightly govern use and disclosure. That means healthcare organizations need more than a general privacy statement. They need explicit guardrails on what data certain AI tools can access, what they can generate from that data, and which users can see those outputs in the first place.
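As one hedged illustration of what such guardrails could look like when expressed as configuration rather than prose, consider the following sketch; the data categories, output types, and role names are hypothetical examples, not a standard:

# Illustrative guardrail policy for an AI tool used in member services.
# All category, output, and role names below are hypothetical.
MEMBER_SERVICES_AI_POLICY = {
    "allowed_data": ["benefits", "claims_status", "plan_documents"],
    "prohibited_data": ["sud_records_42_cfr_part_2", "psychotherapy_notes"],
    "allowed_outputs": ["benefits_explanation", "call_summary"],
    "output_visibility": {
        "benefits_explanation": ["member_services_rep"],
        "call_summary": ["member_services_rep", "care_manager"],
    },
}

def can_access(data_category: str, policy: dict) -> bool:
    # Deny by default: a category must be explicitly allowed and never prohibited.
    return (data_category in policy["allowed_data"]
            and data_category not in policy["prohibited_data"])

The design choice worth noting is the deny-by-default posture: anything not explicitly allowed is out of reach, which is the right starting point for data covered by heightened protections like 42 CFR Part 2.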
The deeper point is that department-level translation is where governance becomes usable. It is where principles stop sounding admirable and start affecting decisions. And it is where organizations begin to create something much more durable than a policy binder: a set of operational rules that fit the actual work.
The third structural move is a standing AI Governance Committee to guide and evaluate AI use. This is not a one-time task force or an ad hoc review group assembled only when a problem appears; it is a standing structure with cross-functional representation and a clear charter.
In healthcare, that committee should never sit inside only one department. A credible composition might include clinical leadership such as a CMO or CMIO, compliance, legal, privacy, IT or security leadership, a frontline clinical voice, and someone accountable for patient experience or health equity. The reason is simple: AI decisions rarely belong to one function alone. A tool may look technically sound, but still raise concerns about documentation integrity, patient communication, fairness, or workflow disruption. Cross-functional governance is what keeps those perspectives from being missed.
That does not mean every AI decision needs to crawl through a heavyweight approval process. In fact, if the committee is designed badly, it becomes exactly the bottleneck that operational teams learn to work around. A better model is a tiered intake: low-risk, well-understood uses move through an expedited review, moderate-risk uses get targeted review from the functions they touch, and high-risk uses that affect clinical decisions, sensitive data, or member communications go to the full committee. A simple sketch of that triage logic follows below.
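For illustration, here is a minimal sketch of that triage logic; the risk signals and tier names are assumptions, not a prescribed rubric:

def intake_tier(touches_phi: bool,
                affects_clinical_decisions: bool,
                member_facing: bool) -> str:
    # Route a proposed AI use case to a review tier based on simple risk signals.
    if affects_clinical_decisions or touches_phi:
        return "full committee review"       # highest risk: full cross-functional review
    if member_facing:
        return "targeted functional review"  # moderate risk: privacy, legal, or communications as needed
    return "expedited review"                # low risk: internal, administrative use

# Example: an internal drafting aid that never touches PHI or members
print(intake_tier(touches_phi=False,
                  affects_clinical_decisions=False,
                  member_facing=False))      # -> expedited review

Real intake criteria would be richer than three booleans, but even a rubric this simple keeps low-risk work moving while concentrating committee attention where it belongs.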
This structure also gives the organization a place to continuously review changes in the outside environment. That includes regulatory developments, market activity, vendor shifts, and evolving technical capabilities. Regology’s Regulatory Change Agent can help monitor those changes for your specific healthcare sector in near-real time. HHS has already signaled that federal expectations around AI in healthcare are still developing, and CMS is already incorporating enhanced technologies into selected review processes while keeping licensed clinicians in the loop for adverse recommendations. That is exactly why a standing governance body matters. The context around AI use is not fixed, so the body overseeing it cannot be temporary either.
Just as important, the committee’s mandate should include innovation, not only restraint. That point often gets missed. A strong governance committee is not there just to review what others want to deploy. It should actively evaluate emerging use cases, identify where the organization could responsibly move faster, and help distinguish between acceptable experimentation and unacceptable risk. When governance is set up this way, it does not act as a brake. It becomes a disciplined accelerator.
Once these three moves are in place, governance starts to feel different inside the organization. There is visible ownership. There is a current inventory instead of a partial guess. There are department-specific rules instead of a generic policy that no one knows how to apply. There is a committee that can absorb complexity without stalling progress. Most importantly, there is a mechanism within the operating model to translate change in the regulatory environment into action.
That last point matters more than ever. Healthcare organizations do not need to predict every future rule to know where things are heading. The signals are already there: HHS is actively seeking input on how existing regulations affect AI adoption in clinical care, and CMS is already testing models that pair enhanced technologies with licensed human clinical review in care and payment-related workflows.
Operational maturity means being ready for that future before every requirement arrives in final form. It means knowing who owns AI governance, how use cases are reviewed, where departmental boundaries sit, and how updates in the regulatory landscape will be folded back into policy and workflow. That is the real work of governance. Not writing a framework once, but making it durable enough to function under pressure.