AI Governance Awareness: Why It Matters in Healthcare
Post Summary
AI governance in healthcare ensures that artificial intelligence is responsibly managed to improve patient care while minimizing risks. Without proper oversight, AI can amplify harm, such as worsening disparities, compromising patient safety, and exposing organizations to regulatory penalties. Key challenges include algorithmic bias, model drift, cybersecurity risks, and lack of accountability.
Quick Takeaways:
- 71% of U.S. hospitals use predictive AI, but only 30% maintain an inventory of AI systems.
- A 2019 study found a commercial algorithm underestimated Black patients' health needs by 26.3% due to flawed metrics.
- 81% of data violations involve Protected Health Information (PHI), highlighting cybersecurity gaps.
- 43% of FDA-approved AI devices lack thorough evaluation protocols.
- Governance committees exist in 70% of healthcare organizations, but many lack effective monitoring processes.
Effective governance combines multidisciplinary teams, risk-based oversight, and tools like centralized AI inventories and continuous monitoring. Organizations must prioritize accountability, fairness audits, and compliance with regulations to protect patients and reduce risks.
AI Governance in Healthcare: Key Statistics and Implementation Gaps
Effective governance also requires managing third-party AI risk to ensure external solutions meet clinical and security standards.
What Research Shows About AI Governance in Healthcare
Recent studies highlight a concerning gap in the evaluation of AI medical devices. A striking 43% of FDA-approved AI medical devices lack thorough evaluation protocols to confirm their reliability and effectiveness[5]. This points to the need for ongoing oversight throughout an AI system's lifecycle, not just a one-time check before deployment.
Another significant issue is geographic bias in AI development. A review revealed that 71% of diagnostic AI algorithms were trained primarily on data from California, Massachusetts, and New York[4]. This lack of diversity in training data raises concerns about how these tools perform in underrepresented areas. Even more troubling, 34 states had no representation in AI training datasets at all[4], raising the risk that these tools will perform poorly for patients in those regions.
Researchers describe a "responsibility vacuum" in healthcare AI, where no one takes ownership of system maintenance. A study in BMC Health Services Research explains:
"Maintenance and repair is often ignored in the AI/ML product pipeline, commonly leading to negligence about the safety and efficacy of AI/ML products over time."[6]
This neglect - referred to as "strategic ignorance" - often prioritizes rapid innovation over long-term monitoring. For instance, some sepsis prediction models, due to undetected data drift, performed no better than random guessing within a few years[6]. Without addressing these gaps, clinical and cybersecurity risks will only increase.
Research-Based Best Practices
To tackle these challenges, researchers suggest several targeted strategies. One key approach is risk-based stratification, where high-risk tools like autonomous surgical systems undergo stricter oversight compared to lower-risk tools like administrative software[5][7]. This prioritization helps organizations use resources effectively while maintaining safety.
Another effective strategy is shadow deployment. For example, in November 2024, Mass General Brigham tested an AI documentation system alongside existing workflows without impacting patient care[8]. Clinicians reviewed every AI-generated message, ensuring accountability while evaluating real-world performance before full implementation.
Memorial Sloan Kettering Cancer Center offers another example of structured governance. Between 2023 and October 2024, they rolled out their "iLEAP" framework (Legal, Ethics, Adoption, Performance) to oversee AI systems from research to decommissioning[9]. This framework allowed the center to manage a 63% increase in AI projects while maintaining oversight. They also introduced an "Express Pass" for low-risk, FDA-approved models, enabling quicker approvals - like a mammography triage system cleared in just two weeks, with proper risk assessments still in place[9].
Transparency is also crucial. Research supports using Model Information Sheets, which act like "nutrition labels" for AI systems. These sheets detail training data sources, performance metrics, and known limitations, helping clinicians and administrators understand the capabilities and boundaries of each system[9].
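As a rough illustration, such a sheet can be kept as a structured, machine-readable record so it can feed a centralized registry. The sketch below uses Python with hypothetical field names; it is not a published specification.

```python
from dataclasses import dataclass

# A minimal sketch of a Model Information Sheet; field names are
# illustrative, not taken from any standard.
@dataclass
class ModelInfoSheet:
    name: str                              # e.g., "Sepsis Early-Warning v2"
    intended_use: str                      # the clinical task the model supports
    training_data_sources: list[str]       # cohorts and sites used for training
    performance_metrics: dict[str, float]  # e.g., {"AUROC": 0.87, "sensitivity": 0.81}
    known_limitations: list[str]           # populations or settings where performance is unproven
    last_validated: str                    # ISO date of the most recent local validation
```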
New Trends in Governance Frameworks
Governance frameworks are evolving to become more operational and comprehensive. The People, Process, Technology, and Operations (PPTO) framework is gaining popularity for addressing all aspects of AI implementation[7][10]. For instance, Duke Health applied this model in 2021 to manage over 50 AI tools, creating "journey maps" to track each system’s lifecycle, from procurement to maintenance[7].
Cross-functional committees are also replacing single-department oversight. In October 2024, a Canadian university-affiliated health system developed AI policies through stakeholder interviews and co-design workshops, identifying critical gaps in ad-hoc practices[10].
Centralized model registries are becoming essential for consistent governance. These registries provide visibility into all AI systems, including third-party vendor tools, ensuring uniform standards across the board[7][9].
Finally, the shift toward continuous post-market surveillance reflects growing awareness of AI system degradation over time. Organizations are moving beyond static pre-market evaluations to dynamic monitoring that detects algorithmic drift and performance changes[5][8]. High-risk systems, in particular, are undergoing more frequent audits and retraining to address biases and maintain equity for diverse patient populations[5].
What Makes AI Governance Work in Healthcare
For AI governance in healthcare to succeed, it needs a deliberate, structured approach. Research points to four core components that effective governance depends on, alongside specific areas of oversight that directly influence patient safety and reduce organizational risk.
People, Process, Technology, and Operations (PPTO) Framework
The PPTO framework is the go-to model for managing AI governance in healthcare, covering every aspect of implementation. Duke Health provides a great example of this framework in action, with its subcommittee structure that includes teams for Implementation and Monitoring, Quantitative Assessment, Ethics and Legal, and Operations. This setup ensures no detail is overlooked when it comes to oversight [7].
Here’s how the framework breaks down:
- People: This pillar focuses on assembling a multidisciplinary team. Experts in clinical care, technology, informatics, and regulation all need to work together.
- Process: Governance here means managing the AI lifecycle systematically. This includes keeping an updated AI inventory, applying risk-based oversight, and standardizing decision-making for procurement, deployment, and even decommissioning [8][11].
- Technology: A secure infrastructure is key. Real-time validation systems, like "AI DevOps", help keep performance in check on an ongoing basis [7].
- Operations: Sustainability is the focus here, requiring executive sponsorship, dedicated budgets (typically 10–15% of the total AI investment), and clear metrics to measure success [8][11].
A Canadian hospital system tied to a university applied this framework in October 2024 through co-design workshops involving stakeholders. While they excelled in data de-identification, a major gap came to light - there was no formal ethics review process for non-research AI tools. This led to the introduction of mandatory ethics reviews for clinical AI. Reflecting on the earlier approach, a clinical stakeholder noted:
"It is completely ad hoc and whoever is running the project gets to decide if they want to do anything at all" [10].
By combining strong infrastructure with multidisciplinary teamwork, this framework lays the groundwork for effective governance. But success also hinges on having targeted oversight mechanisms in place.
Key Oversight Areas for Healthcare AI
The PPTO framework directly supports the essential oversight areas for healthcare AI. These include:
- Fairness: Preventing bias in AI systems.
- Explainability: Ensuring AI recommendations are clear and understandable.
- Data Management: Protecting sensitive patient health information (PHI).
- Cybersecurity: Defending against threats like prompt injection or model poisoning.
- Patient Safety: Monitoring for issues like model drift that could impact outcomes [4][6][8].
For example, a 2019 study revealed that a widely used commercial algorithm showed racial bias because it relied on healthcare costs as a proxy for patient needs [3]. This highlights why fairness audits are so important.
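One way such an audit can work in practice is to compare a direct measure of health need across demographic groups at equal predicted risk, which is how the 2019 study surfaced the cost-as-proxy problem. The sketch below assumes illustrative column names rather than any specific dataset.

```python
import pandas as pd

def proxy_bias_audit(df: pd.DataFrame) -> pd.DataFrame:
    """Compare actual health need across groups at equal predicted risk.

    Assumes illustrative columns: 'risk_score' (model output), 'group'
    (demographic attribute), 'chronic_conditions' (direct need measure).
    """
    # Bucket patients into risk-score deciles so groups are compared
    # at the same predicted risk level.
    df = df.assign(risk_decile=pd.qcut(df["risk_score"], 10, labels=False))
    # Within each decile, average health need per group; a persistent gap
    # means the score under-ranks one group's true need.
    return df.groupby(["risk_decile", "group"])["chronic_conditions"].mean().unstack()
```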
To manage oversight efficiently, organizations often use risk-based stratification. High-risk AI tools, such as autonomous surgical systems, require full lifecycle governance with continuous monitoring. On the other hand, low-risk tools, like those used for appointment reminders, follow a simpler approval process [6][8]. This tiered system helps healthcare organizations focus their resources where they’re needed most, without compromising safety in critical areas.
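A minimal sketch of how that triage could be encoded follows, with made-up criteria standing in for an organization's actual governance policy:

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "full lifecycle governance with continuous monitoring"
    MEDIUM = "periodic audits and scheduled revalidation"
    LOW = "streamlined approval and inventory entry"

def classify(tool: dict) -> RiskTier:
    # Illustrative rules only; real criteria come from governance policy.
    if tool.get("acts_autonomously") or tool.get("influences_treatment"):
        return RiskTier.HIGH    # e.g., autonomous surgical systems
    if tool.get("touches_phi"):
        return RiskTier.MEDIUM  # e.g., documentation assistants
    return RiskTier.LOW         # e.g., appointment reminders

print(classify({"acts_autonomously": True}).value)
# -> full lifecycle governance with continuous monitoring
```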
Barriers to AI Governance Implementation
Healthcare organizations face significant challenges in putting effective AI governance into practice, which can put both patient data and safety at risk.
In February 2026, Censinet teamed up with the American Hospital Association (AHA) and Health-ISAC to publish the "2026 Healthcare Cybersecurity Benchmarking Study." This study, spearheaded by Censinet CEO Ed Gaudet and AHA National Advisor John Riggi, highlighted a troubling gap: while 70% of healthcare organizations had formal AI governance committees, only 30% had an enterprise-wide inventory of their AI systems. This 40-point gap underscores a disconnect between governance structures and operational visibility [2]. Gaudet summed it up well:
"Healthcare has built the governance scaffolding for AI, but the operational muscle - inventory, asset management, detection methods, and clear accountability - is not keeping pace with adoption." [2]
This phenomenon, sometimes called "governance theater", reflects a situation where organizations establish committees and policies but lack the tools and processes to effectively monitor or control their AI systems. Below are two major barriers contributing to this issue: insufficient training and poorly developed governance frameworks.
Lack of Awareness and Training
One of the biggest hurdles is a "responsibility vacuum", where no one is clearly tasked with monitoring and maintaining AI systems. Clinicians, developers, and IT staff often lack defined roles in this area, leaving critical oversight tasks unassigned [6]. In some cases, organizations deliberately avoid investing in monitoring systems to prioritize rapid innovation, a practice referred to as "strategic ignorance." This "move fast" mentality can lead to AI failures going unnoticed, as many institutions only evaluate model performance annually - or worse, only after a failure comes to light by accident [6]. This lack of operational readiness mirrors earlier findings on weak oversight, further jeopardizing the secure use of AI.
Underdeveloped Governance Structures
The gaps in operational governance are striking. The 2026 study found that over half of healthcare organizations lack the tools to detect AI functionalities embedded by vendors [2]. This so-called "shadow AI" bypasses standard procurement and risk assessment processes, creating blind spots that can threaten patient privacy and data security.
Another issue is shared responsibility without clear accountability. Thirty-eight percent of organizations distribute AI risk management duties across multiple teams, but without clear escalation paths or ownership, accountability falls through the cracks [2]. When AI-related incidents occur, many organizations lack a defined process for detection and response. Rural health systems, which often have fewer resources and specialized staff, are particularly vulnerable; these systems are twice as likely as urban ones to lack formal AI governance structures [2].
Meanwhile, the rapid adoption of advanced AI systems is outpacing the frameworks needed to manage them. Sixty-four percent of healthcare organizations are already using or testing agentic AI - autonomous systems capable of independent action - yet only 8% have banned these systems outright [2]. Without proper oversight, the risks tied to these technologies, including compromised patient privacy, grow significantly. These operational gaps make it harder for healthcare organizations to manage risks effectively and ensure the safe deployment of AI.
How AI Governance Reduces Risk in Healthcare
AI governance plays a crucial role in managing both operational and regulatory challenges in healthcare. By connecting policies with everyday practices, healthcare organizations can better protect patient data, comply with regulations, and proactively address AI-related risks before they escalate.
Managing Cybersecurity and Third-Party Risks
Governance frameworks are essential for addressing cybersecurity and third-party risks in healthcare. One key tool is a centralized AI inventory, which improves operational visibility. This inventory helps security teams respond effectively to incidents by providing a clear record of AI systems, reducing confusion during potential threats [2].
A pressing issue is shadow AI - AI functionalities added to existing products by vendors without proper oversight. Alarmingly, over half of healthcare organizations lack the ability to detect these additions [2]. Effective governance frameworks tackle this by requiring real-time monitoring of vendor data access patterns. This monitoring can uncover unauthorized AI processing of sensitive health information and ensure vendor activities are tied to audit trails, enhancing accountability.
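As an illustration of what that monitoring can reduce to in code, the sketch below flags vendors whose PHI access deviates from an approved baseline. The log format and baseline are assumptions, not any real product's API.

```python
from collections import Counter

def flag_anomalous_vendors(access_log: list[dict],
                           baseline: dict[str, float],
                           threshold: float = 3.0) -> list[str]:
    """Flag vendors whose PHI record access looks abnormal.

    `access_log` entries and `baseline` (expected daily accesses per
    approved vendor) are illustrative stand-ins for an audit-log pipeline.
    """
    counts = Counter(e["vendor"] for e in access_log if e.get("phi_accessed"))
    flagged = []
    for vendor, n in counts.items():
        expected = baseline.get(vendor)
        # No baseline means the vendor is touching PHI outside any approved
        # pattern - a possible shadow-AI addition.
        if expected is None or n > threshold * expected:
            flagged.append(vendor)
    return flagged
```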
Agentic AI systems, which are already used by 64% of healthcare organizations, introduce another layer of complexity. These systems operate autonomously, executing multi-step processes. To mitigate risks, governance frameworks enforce strict identity controls, ensuring these AI agents only access the specific types of protected health information (PHI) necessary for their tasks. This prevents unnecessary exposure of sensitive data. John Riggi, National Advisor for Cybersecurity and Risk at the American Hospital Association, underscores this connection:
"Cybersecurity and AI governance are no longer separate disciplines. To defend one is to defend all" [2].
Platforms like Censinet RiskOps™ turn governance principles into actionable strategies. By centralizing third-party risk assessments, enabling continuous vendor monitoring, and creating unified audit trails, Censinet helps healthcare organizations operationalize governance. With tools like Censinet AI™, automated evidence validation and risk routing ensure that findings are reviewed by the right stakeholders, such as AI governance committees. This "air traffic control" approach ensures timely and focused responses to critical AI risks.
Beyond these operational measures, governance frameworks also play a key role in ensuring compliance with industry regulations.
Meeting Regulatory and Industry Standards
AI governance translates complex regulations into practical compliance measures. For example, the ONC HTI-1 Rule mandates transparency in AI and predictive algorithms used in certified health IT systems. Developers must disclose their processes and data sources so clinicians can understand how these technologies reach their conclusions [1]. Organizations with strong governance structures can document these requirements systematically, reducing the risk of legal issues.
Another regulatory standard, HIPAA's Security Rule, requires organizations to review activity logs for systems processing electronic PHI [2]. Centralized governance supports this by creating the necessary audit trails and logging infrastructure. It also aligns with frameworks like the NIST AI Risk Management Framework and HHS 405(d) Health Industry Cybersecurity Practices [2]. This alignment prevents the fragmented risk management approach seen in 38% of healthcare organizations, where responsibilities are spread across multiple teams without clear escalation paths [2].
Internal validation of vendor AI models is another critical compliance step. Instead of relying solely on vendor-provided scorecards, governance frameworks encourage organizations to test these tools against their own patient data. This practice helps identify hidden issues, such as performance degradation or bias, that might not surface in vendor testing. It also supports red-teaming exercises, where experts simulate attacks to expose vulnerabilities, ensuring AI models are robust and safe for clinical use [1].
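A hedged sketch of that internal validation step: re-score the vendor model on local patients and compare against the vendor's reported figure. The function names and tolerance below are assumptions, not a specific vendor's API.

```python
from sklearn.metrics import roc_auc_score

def validate_vendor_model(predict, local_records, labels,
                          vendor_reported_auc: float,
                          tolerance: float = 0.05) -> tuple[float, bool]:
    """Re-score a vendor model on the organization's own patient data.

    `predict` stands in for the vendor tool's scoring function; all
    names here are illustrative.
    """
    local_auc = roc_auc_score(labels, [predict(r) for r in local_records])
    # A large gap between reported and locally observed performance is
    # exactly the hidden degradation or bias this practice exists to catch.
    return local_auc, (vendor_reported_auc - local_auc) <= tolerance
```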
Conclusion
AI governance is becoming increasingly critical in healthcare, both for safeguarding patients and ensuring secure operations. While many healthcare organizations have governance committees in place, there's still a noticeable lack of comprehensive, enterprise-wide AI inventories [2]. This gap leaves room for vulnerabilities, such as undetected shadow AI and unclear accountability during high-stakes incidents.
To address these challenges, healthcare organizations must move beyond surface-level governance. They need to establish centralized AI inventories, continuously monitor vendor data access, and assign clear executive accountability for managing AI risks. As Ed Gaudet, CEO of Censinet, aptly states:
"Healthcare has built the governance scaffolding for AI, but the operational muscle - inventory, asset management, detection methods, and clear accountability - is not keeping pace with adoption" [2].
Currently, 71% of U.S. hospitals are using predictive AI integrated into their electronic health records [3]. At the same time, regulatory penalties are signaling a shift from mere guidance to active enforcement. Organizations that implement real-time AI governance strategies will not only better protect patients but also stay ahead in compliance.
Platforms like Censinet RiskOps™ are helping healthcare systems turn these governance principles into actionable processes. By centralizing third-party risk assessments, automating evidence checks, and streamlining audit trails, such tools enable governance committees to actively oversee and control AI systems. This approach bridges the gap between awareness and measurable risk mitigation.
FAQs
What should a healthcare AI inventory include?
A healthcare AI inventory needs to include every AI tool in use, from standalone systems to embedded features and even unapproved shadow AI. Important details to document include:
- System name
- Purpose
- Business owner
- Risk tier
- Data sources
- Model or provider
- Deployment status
- Last review date
- Links to approval records
By maintaining thorough records, healthcare organizations can improve oversight and manage risks more effectively.
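As a rough illustration, one inventory record built from the fields above might look like this; every value is invented, including the intranet URL:

```python
inventory_entry = {
    "system_name": "Sepsis Early-Warning v2",
    "purpose": "Flag inpatients at risk of sepsis",
    "business_owner": "Director of Nursing Informatics",
    "risk_tier": "high",
    "data_sources": ["EHR vitals", "lab results"],
    "model_or_provider": "Vendor X gradient-boosted model",
    "deployment_status": "live",
    "last_review_date": "2025-11-01",
    "approval_records": ["https://intranet.example.org/ai-approvals/142"],
}
```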
How can hospitals detect and manage shadow AI from vendors?
Hospitals can tackle the challenges of shadow AI by establishing strong governance frameworks. These frameworks should focus on continuous monitoring, regular risk assessments, and thorough vendor evaluations. A key step in this process is maintaining an up-to-date inventory of all AI tools in use, including those introduced by external vendors.
Platforms such as Censinet RiskOps™ can play a vital role in this effort. They simplify risk assessments, improve compliance reviews, and automate the detection of unauthorized AI tools. This approach helps reduce risks like data breaches and threats to patient safety.
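At its core, shadow AI detection is a reconciliation problem: compare what is observed in the environment against the approved inventory. A minimal sketch, with the discovery sources left as assumptions:

```python
def find_shadow_ai(discovered: set[str], approved: set[str]) -> set[str]:
    """Return AI systems observed in the environment but absent from the
    approved inventory - candidates for shadow AI review.

    `discovered` might be assembled from network logs, SSO app catalogs,
    or vendor change notices; both inputs are illustrative.
    """
    return discovered - approved

# Example: a vendor quietly enables an AI summarizer in an existing product.
print(find_shadow_ai({"ehr-core", "ai-summarizer-addon"}, {"ehr-core"}))
# -> {'ai-summarizer-addon'}
```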
How often should clinical AI models be monitored for drift and bias?
Once clinical AI models are deployed, their performance isn't set in stone. Over time, factors like outdated datasets or shifts in clinical practices can lead to a drop in accuracy. This is why continuous monitoring is critical.
Regular checks help identify issues like drift - when a model's predictions start to deviate due to changes in data patterns. Similarly, bias can creep in, potentially affecting the fairness and reliability of the model's decisions.
To maintain trust and ensure these models remain effective, structured feedback loops and performance evaluations are essential. These practices not only safeguard accuracy but also help adapt the models to evolving clinical environments.
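One common drift signal, offered here as a sketch rather than a prescribed method, is the population stability index (PSI) between the score distribution at training time and the current one:

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between training-time ('expected') and live ('actual') scores.

    A common rule of thumb (a convention, not a standard): < 0.1 stable,
    0.1-0.25 investigate, > 0.25 likely drift.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero in sparse bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
```

In practice, monitoring cadence tends to follow the risk tiers discussed earlier: higher-risk clinical models warrant more frequent checks than low-risk administrative tools.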
