
Regulated Intelligence: Navigating the Evolving AI Compliance Landscape

Post Summary

AI is transforming healthcare, but keeping up with regulations is challenging. Organizations must address complex issues like patient data privacy, cybersecurity risks, and ethical oversight. Recent incidents, such as AI-driven Medicare fraud costing $703 million in 2025, underscore the urgent need for stringent compliance strategies. Here’s what you need to know:

  • Key Regulations: New laws like HIPAA updates (Feb 2026) and state-specific rules in Texas and Illinois are reshaping how AI is used in healthcare.
  • Main Challenges: Protecting patient data, managing third-party AI risks, and ensuring human oversight in AI-driven decisions.
  • Compliance Tools: Frameworks like the NIST AI RMF and platforms like Censinet RiskOps™ help streamline risk management and regulatory adherence.

Healthcare providers must combine technical safeguards, structured assessments, and oversight committees to manage risks effectively. Balancing automation with human review ensures both compliance and patient safety.


US AI Regulations for Healthcare

Federal and state regulations around AI in healthcare are becoming increasingly complex, prompting organizations to develop strong compliance strategies. These rules, especially concerning patient data, clinical decisions, and transparency, are evolving, with new standards set to take effect in 2026.

HIPAA Requirements for AI Systems

Starting February 16, 2026, the HHS Office for Civil Rights (OCR) will require healthcare organizations to conduct AI-specific risk analyses for "agentic" AI systems - those capable of independently accessing or acting upon Protected Health Information (PHI) [2]. This distinction sets agentic AI apart from traditional data processing tools.

"The 2026 updates recognize that agentic AI systems, which can autonomously access, interpret, and act upon PHI, require a distinct regulatory approach." – James Holbrook, JD [2]

AI vendors handling PHI are categorized as business associates, meaning healthcare organizations must establish Business Associate Agreements (BAAs) with these vendors. Non-compliance with these rules can result in hefty fines, with maximum annual penalties reaching $2.13 million [2].

Organizations are also tasked with implementing technical safeguards such as the following (a minimal sketch appears after the list):

  • Access controls to limit who can use AI systems.
  • Audit logs to track tool usage and data submissions.
  • Encryption to secure PHI processed by AI systems.
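
To make these safeguards concrete, here's a minimal sketch in Python of gating AI submissions behind role checks, filtering PHI down to only the fields a task needs, and writing an audit entry. The role names, field filtering, and logger setup are illustrative assumptions, not anything prescribed by HIPAA.

```python
import logging
from datetime import datetime, timezone

# Illustrative role allow-list; a real deployment would integrate
# with the organization's identity provider.
AI_TOOL_ROLES = {"clinical_ai_operator", "compliance_auditor"}

audit_log = logging.getLogger("phi_ai_audit")
logging.basicConfig(level=logging.INFO)

def submit_to_ai(user_id: str, user_roles: set[str], phi_record: dict,
                 allowed_fields: set[str]) -> dict:
    """Gate an AI submission: check roles, filter PHI to only the
    fields the task needs, and write an audit entry."""
    if not user_roles & AI_TOOL_ROLES:
        audit_log.warning("DENIED user=%s roles=%s", user_id, sorted(user_roles))
        raise PermissionError("User is not authorized to use AI tools")

    # Disclose only the essential fields (the "minimum necessary" idea).
    filtered = {k: v for k, v in phi_record.items() if k in allowed_fields}

    audit_log.info("SUBMIT user=%s fields=%s time=%s", user_id,
                   sorted(filtered), datetime.now(timezone.utc).isoformat())
    return filtered
```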

Additionally, the minimum necessary standard requires that only the essential PHI needed for a specific task is disclosed to AI systems. Alongside HIPAA, the NIST framework offers structured guidance for managing AI-related risks.

NIST AI Risk Management Framework

The NIST AI Risk Management Framework (AI RMF 1.0), introduced on January 26, 2023, provides voluntary guidance for addressing AI risks across the AI lifecycle. It identifies seven key traits of trustworthy AI: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed [7].

The framework is built around four essential functions - Govern, Map, Measure, and Manage - spanning 19 categories and 72 subcategories. These controls align with FDA guidelines [7]. A Generative AI Profile, released in July 2024, adds 12 specific risk categories, including issues like "Confabulation" (AI hallucinations) and vulnerabilities in supply chains [7].

In addition to HIPAA and NIST guidelines, recent federal policy changes further influence AI governance in healthcare.

Executive Order 14110 and Federal AI Guidelines

In 2025, Executive Order 14110 on Safe, Secure, and Trustworthy AI was rescinded [3]. Healthcare organizations are now encouraged to follow the "America's AI Action Plan" (July 2025) and guidance from OMB Memoranda M-25-21 and M-25-22 [5]. These updates emphasize the importance of balancing innovation with stringent oversight.

Moreover, the Office of the National Coordinator (ONC) Algorithm Transparency Final Rule, effective February 8, 2024, mandates that certified health IT developers disclose details about the design, development, and training of predictive AI models [6]. As of January 1, 2026, United States Core Data for Interoperability (USCDI) Version 3 is the standard for certified health IT, aiming to minimize bias in the datasets used for AI training [6]. The Centers for Medicare & Medicaid Services (CMS) also defines "high-impact" AI as systems whose outputs significantly influence decisions related to health, safety, or civil rights. Such systems require documented human oversight and robust risk management [4].

AI Governance Strategies for Healthcare Organizations

As AI adoption in healthcare accelerates, governance can no longer be an afterthought. With regulations evolving and implementation outpacing oversight systems [8], healthcare organizations must act now. Effective strategies are crucial to safeguarding patient data, addressing cybersecurity risks, and ensuring robust oversight structures that align with rapid AI advancements.

Protecting Patient Data in AI Systems

Securing patient data in AI applications requires a multi-layered approach. Organizations should prioritize the following measures (sketched in code after the list):

  • End-to-end encryption: Use AES-256 standards to secure data both at rest and in transit.
  • Role-based access controls (RBAC): Limit system access to authorized personnel handling Protected Health Information (PHI).
  • Pseudonymization: Replace direct patient identifiers with pseudonyms before data is processed by AI.
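
As a rough illustration of the first and third measures, the sketch below pseudonymizes a patient identifier with a keyed hash and encrypts the payload with AES-256-GCM via the widely used `cryptography` package. Key management, key rotation, and the field names are simplifying assumptions.

```python
import hmac, hashlib, os, json
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

PSEUDONYM_KEY = os.urandom(32)                 # in practice: from a managed key vault
AES_KEY = AESGCM.generate_key(bit_length=256)  # AES-256 key

def pseudonymize(patient_id: str) -> str:
    """Replace an identifier with a stable keyed hash (pseudonym)."""
    return hmac.new(PSEUDONYM_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

def encrypt_record(record: dict) -> tuple[bytes, bytes]:
    """Encrypt a record with AES-256-GCM; returns (nonce, ciphertext)."""
    nonce = os.urandom(12)
    ciphertext = AESGCM(AES_KEY).encrypt(nonce, json.dumps(record).encode(), None)
    return nonce, ciphertext

record = {"patient": pseudonymize("MRN-001234"), "observation": "HbA1c 6.1%"}
nonce, blob = encrypt_record(record)  # 'blob' is safe to store or transmit
```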

Real-world examples highlight the effectiveness of these measures. Mayo Clinic, for instance, reduced breach risks by 40% during federated learning pilot studies [9][10]. Similarly, Cleveland Clinic cut data exposure in AI diagnostics by 50% by implementing differential privacy and conducting Data Protection Impact Assessments (DPIAs) [10][11].

Dynamic access controls add another layer of security by adjusting permissions based on real-time context. For example, Google's Healthcare API reduced manual reviews by 60% while maintaining HIPAA compliance [10]. The challenge lies in balancing automation with security - enforcing policies without disrupting clinical workflows. These safeguards integrate seamlessly with the proactive cybersecurity assessments discussed below.

AI Cybersecurity Risk Assessments

Structured cybersecurity assessments are a must for managing AI risks. A practical workflow includes the following steps (a starter sketch for the first two appears after the list):

  • Identifying AI assets and threats through threat modeling.
  • Assessing vulnerabilities using frameworks like OWASP AI Security guidelines.
  • Quantifying risks with NIST AI RMF scoring methods.
  • Prioritizing mitigations based on their potential impact.
  • Establishing continuous monitoring to detect emerging threats.
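
As a starting point for the first two steps, the sketch below enumerates AI assets and attaches candidate threats from a checklist loosely based on the OWASP Top 10 for LLM Applications. The asset fields and the mapping heuristic are illustrative assumptions, not a complete threat model.

```python
from dataclasses import dataclass, field

# Illustrative checklist, loosely based on OWASP's LLM Top 10 categories.
THREAT_CHECKLIST = [
    "prompt injection",
    "training data poisoning",
    "sensitive information disclosure",
    "supply chain vulnerabilities",
    "model denial of service",
    "excessive agency",
]

@dataclass
class AIAsset:
    name: str
    handles_phi: bool
    internet_facing: bool
    threats: list[str] = field(default_factory=list)

def model_threats(asset: AIAsset) -> AIAsset:
    """Attach checklist threats that plausibly apply to this asset."""
    asset.threats = list(THREAT_CHECKLIST)
    if asset.handles_phi:
        asset.threats.insert(0, "data exfiltration via API vulnerabilities")
    if not asset.internet_facing:
        asset.threats.remove("model denial of service")
    return asset

# Example: an internal radiology triage model that touches PHI.
imaging_ai = model_threats(
    AIAsset("radiology-triage-model", handles_phi=True, internet_facing=False))
print(imaging_ai.threats)
```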

Challenges such as model poisoning (where adversarial inputs compromise diagnostic accuracy) and data exfiltration via API vulnerabilities are real concerns. In 2024, a U.S. hospital avoided $2 million in ransomware damages by proactively assessing its AI imaging system, as reported by the Cybersecurity & Infrastructure Security Agency (CISA). These structured assessments reduced AI-related incidents by 35%, according to a 2024 HIMSS report [9][11].

Healthcare-specific tools are critical, as they address unique clinical workflows, medical device integrations, and regulatory needs that generic platforms might miss. Assessments should cover not just AI software but also medical devices, supply chain partners, research protocols, and third-party integrations. Moving from static questionnaires to a RiskOps model enables continuous risk reduction and real-time monitoring - key components of a resilient governance framework.

Creating AI Oversight Committees

AI oversight committees play a vital role in aligning deployments with regulatory standards. To build an effective committee, organizations should assemble cross-functional teams, including clinicians, IT specialists, legal advisors, ethics experts, and executives. Each member contributes a unique perspective to ensure well-rounded governance. Clear charters should define responsibilities like policy approvals, project reviews, and incident response protocols.

For example, Johns Hopkins established an AI oversight committee in 2023, meeting bi-weekly with structured agendas and standardized review templates. This team successfully aligned over 20 AI projects with HIPAA requirements, achieving a 95% compliance audit score [9][10]. Assigning specific roles - such as a governance chair, documentation secretary, and technical subject matter experts - streamlines the oversight process.

Tracking performance metrics is another crucial step. Key indicators include AI compliance rates (aiming for 95% or higher), response times for incidents (under 24 hours), quarterly risk score reductions, and audit pass rates. According to HIMSS Analytics, organizations monitoring these metrics saw 28% fewer compliance violations in 2025 [11]. Shared platforms and dashboards for real-time updates can further enhance committee effectiveness. Additionally, Gartner research highlights that diverse representation within committees improves decision-making by 25% and reduces governance silos [9][10].

Censinet Solutions for AI Compliance

Healthcare organizations face the challenge of turning regulatory requirements into actionable workflows. Censinet RiskOps™ bridges this gap by automating compliance processes while maintaining the human oversight necessary in healthcare.

Automated Risk Assessments with Censinet RiskOps

Censinet RiskOps™ streamlines the evaluation of AI systems against frameworks like HIPAA and the NIST AI Risk Management Framework. What once took weeks can now be done in hours. This is possible thanks to its integration with the Digital Risk Catalog, which includes data from over 50,000 pre-assessed vendors and products [1][12].

For example, in 2026, Tower Health managed to reduce its risk assessment team from three to two full-time employees while increasing the number of assessments completed [1]. Another case involved a large hospital network using RiskOps™ to evaluate an AI diagnostic tool. Within 24 hours, they identified and addressed patient data encryption gaps to meet HIPAA standards [9].

The platform also enables targeted reassessments, focusing only on changes in questionnaire responses. This reduces the average completion time to less than a day [12]. Additionally, it identifies security issues and offers remediation recommendations, complete with task tracking. One clinic chain used these features to automate AI vendor risk scoring, cutting assessment costs by 40% [9]. This level of automation supports continuous, real-time AI risk monitoring.
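
The targeted-reassessment idea - re-reviewing only what changed - can be illustrated with a simple diff over questionnaire responses. This is a generic sketch of the technique, not Censinet's actual implementation; the question keys are hypothetical.

```python
def changed_responses(previous: dict, current: dict) -> dict:
    """Return only the questionnaire answers that changed (or are new),
    so reviewers can focus reassessment on the delta."""
    return {q: {"was": previous.get(q), "now": a}
            for q, a in current.items() if previous.get(q) != a}

prev = {"encrypts_phi_at_rest": "yes", "model_update_cadence": "quarterly"}
curr = {"encrypts_phi_at_rest": "yes", "model_update_cadence": "monthly",
        "uses_third_party_llm": "yes"}

for question, delta in changed_responses(prev, curr).items():
    print(f"Reassess '{question}': {delta['was']!r} -> {delta['now']!r}")
```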

Real-Time AI Risk Monitoring

The AI Risk Dashboard provides a live overview of compliance status using color-coded heatmaps. Automated alerts notify users of emerging risks, such as model drift or unpatched APIs in third-party tools. Healthcare organizations using the platform report faster compliance reporting (50% improvement), a 65% drop in risk exposure incidents, and the prevention of 80% of potential HIPAA violations through proactive alerts on AI data flows [9].
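
Drift alerts like these are often built on simple distribution comparisons. Here's a hedged sketch using the population stability index (PSI) over binned feature values; the data is synthetic, and the 0.2 alert threshold is a common rule of thumb, not a regulatory figure.

```python
import numpy as np

def psi(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    o_pct = np.histogram(observed, bins=edges)[0] / len(observed)
    # Avoid division by zero in sparsely populated bins.
    e_pct, o_pct = np.clip(e_pct, 1e-6, None), np.clip(o_pct, 1e-6, None)
    return float(np.sum((o_pct - e_pct) * np.log(o_pct / e_pct)))

baseline = np.random.normal(0.0, 1.0, 5000)  # training-time feature sample
live = np.random.normal(0.4, 1.2, 5000)      # production feature sample

score = psi(baseline, live)
if score > 0.2:  # rule-of-thumb threshold for significant drift
    print(f"ALERT: possible model drift, PSI={score:.3f}")
```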

Metrics like data bias and cybersecurity vulnerabilities are continuously monitored. For instance, the platform helped a healthcare provider isolate a risky imaging AI integration before patient data was exposed [9]. Shared dashboards with role-based access allow compliance officers, IT teams, and legal departments to collaborate efficiently, assigning tasks and tracking resolutions. Integrations with tools like Slack keep governance teams informed in real time [9]. These insights are further supported by expert human oversight.

Balancing Automation with Human Oversight

While automation is powerful, expert review remains essential to ensure effective AI governance and compliance with healthcare regulations. Censinet combines automated alerts with human validation, where specialists review risk scores and approve exceptions. This hybrid approach reduced false positives by 30% during pilot programs, while still maintaining detailed audit trails for regulatory purposes [9]. The system aligns with NIST recommendations by incorporating a human-in-the-loop model for critical decisions.
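
A minimal sketch of that hybrid pattern: automation scores and routes every finding, but anything above a threshold waits for human sign-off, and every decision lands in an audit trail. The threshold and data model below are illustrative assumptions, not Censinet's internals.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    vendor: str
    risk_score: float  # 0.0 (low) to 1.0 (critical)

REVIEW_THRESHOLD = 0.7  # assumption: scores above this need human sign-off
review_queue: list[Finding] = []
audit_trail: list[str] = []

def triage(finding: Finding) -> str:
    """Auto-accept low-risk findings; escalate the rest to a human."""
    if finding.risk_score >= REVIEW_THRESHOLD:
        review_queue.append(finding)
        decision = "escalated_for_human_review"
    else:
        decision = "auto_accepted"
    audit_trail.append(f"{finding.vendor}: {finding.risk_score:.2f} -> {decision}")
    return decision

triage(Finding("imaging-ai-vendor", 0.85))  # lands in the human review queue
triage(Finding("scheduling-bot", 0.20))     # auto-accepted, still logged
```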

Organizations can integrate their AI inventory via API, configure regulatory templates, and run initial scans. Reassessments are scheduled as needed, and a brief 2-hour training session helps governance teams interpret outputs and escalate high-risk findings [9]. Additionally, the platform maintains long-term risk records, which are invaluable for HIPAA audits and tracking security incidents [12].

AI Risk Management Best Practices

NIST AI Risk Management Framework Implementation Timeline for Healthcare

Practical steps are critical for managing AI risks in healthcare, especially when building on established frameworks and governance strategies. These steps not only help mitigate risks but also ensure healthcare AI systems stay in line with evolving compliance standards. While the NIST AI Risk Management Framework provides a solid foundation, its application in healthcare requires tailored actions.

Using the NIST AI RMF in Healthcare

The NIST AI Risk Management Framework (AI RMF 1.0), introduced on January 26, 2023, lays out a structured approach to managing AI risks through its core functions [7]. A key component, the Govern function, emphasizes creating an AI risk oversight committee. This committee should include representatives from technical, legal, compliance, and clinical areas, with clear ownership assigned and risk tolerance thresholds defined across the organization [7][14]. Dr. Faiz Rasool, Director at the Global AI Certification Council, highlights the framework’s perspective:

"The framework treats AI as socio-technical: risks emerge not only from models and data, but from how people build, deploy, and use AI systems" [7].

Risk assessments for AI systems should consider four dimensions: likelihood, impact, velocity, and detectability. Based on these evaluations, risks are rated as Critical, High, Medium, or Low [14]. For instance, a high-risk clinical AI tool may require Tier 3 or 4 governance maturity, while lower-risk administrative tools might only need Tier 2 oversight [7][14].
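
One way to operationalize those four dimensions is a simple scoring function like the sketch below. The 1-5 scales, the formula, and the tier cutoffs are illustrative assumptions - the NIST AI RMF does not prescribe a specific scoring formula.

```python
def rate_risk(likelihood: int, impact: int, velocity: int, detectability: int) -> str:
    """Rate an AI risk from four 1-5 dimension scores.
    Higher detectability lowers risk, so it is inverted.
    Scales and cutoffs are illustrative, not taken from NIST."""
    for dim in (likelihood, impact, velocity, detectability):
        assert 1 <= dim <= 5, "each dimension is scored 1 (low) to 5 (high)"
    score = likelihood * impact + velocity + (6 - detectability)
    if score >= 25:
        return "Critical"
    if score >= 18:
        return "High"
    if score >= 10:
        return "Medium"
    return "Low"

# A clinical decision-support tool: likely, severe, fast-moving, hard to detect.
print(rate_risk(likelihood=4, impact=5, velocity=4, detectability=2))  # Critical
```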

Implementation typically unfolds in stages:

  • Foundation (Weeks 1–4): Establish the groundwork.
  • Assessment (Weeks 5–8): Evaluate risks and systems.
  • Treatment (Weeks 9–12): Address identified risks.
  • Ongoing Maturation: Continuously refine and improve.

Organizations must maintain an inventory of all AI systems, including those embedded in vendor products and third-party APIs [7][14]. With agencies like the FDA, FTC, and EEOC referencing NIST AI RMF principles in their guidance, this framework is quickly becoming the "standard of care" for AI practices [7].

A strong framework can be further strengthened by adopting decentralized governance structures.

Decentralized AI Governance Models

Decentralized governance connects AI management to broader organizational controls and data policies, enhancing oversight [13]. The key is to define clear roles for all AI stakeholders throughout the system’s lifecycle. Without these definitions, risk management can become inconsistent and ineffective [13]. NIST underscores this point:

"Lack of clear information about responsibilities and chains of command will limit the effectiveness of risk management" [13].

To address this, organizations should take the following steps (a minimal record sketch follows the list):

  • Use standardized documentation templates and uniform risk rating systems (e.g., Red-Amber-Green scales) across all AI projects.
  • Maintain a centralized AI inventory that includes documentation, incident response plans, and contact details for AI stakeholders.
  • Clearly define delegated authorities for teams involved in designing, deploying, and monitoring AI systems.
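
In practice, the first two items can start as a shared record type with a uniform Red-Amber-Green rating, as in this sketch; the field names shown are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class RAG(Enum):
    RED = "red"
    AMBER = "amber"
    GREEN = "green"

@dataclass
class AIInventoryEntry:
    system_name: str
    owner_email: str             # accountable stakeholder
    rating: RAG                  # uniform risk rating across all projects
    incident_response_plan: str  # link or document reference
    documentation_url: str

inventory = [
    AIInventoryEntry("sepsis-predictor", "clin-ai-lead@example.org",
                     RAG.AMBER, "plans/sepsis-ir.md", "docs/sepsis.md"),
]

# A uniform scale makes cross-department reporting trivial:
reds = [e.system_name for e in inventory if e.rating is RAG.RED]
```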

Transparent communication channels and whistleblower policies are also essential for identifying and addressing concerns about AI systems [13]. Additionally, organizations should establish processes for individuals or communities impacted by AI decisions to contest outcomes, integrating this feedback into the governance model [13]. Decommissioning protocols are equally crucial - ensuring that shutting down a model in one department doesn’t unintentionally cause risks in interconnected systems [13].

With an internal governance model in place, organizations can better manage risks associated with external AI components.

Managing Third-Party AI Risks with AI-BOM

Third-party AI systems bring unique challenges, requiring careful monitoring and transparency. The NIST Generative AI Profile (NIST AI 600-1), released in July 2024, highlights "Value Chain" as a critical risk category, focusing on risks from third-party components, pre-trained models, and data supply chains [7].

Healthcare organizations should conduct comprehensive audits of all AI systems, particularly third-party integrations. Each third-party system must be classified based on its criticality, intended use, and potential harm. This allows organizations to prioritize oversight where it’s most needed. The March 2025 updates to the NIST AI RMF specifically address supply chain vulnerabilities and the importance of assessing third-party models.
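
An AI bill of materials (AI-BOM) can begin as structured entries recording each third-party component, its provenance, and its criticality class - analogous to an SBOM entry. The schema below is an illustrative assumption, not a standard format.

```python
from dataclasses import dataclass

CRITICALITY_LEVELS = ("low", "moderate", "high")  # illustrative scale

@dataclass
class AIBomEntry:
    component: str     # e.g., a pre-trained model or training dataset
    supplier: str
    version: str
    intended_use: str
    criticality: str   # drives how much oversight the component gets

    def __post_init__(self):
        if self.criticality not in CRITICALITY_LEVELS:
            raise ValueError(f"criticality must be one of {CRITICALITY_LEVELS}")

ai_bom = [
    AIBomEntry("chest-xray-foundation-model", "ExampleVendor Inc.", "2.3.1",
               "radiology triage", "high"),
    AIBomEntry("speech-to-text-engine", "ExampleVoice LLC", "5.0",
               "clinical note dictation", "moderate"),
]

# High-criticality components get reviewed first.
priority = sorted(ai_bom, reverse=True,
                  key=lambda e: CRITICALITY_LEVELS.index(e.criticality))
```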

Maturity in managing third-party risks can be measured using NIST implementation tiers, ranging from "Partial" (reactive) to "Adaptive" (proactive and continuous monitoring). Progressing through these tiers also aligns with readiness for ISO/IEC 42001 certification, which requires auditable processes for AI management [7]. Cross-functional teams, including legal, compliance, and technical experts, should evaluate the transparency and risks of third-party components using the NIST Generative AI Profile’s subcategories.

Conclusion

AI compliance in healthcare is far from a simple task. It requires a comprehensive framework that weaves together HIPAA-compliant vendor risk management, the NIST AI Risk Management Framework, and federal AI governance guidelines. Treating these requirements in isolation can lead to disjointed strategies that lack the integrated oversight needed to address the complexities of AI in healthcare.

When compliance efforts lack structure, organizations leave themselves exposed to significant risks. Governance gaps can arise, accountability may become unclear, and third-party risks might slip through the cracks. These vulnerabilities not only heighten the risk of regulatory penalties but also open the door to cybersecurity threats that could jeopardize patient data on a massive scale.

To address these challenges, a balanced strategy combining automation with human oversight is key. Automated tools excel at processing vast amounts of data and flagging potential compliance issues quickly. However, human expertise is crucial for interpreting these findings and making informed decisions about nuanced risks. As Terry Grogan, CISO at Tower Health, aptly noted:

"Censinet RiskOps allowed 3 FTEs to go back to their real jobs! Now we perform more risk assessments with only 2 FTEs required" [1].

In today’s shifting regulatory environment, the fusion of technology and regulatory expertise is a game-changer for healthcare compliance. Centralized platforms can streamline the process by documenting AI systems, running systematic vendor assessments, and offering real-time insights into third-party risks. For example, Censinet's Digital Risk Catalog™ provides risk scores for over 50,000 vendors and reduces reassessment times to less than a day [12].

Matt Christensen, Sr. Director GRC at Intermountain Health, highlights the importance of customized solutions, stating that tailored approaches allow healthcare organizations to integrate compliance seamlessly into their broader risk management strategies [1]. Purpose-driven tools for HIPAA compliance, medical devices, supply chains, and patient data protection enable healthcare providers to adopt AI with confidence, ensuring regulatory adherence and patient safety. By aligning AI risk management with regulatory frameworks, healthcare organizations can break free from fragmented strategies and build trust with their patients.

FAQs

Does my AI tool count as “agentic” under HIPAA?

Your AI tool may fall under the “agentic” category in the 2026 HIPAA updates if it operates independently - handling healthcare tasks, making decisions, and adapting to patient needs without ongoing human oversight. Either way, it must adhere to all of HIPAA's security and privacy rules to safeguard patient information.

What should an AI risk analysis include for HIPAA and NIST?

When evaluating AI systems under HIPAA, the primary concern is how these systems manage Protected Health Information (PHI) while adhering to Privacy, Security, and Breach Notification Rules. Key focus areas should include:

  • Data Access Controls: Ensure only authorized individuals or systems can access PHI.
  • Encryption: Verify that PHI is encrypted both in transit and at rest to prevent unauthorized access.
  • Audit Trails: Implement detailed logging to track who accesses or modifies PHI, ensuring accountability.
  • Re-identification Risks: Assess the likelihood of anonymized data being re-identified and take steps to mitigate this risk.

AI Risk Analysis for NIST

For NIST compliance, it's essential to align with established frameworks like the Cybersecurity Framework (CSF) and the AI Risk Management Framework (AI RMF). This involves addressing several critical vulnerabilities (a bias-check sketch follows the list):

  • Bias: Evaluate AI models for potential biases that could lead to unfair or inaccurate outcomes.
  • Safety: Ensure the AI operates reliably and does not pose risks to users or systems.
  • Cybersecurity Threats: Identify and mitigate risks such as data breaches or adversarial attacks.
  • Continuous Monitoring: Maintain ongoing oversight to detect and respond to emerging threats.
  • Alignment with NIST Controls: Regularly review and ensure compliance with NIST's recommended practices and controls.
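
For the bias item, a common first check is the demographic parity gap: the spread in positive-prediction rates across patient groups. The sketch below uses synthetic data, and the 0.1 threshold is a frequently cited heuristic rather than a NIST requirement.

```python
from collections import defaultdict

def demographic_parity_gap(predictions: list[int], groups: list[str]) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Synthetic example: a model flags patients for follow-up (1 = flagged).
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
if gap > 0.1:  # heuristic threshold; tune to the clinical context
    print(f"Potential bias: parity gap = {gap:.2f}")
```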

By addressing these areas, organizations can better manage AI-related risks while maintaining compliance with HIPAA and NIST standards.

How often should we reassess AI vendors and integrations?

Regularly reassessing AI vendors and integrations is crucial. It's not enough to rely on annual reviews - continuous monitoring is key to maintaining compliance, performance, and security. This ongoing oversight helps ensure systems stay up-to-date with changing regulations and evolving risk management practices.
