How to Implement Joint Commission AI Guidance
Post Summary
As healthcare organizations continue to embrace the transformative potential of artificial intelligence (AI), ensuring responsible and safe deployment becomes paramount. To address this, the Joint Commission, in collaboration with the Coalition for Health AI, has released a comprehensive guidance document aimed at promoting the ethical and effective use of AI in healthcare. This article breaks down the core elements of the new Joint Commission AI guidance, offering actionable insights for healthcare and cybersecurity professionals.
This guidance serves as a roadmap for healthcare delivery organizations (HDOs), emphasizing the importance of minimizing risks while maximizing the utility of AI tools. It builds on key frameworks like the National Institute of Standards and Technology (NIST) AI Risk Management Framework and the National Academy of Medicine's AI Code of Conduct, positioning itself as essential guidance for the future of AI accreditation in healthcare.
Why the Joint Commission Guidance Matters

The Joint Commission is a leading accrediting body for healthcare organizations, and its guidance often signals forthcoming accreditation requirements. By partnering with the Coalition for Health AI, this guidance consolidates best practices, ethical standards, and technical expertise to create a clear pathway for safe AI utilization in healthcare settings.
The document outlines seven critical elements for responsible AI deployment, offering a detailed framework that healthcare organizations can adopt as they navigate the complexities of AI integration. Below, we’ll explore these seven pillars in depth, providing both context and strategies for implementation.
The Seven Pillars of Responsible AI Use in Healthcare
1. AI Policies and Governance Structures
Establishing robust governance is the cornerstone of responsible AI deployment. The guidance emphasizes the need for clear policies and governance frameworks that define how AI is utilized, monitored, and escalated when issues arise. Effective governance should:
- Involve a multidisciplinary team with expertise in clinical care, IT, legal, and compliance.
- Include a transparent escalation pathway for unresolved issues or safety concerns.
- Facilitate open communication about successes and failures in AI deployment.
By creating these structures, healthcare organizations can ensure AI tools are integrated safely and align with organizational goals.
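To make the escalation pathway concrete, here is a minimal sketch in Python. The severity tiers, role names, and routing rules are illustrative assumptions on our part, not requirements from the guidance; the point is that escalation works best when it is written down as an explicit mapping rather than left to ad hoc judgment.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    """Hypothetical severity tiers; the guidance does not prescribe these."""
    LOW = 1        # usability complaint, no patient impact
    MODERATE = 2   # degraded output quality, workaround exists
    HIGH = 3       # potential patient-safety concern
    CRITICAL = 4   # actual or imminent patient harm

@dataclass
class AIIssue:
    tool_name: str
    description: str
    severity: Severity

# Assumed routing: each tier maps to the governance roles that must review it.
ESCALATION_PATH = {
    Severity.LOW: ["tool_owner"],
    Severity.MODERATE: ["tool_owner", "it_lead"],
    Severity.HIGH: ["tool_owner", "it_lead", "clinical_lead", "compliance"],
    Severity.CRITICAL: ["tool_owner", "it_lead", "clinical_lead",
                        "compliance", "legal", "governance_committee"],
}

def route_issue(issue: AIIssue) -> list[str]:
    """Return the reviewers an issue must reach, per the escalation map."""
    return ESCALATION_PATH[issue.severity]

if __name__ == "__main__":
    issue = AIIssue("sepsis-risk-model",
                    "Scores drifting upward on one ICU unit",
                    Severity.HIGH)
    print(f"{issue.tool_name}: notify {', '.join(route_issue(issue))}")
```

In practice, the mapping itself would live in governance policy documents approved by the multidisciplinary team; code like this simply enforces what the committee has decided.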
2. Patient Privacy and Transparency
Trust is foundational in healthcare. For AI tools to succeed, patients and caregivers must trust how their data is handled. The guidance encourages healthcare organizations to:
- Clearly outline how patient data is accessed, used, and protected.
- Implement robust privacy guardrails in compliance with HIPAA and other relevant standards.
- Transparently communicate with patients about where, when, and how AI tools are used in their care.
Transparency doesn't just build trust - it supports informed consent, for example when AI scribes are used in states that require verbal consent before a visit is recorded.
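One way to operationalize that transparency is to log every AI-use disclosure alongside the encounter. The sketch below is a hypothetical example: the field names and the AI-scribe scenario are our own illustration, not a schema from the guidance.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIDisclosureRecord:
    """Hypothetical record of an AI-use disclosure made to a patient.

    Field names are illustrative; actual requirements vary by state and
    by the organization's own consent policies.
    """
    patient_id: str           # internal identifier, never shared externally
    tool_name: str            # e.g. an ambient AI scribe
    disclosed_at: datetime    # when the patient was told AI was in use
    consent_type: str         # "verbal", "written", or "not_required"
    consent_obtained: bool

def record_scribe_consent(patient_id: str, verbal_yes: bool) -> AIDisclosureRecord:
    """Capture a verbal-consent disclosure for an AI-scribe encounter."""
    return AIDisclosureRecord(
        patient_id=patient_id,
        tool_name="ambient-ai-scribe",
        disclosed_at=datetime.now(timezone.utc),
        consent_type="verbal",
        consent_obtained=verbal_yes,
    )

record = record_scribe_consent("pt-001", verbal_yes=True)
print(record.tool_name, record.consent_obtained)
```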
3. Data Security and Data Use Protections
AI's reliance on large datasets expands an organization's attack surface and raises the stakes of a breach. The guidance underscores the importance of safeguarding sensitive data from internal misuse and external threats. Key recommendations include:
- Ensuring all AI vendors and contractors comply with strict data use agreements.
- Detailing permissible and non-permissible uses of patient data.
- Keeping security practices updated to reflect AI’s evolving use across the healthcare ecosystem.
By prioritizing security, organizations can mitigate risks and maintain operational continuity.
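A simple way to make a data use agreement enforceable in software is a deny-by-default allowlist of permitted purposes per vendor. The vendor names and purpose labels below are hypothetical; the pattern is what matters.

```python
# A minimal sketch of enforcing a data use agreement in code: each vendor's
# permitted purposes are listed explicitly, and anything not listed is denied.
DATA_USE_AGREEMENTS: dict[str, set[str]] = {
    "scribe-vendor": {"transcription", "quality_review"},
    "imaging-vendor": {"diagnostic_support"},
}

def is_use_permitted(vendor: str, purpose: str) -> bool:
    """Deny by default: a purpose is allowed only if the DUA names it."""
    return purpose in DATA_USE_AGREEMENTS.get(vendor, set())

assert is_use_permitted("scribe-vendor", "transcription")
assert not is_use_permitted("scribe-vendor", "model_training")  # not in the DUA
assert not is_use_permitted("unknown-vendor", "anything")       # no DUA at all
```

The deny-by-default stance mirrors the guidance's emphasis on detailing non-permissible uses explicitly rather than assuming anything unlisted is fair game.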
4. Ongoing Quality Monitoring
AI tools are not static; they require continuous oversight to ensure optimal performance over time. The guidance recommends:
- Conducting initial validation of AI tools for the specific patient population and health system where they will be used.
- Implementing risk-based monitoring, with higher scrutiny for tools influencing direct patient care.
- Documenting ownership of monitoring responsibilities and escalation protocols for anomalies.
Regular quality checks ensure that AI tools continue to perform as intended, even as data systems and operational environments evolve.
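As a rough illustration of risk-based monitoring, the sketch below compares a tool's rolling performance against the baseline established during initial site-specific validation and flags it for escalation when performance degrades. The AUROC baseline and the 0.05 tolerance are invented numbers for the example, not thresholds from the guidance.

```python
from statistics import mean

# Baseline established during initial, site-specific validation.
# Both values below are illustrative assumptions.
BASELINE_AUROC = 0.86
TOLERANCE = 0.05

def check_performance(recent_auroc_scores: list[float]) -> str:
    """Flag an AI tool whose rolling performance falls below its baseline.

    Returns "ok" or "escalate"; a real deployment would also log the result
    and notify the documented owner of the monitoring responsibility.
    """
    rolling = mean(recent_auroc_scores)
    if rolling < BASELINE_AUROC - TOLERANCE:
        return "escalate"
    return "ok"

print(check_performance([0.85, 0.84, 0.86]))  # ok
print(check_performance([0.79, 0.78, 0.80]))  # escalate
```

Higher-risk tools (those influencing direct patient care) would warrant tighter tolerances and more frequent checks than back-office tools.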
5. Voluntary Blinded Reporting of AI Safety-Related Events
To improve safety and minimize risk, the guidance calls for the establishment of non-punitive reporting pathways for AI-related safety events. This includes:
- Capturing both actual incidents and near-misses involving AI tools.
- Analyzing patterns to identify and address systemic issues.
- Sharing findings within the organization and, where applicable, with vendors or regulatory agencies like the FDA.
By creating a culture of transparency and learning, healthcare organizations can refine their AI implementations and prevent future errors.
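A blinded report can be as simple as a record that carries no patient or reporter identifiers and references the tool by a salted hash. The structure below is a hypothetical illustration of what such a de-identified event record might look like, not a format specified by the guidance.

```python
import hashlib
from dataclasses import dataclass, asdict

@dataclass
class BlindedAIEvent:
    """Hypothetical de-identified safety event report.

    No patient or reporter identifiers are stored; the tool is referenced
    by a salted hash so the organization can match events internally
    without naming the tool or vendor in shared analyses.
    """
    tool_hash: str                    # salted hash of the tool name
    event_type: str                   # "incident" or "near_miss"
    description: str                  # free text, scrubbed of identifiers upstream
    contributing_factors: list[str]

def blind_tool_name(tool_name: str, salt: str) -> str:
    """Derive a short, stable, non-reversible reference for a tool."""
    return hashlib.sha256((salt + tool_name).encode()).hexdigest()[:12]

event = BlindedAIEvent(
    tool_hash=blind_tool_name("sepsis-risk-model", salt="org-secret"),
    event_type="near_miss",
    description="Alert suppressed during interface downtime; caught on manual review.",
    contributing_factors=["interface_outage", "alert_fatigue"],
)
print(asdict(event))
```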
6. Risk and Bias Assessment
AI is not immune to bias, and mitigating this risk starts at the development stage. The guidance recommends using tools like the Coalition for Health AI’s Applied Model Card to document:
- Known risks, biases, and limitations of AI tools.
- Intended end-users, workflows, and operational contexts for each tool.
Bias is not limited to data - it can also emerge in how AI tools are designed, deployed, and used. Regular assessments should extend beyond the technical model to include the broader "action space", ensuring that tools do not inadvertently perpetuate inequities in care.
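For teams that want to capture this documentation in a structured, versionable form, here is a simplified sketch. To be clear, the field names are our own and do not reproduce the Coalition for Health AI's actual Applied Model Card schema; they merely illustrate the kind of information the card is meant to hold.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Simplified stand-in for a model card; fields are illustrative only."""
    tool_name: str
    intended_users: list[str]          # e.g. ["ED triage nurses"]
    intended_workflow: str             # where in the care process it runs
    known_limitations: list[str] = field(default_factory=list)
    known_biases: list[str] = field(default_factory=list)

card = ModelCard(
    tool_name="readmission-risk-model",
    intended_users=["case managers"],
    intended_workflow="discharge planning review",
    known_limitations=["not validated for pediatric patients"],
    known_biases=["under-predicts risk for patients with sparse records"],
)
print(card.tool_name, card.known_biases)
```

Keeping cards like this in version control gives the governance team a paper trail of how a tool's documented risks evolve over its lifecycle.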
7. Education and Training
The final pillar emphasizes the importance of equipping healthcare professionals with the knowledge to effectively use AI tools. Training should:
- Be role-specific, focusing on what the tool does, its limitations, and proper usage.
- Include general AI literacy to prevent over-reliance on algorithms and reduce susceptibility to their biases.
- Provide clear pathways for staff to access tool documentation and report unexpected behavior.
Education is not just about compliance - it’s a critical component of change management. Engaging staff as partners in the AI journey fosters trust, innovation, and better outcomes.
Actionable Next Steps for Healthcare Organizations
While the Joint Commission’s guidance is voluntary today, it is likely to inform future accreditation standards. To prepare, healthcare organizations can take proactive steps:
- Inventory Existing AI Tools: Create a comprehensive list of all AI tools currently in use, documenting their purpose, data sources, and associated risks (a minimal inventory sketch follows this list).
- Establish AI Governance Frameworks: Form a dedicated governance team with clear roles, protocols, and escalation pathways for AI oversight.
- Train and Educate Staff: Implement targeted training programs to ensure all staff understand their responsibilities with AI tools and know how to report issues.
- Conduct Risk Assessments: Use tools like model cards to evaluate and document potential biases and risks for every AI tool in use.
- Develop Monitoring Protocols: Set up ongoing validation and monitoring cycles tailored to each tool’s impact on patient care or operations.
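For the inventory step above, even a plain CSV maintained from code can be enough to start. The columns and example rows below are hypothetical, but they mirror the purpose, data-source, and risk documentation that step calls for.

```python
import csv

# A minimal AI tool inventory kept as a CSV so it is easy to review in audits.
# The example rows are hypothetical.
FIELDS = ["tool_name", "purpose", "data_sources", "risk_level", "owner"]

inventory = [
    {"tool_name": "ambient-ai-scribe", "purpose": "visit documentation",
     "data_sources": "audio, EHR notes", "risk_level": "moderate",
     "owner": "cmio_office"},
    {"tool_name": "sepsis-risk-model", "purpose": "early warning",
     "data_sources": "vitals, labs", "risk_level": "high",
     "owner": "clinical_informatics"},
]

with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(inventory)
```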
Key Takeaways
- Governance Is Crucial: Implementing AI policies and forming multidisciplinary governance teams ensures safe and ethical AI use.
- Transparency Builds Trust: Clear communication about how AI tools are used fosters patient and caregiver confidence.
- Data Security Is Non-Negotiable: Protecting sensitive patient data from misuse and breaches is essential for operational resilience.
- Bias Requires Vigilance: Regular assessments of AI tools’ risks and biases help prevent inequitable outcomes.
- Education Empowers Staff: Training programs equip employees to use AI effectively and address potential risks proactively.
- Proactive Monitoring Reduces Risk: Ongoing quality checks ensure AI tools remain reliable and safe over time.
Conclusion
The Joint Commission’s AI guidance provides a robust framework for the responsible deployment of artificial intelligence in healthcare. By addressing governance, transparency, security, quality monitoring, safety reporting, risk assessment, and education, the document lays the foundation for safe and effective AI integration. As AI becomes more entrenched in healthcare, organizations that adopt and implement these best practices will not only align with emerging standards but also enhance patient safety, operational efficiency, and trust.
Healthcare and cybersecurity professionals must work collaboratively to operationalize these recommendations, ensuring that AI fulfills its promise of transforming care delivery while safeguarding the people it serves. This guidance is not just a document - it’s a call to action for the healthcare industry to lead with responsibility, foresight, and innovation.
Source: "What's The Joint Commission Saying About Healthcare AI These Days?" - Health Data Ethics Podcast, YouTube, Sep 30, 2025 - https://www.youtube.com/watch?v=mYQUX3IGydo
