The Joint Commission and the Coalition for Health AI on Sept. 17 issued their first joint guidance on the responsible use of AI in healthcare, outlining principles to help hospitals and health systems adopt the technology safely.
The guidance is the first product of a partnership between the two organizations, which was announced in June. It establishes a framework for how healthcare organizations can govern, deploy and monitor AI tools while prioritizing patient safety, data security and transparency.
The document identifies seven elements for responsible AI use: establishing governance structures, protecting patient privacy, ensuring data security, monitoring quality on an ongoing basis, voluntarily reporting AI safety events, assessing risk and bias, and providing education and training.
The Joint Commission and CHAI said the guidance is intended to help healthcare organizations align around best practices for using AI-enabled tools in clinical, administrative and operational settings. It does not focus on validating the effectiveness of specific AI tools but rather on how organizations should use them responsibly.
The groups said the guidelines were shaped by input from hospitals, health systems, technology providers and patient advocates, as well as by existing frameworks such as the National Institute of Standards and Technology’s AI Risk Management Framework and the National Academy of Medicine’s AI Code of Conduct.
The Joint Commission said it began working on AI guidance in 2024 after hospitals and health systems asked for direction on implementation.
CHAI is a nonprofit group founded by clinicians to establish consensus on AI’s use in healthcare.