Guidelines for AI quality in healthcare

AI in Healthcare Guideline: assessing the safety, quality and effectiveness of AI in healthcare

How do you assess artificial intelligence (AI) algorithms in the healthcare domain for their safety, accuracy, bias, trustworthiness and effectiveness? The AI in Healthcare Quality Assessment Guideline gives you an overview of the most important criteria, requirements and recommendations for assessing AI innovations across all phases of the AI lifecycle: from data and intended use, through development, validation and accuracy evaluation, to impact, deployment, implementation and upscaling.

Generative AI in Healthcare Guideline (GENAISIS)

The global GENAISIS project (GENerative AI Evaluation in Healthcare Systems and Implementation Standard) has delivered a safety and quality guideline with the key considerations and criteria for responsibly developing, validating, evaluating, integrating, implementing and monitoring generative AI, notably Large Language Models (LLMs), in healthcare settings. The GENAISIS guideline was developed and tested through an extensive and rigorous global survey process among more than 120 experts and stakeholders from 22 countries and is now in its final phase. The guideline is expected to be issued before the summer of 2026.

The AI innovation funnel

The AI innovation funnel has been developed by and for the field. It maps the steps AI developers take, focusing on the room for action within existing laws and regulations.

Position of the guideline

An overview of how the guideline is positioned within the landscape of existing legislation and regulations, including its links with the GDPR and the MDR.

More information

More information about the guideline is available at datavoorgezondheid.nl.

Questions or remarks? These can be shared via the LinkedIn page Guideline quality AI in healthcare.
