RAISE Framework · Sectoral Application: Healthcare & Life Sciences
Healthcare & Life Sciences is, to date, the sector where the RAISE doctrine has been most densely tested. This sectoral application grounds each of the five pillars in the applicable normative corpus, the typical failure modes, and the operational commitments they impose.
Healthcare is an environment in which AI is not a topic of innovation but a topic of liability. An algorithmic decision in oncology, in imaging, in pharmacovigilance, or in a class IIa or higher medical device mobilises chains of legal, medical and institutional responsibility that pre-exist the digital system. The role of an architecture framework is not to invent these chains; it is to ensure that the digital system is made compatible with them before it is deployed.
RAISE produces this compatibility by construction. This sectoral application describes, for each of the five pillars, the expected Healthcare instantiation, the normative corpora involved, the failure modes that mark an instantiation as insufficient, and the concrete operational commitment that RAISE imposes on the system. The granularity remains intentionally architectural: detailed evaluation grids and sectoral ADR templates appear in the reference document.
The regulatory corpus applicable to AI in Healthcare is among the densest of any sector, and it thickens with each legislative cycle. In a medical environment, pillar R requires holding simultaneously, from design onwards, a cross-reading of at least four regimes: the Medical Devices Regulation MDR (EU 2017/745), or its diagnostic equivalent IVDR (EU 2017/746), which qualifies the product as a medical device and sets its risk class; the EU AI Act, whose Article 6 triggers the high-risk qualification as soon as an AI-based medical device crosses certain thresholds of decision autonomy; the GDPR, whose Article 9 governs the processing of health data as a special category; and the HAS evaluation grid for AI-based medical devices (Grille DM-IA, 2024), which conditions reimbursement admission in France and formalises technical requirements that the MDR alone does not capture.
The most frequent failure mode is ontological collision: a system designed solely under the MDR lens belatedly discovers the additional obligations of the AI Act (transparency toward the clinician user, decision logging, effective human oversight under Article 14), and must be re-architected mid-certification. Pillar R of RAISE requires that, at the specification phase, an Architecture Decision Record explicitly carry the MDR + AI Act + GDPR + HAS DM-IA cross-reading, identify the cumulative requirements, and record the trade-offs where the regimes diverge (for example, the AI Act high-risk classification may impose obligations that an MDR class IIa device would not carry by default). This cross-reading is not a formality; it is a condition of deployability.
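As a hedged illustration of what "an ADR that carries the cross-reading" can mean in practice, the following Python sketch records each regime's requirements and the trade-offs where they diverge. The schema, class names and fields are hypothetical; RAISE prescribes the content of the record, not this representation.

```python
# Illustrative sketch only: RAISE does not prescribe this schema. It shows
# one way an Architecture Decision Record could carry the MDR + AI Act +
# GDPR + HAS DM-IA cross-reading as structured, reviewable data.
from dataclasses import dataclass, field


@dataclass
class RegimeRequirement:
    regime: str          # e.g. "MDR 2017/745", "EU AI Act", "GDPR", "HAS DM-IA"
    clause: str          # e.g. "AI Act Art. 14 (human oversight)"
    requirement: str     # the obligation as it applies to this system
    cumulative: bool     # True if it adds to, rather than duplicates, other regimes


@dataclass
class CrossReadingADR:
    adr_id: str
    decision: str                      # the architectural decision taken
    requirements: list[RegimeRequirement] = field(default_factory=list)
    divergences: list[str] = field(default_factory=list)   # recorded trade-offs
    deployable: bool = False           # set only once all four regimes are held

    def record_divergence(self, note: str) -> None:
        """Record a trade-off where regimes diverge, e.g. AI Act high-risk
        obligations exceeding MDR class IIa defaults."""
        self.divergences.append(note)
```

The design choice the sketch makes visible is that divergences are first-class records, not footnotes: an ADR with an empty requirements list for one of the four regimes is itself a reviewable finding.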
The governance of an AI system in Healthcare fits into a chain of responsibility that pre-exists the digital system: manufacturer responsibility under the MDR (Person Responsible for Regulatory Compliance, Article 15), the responsibility of the clinician user, that of the healthcare establishment, and that of the Data Protection Officer for GDPR-relevant processing. Pillar A requires that this chain be explicitly mapped and that information flows between actors be documented and enforceable.
Concretely, this means: a quality system compliant with ISO 13485 covering the entire device lifecycle; a RACI mapping over algorithmic decisions (who validates a deployment, who can decide a withdrawal, who qualifies a drift as a vigilance incident); an internal validation committee with clinical, quality, regulatory and data representation; a vigilance procedure compliant with MDR Articles 87 to 92 for serious incidents; and a communication protocol with the notified body for substantial modifications. The typical failure mode is responsibility diluted by service contract: the software vendor positions itself as a mere technical provider, the establishment as an end user, and no one carries manufacturer responsibility under the MDR. RAISE rejects this dilution: there must be an identified manufacturer, signatory of the declaration of conformity, who assumes the system's compliance over time.
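The RACI mapping lends itself to an explicit, machine-checkable form. The sketch below is a minimal illustration assuming hypothetical role and decision names; the point it makes is that every algorithmic decision must resolve to exactly one accountable role.

```python
# Illustrative sketch only: one way to make the RACI mapping over
# algorithmic decisions explicit and machine-checkable. Role names and
# decision labels are hypothetical examples, not RAISE vocabulary.
RACI = {
    "deployment_validation": {
        "responsible": "clinical_lead",
        "accountable": "PRRC",             # MDR Art. 15 person
        "consulted": ["DPO", "quality_manager"],
        "informed": ["notified_body_liaison"],
    },
    "model_withdrawal": {
        "responsible": "quality_manager",
        "accountable": "PRRC",
        "consulted": ["clinical_lead"],
        "informed": ["hospital_CIO", "DPO"],
    },
    "drift_as_vigilance_incident": {
        "responsible": "pms_officer",      # post-market surveillance
        "accountable": "PRRC",
        "consulted": ["clinical_lead", "data_science_lead"],
        "informed": ["notified_body_liaison"],
    },
}


def accountable_for(decision: str) -> str:
    """Every algorithmic decision must resolve to exactly one accountable
    role; a KeyError here is itself a governance finding."""
    return RACI[decision]["accountable"]
```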
Interoperability in a medical environment is not an integration convenience; it is a condition of continuity of care and of regulatory traceability. The standards involved are stable and well documented: HL7 FHIR R5 for structured clinical exchanges, DICOM for imaging, SNOMED CT and LOINC for terminology, IHE for integration profiles, and, more recently, the InteropSanté profiles specific to the French ecosystem. A Healthcare AI system should never rest on undocumented proprietary formats or on unversioned connectors toward HIS (Hospital Information Systems), EHRs, national patient records, or registries.
Pillar I further imposes requirements that the standards alone do not cover: HDS-certified hosting (Health Data Hosting, the French regime) for patient data, an application security policy compliant with IEC 81001-5-1, which specifies cybersecurity requirements over the lifecycle of health software, and an IEC 62443 cybersecurity framework or equivalent for connected components. The typical failure mode is interoperability by CSV export to the EHR (an anti-pattern documented on the main page): the system exposes structured data, but the traceability of the exchange is not guaranteed, schema versions are not negotiated, and responsibility for flow integrity is implicitly transferred to the hospital CIO. Healthcare interoperability compliant with RAISE exposes versioned, logged and enforceable FHIR interface contracts.
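A minimal sketch of what a versioned and logged FHIR exchange can look like, assuming a hypothetical EHR endpoint. The fhirVersion MIME-type parameter used for version negotiation is defined by the FHIR specification; the resource content, endpoint URL and logging setup are illustrative, not RAISE artefacts.

```python
# Illustrative sketch only: a versioned, logged FHIR exchange against a
# hypothetical endpoint. The fhirVersion MIME parameter is part of the
# FHIR specification; everything else here is an example.
import json
import logging

import requests

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("fhir-exchange")

FHIR_BASE = "https://ehr.example.org/fhir"   # hypothetical EHR endpoint
HEADERS = {
    # Negotiate the schema version explicitly instead of assuming it.
    "Content-Type": "application/fhir+json; fhirVersion=5.0",
    "Accept": "application/fhir+json; fhirVersion=5.0",
}

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org", "code": "718-7"}]},  # hemoglobin
    "valueQuantity": {"value": 13.2, "unit": "g/dL"},
}

resp = requests.post(f"{FHIR_BASE}/Observation", headers=HEADERS,
                     data=json.dumps(observation), timeout=10)
# Journal the exchange so responsibility for flow integrity stays with the
# system rather than being transferred implicitly to the hospital CIO.
log.info("POST Observation -> %s, version=%s", resp.status_code,
         resp.headers.get("Content-Type"))
resp.raise_for_status()
```

The contrast with the CSV-export anti-pattern is the point: the schema version is negotiated in the request itself, and the exchange leaves an attributable log entry.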
Operational validation of an AI system in Healthcare is governed by a dedicated and demanding corpus: IEC 62304 on the medical-device software lifecycle, ISO 14971 on risk management (FMEA, residual-risk analysis, benefit-risk balance), ICH E6(R3) Good Clinical Practice for validation phases involving patient data, and the FDA's 21 CFR Part 11 for electronic systems in regulated US environments. The HAS DM-IA grid adds specific requirements on clinical performance, algorithmic robustness, and transferability across populations.
Pillar S requires that validation not be a stage but a sustained regime: pre-deployment validation protocols (clinical investigation under MDR Annex XV, or clinical evaluation under Annex XIV); continuous post-market surveillance (PMS, MDR Article 83, and PMCF for clinical performance); production model-drift monitoring with explicit thresholds; and a withdrawal or suspension procedure that can be activated on a drift signal or incident. The typical failure mode is validation by single cohort: the system is validated on one population, deployed on another, and the loss of performance is observed only through vigilance incidents. Pillar S of RAISE requires explicit documentation of the system's domain of validity (populations covered, hospitals represented in the training cohort, intended clinical conditions of use) and the rejection of any out-of-domain prediction rather than the production of an unqualified output.
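A minimal sketch of both commitments follows: drift monitoring against an explicit threshold, and rejection of out-of-domain inputs instead of an unqualified output. The population stability index (PSI) is one common drift metric among several; the 0.2 threshold and the age range are illustrative assumptions, and real values belong in the device's validation dossier.

```python
# Illustrative sketch only: drift monitoring with an explicit threshold,
# plus rejection of out-of-domain inputs. The PSI threshold and the
# feature range are hypothetical, not RAISE or HAS requirements.
import numpy as np

PSI_SUSPEND_THRESHOLD = 0.2   # example value; must be set per device


def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between the training-cohort distribution and production inputs."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)   # avoid log(0) on empty bins
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))


def check_drift(train_feature: np.ndarray, prod_feature: np.ndarray) -> None:
    psi = population_stability_index(train_feature, prod_feature)
    if psi > PSI_SUSPEND_THRESHOLD:
        # Activates the withdrawal/suspension procedure required by pillar S.
        raise RuntimeError(f"Drift signal: PSI={psi:.3f}, suspend and review")


def predict_or_reject(age: float, model) -> float:
    # Domain of validity documented at validation time (hypothetical range):
    if not (18.0 <= age <= 90.0):
        raise ValueError("Out-of-domain input: no unqualified output produced")
    return model(age)
```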
Explainability in Healthcare does not reduce to a user-interface requirement. It is a condition of medical practice itself: a clinician who does not understand an algorithmic output cannot assume responsibility for it, and therefore cannot legally integrate it into their decision without re-qualifying it. Pillar E must produce situated explainability, that is, explainability calibrated to the clinical questions the clinician must be able to ask the system, not to the variables the algorithm happened to use.
Concretely: for an oncology risk-stratification system, useful explainability is not the list of SHAP features with their weights; it is the capacity to answer the question "why is this patient classified high risk when a comparable patient on the same ward is classified moderate risk". For a pharmacovigilance system, useful explainability is the traceability of the database signals that triggered the alert. For an imaging detection system, useful explainability is the localised heatmap plus documentation of the negative cases where the model is known to fail. The corpus involved includes EU AI Act Article 13 on transparency and user information, Article 14 on effective human oversight, GDPR Article 22 and Recital 71 on the right to explanation of an automated decision affecting the data subject, and the HAS DM-IA grid, which formalises interpretability requirements specific to the medical context. The typical failure mode is decorative explainability: SHAP displayed next to every prediction, without an effective practice of contestation or a protocol enabling a clinician to send a decision back for review. Pillar E of RAISE rejects this decoration: the explanatory output must be consumable by the clinician within their workflow, and institutionally enforceable.
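As a hedged sketch of situated explainability, the function below turns raw per-feature attributions (SHAP values or otherwise) into the contrastive answer the clinician actually asks. All names, fields and the linear contrast heuristic are hypothetical; the institutional protocol for contesting the classification remains the binding part.

```python
# Illustrative sketch only: from raw attributions to a contrastive,
# clinician-facing answer ("why high risk rather than moderate?").
# Function and field names are hypothetical, not RAISE vocabulary.
def contrastive_explanation(patient: dict[str, float],
                            comparator: dict[str, float],
                            attributions: dict[str, float],
                            top_k: int = 3) -> str:
    """Rank the features whose weighted values differ most between the two
    patients and phrase the difference for the clinical workflow."""
    diffs = {f: attributions[f] * (patient[f] - comparator[f])
             for f in attributions}
    drivers = sorted(diffs, key=lambda f: abs(diffs[f]), reverse=True)[:top_k]
    lines = [f"- {f}: patient={patient[f]}, comparable case={comparator[f]}"
             for f in drivers]
    return ("Classified high risk rather than moderate mainly because of:\n"
            + "\n".join(lines)
            + "\nUse the review protocol to contest this classification.")
```

The design intent is the inversion announced above: the output is organised around the clinical question being asked, and it closes with the contestation path, which is what makes the explanation institutionally usable rather than decorative.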
This page presents the public instantiation of RAISE in a Healthcare environment. The reference document develops MDR-class evaluation grids, sectoral ADR templates, decision trees for the MDR + AI Act cross-reading, post-market validation protocols, and HAS DM-IA documentation templates.