
The RAISE Framework

Reference architecture for AI deployment in critical and regulated environments. Five interdependent pillars, applicable to any sector-specific regulatory corpus.

Genesis and necessity

Most existing AI frameworks for regulated environments treat compliance as an external constraint, something added after the fact to satisfy a regulator, obtain certification or respond to an audit. RAISE starts from the opposite observation.

Compliance that arrives after design is not compliance. It is compensation. And compensation does not scale.

AI systems that fail in critical environments are not, for the most part, technical failures. They are architecture failures: systems designed out of context, deployed in organizations unprepared to absorb them, within regulatory frameworks they had not anticipated.

RAISE is the structural response to this problem: a framework that does not overlay system design but is constitutive of it.

RAISE and existing frameworks

A predictable objection must be addressed up front: the market is not short of frameworks for AI in regulated environments. The NIST AI Risk Management Framework, ISO/IEC 42001, Microsoft's Responsible AI Maturity Model, the EU AI Act and sector-specific evaluation grids (HAS for medical devices in France, ANSSI references for critical systems) each cover a part of the subject. Why propose yet another?

The answer rests on a distinction of nature, not of scope. These frameworks are not equivalent to each other, and none of them is architectural in nature.

NIST AI RMF (1.0, 2023)
Nature: voluntary risk management framework.
What it prescribes: four organisational functions: Govern, Map, Measure, Manage. Catalogue of practices to identify and treat risks attached to an AI system.
What it does not capture: imposes no design decision. Does not address sector-specific regulatory compliance. Says nothing about the technical interoperability contracts between adjacent systems.

ISO/IEC 42001 (2023)
Nature: certifiable AI Management System (AIMS).
What it prescribes: PDCA cycle applied to AI. Policy, roles, registers, management review, continuous improvement.
What it does not capture: quality management framework, not engineering. A system can be 42001-certified and structurally non-deployable. Certification attests to the process, not to the object designed.

EU AI Act (Regulation (EU) 2024/1689)
Nature: normative text.
What it prescribes: risk-tier classification. Technical requirements for high-risk systems (Articles 9 to 15). Transparency, human oversight, robustness obligations.
What it does not capture: legislative text, not methodological. Describes a required end-state, not the path to reach it. Assumes the operator can translate the text into architecture, which is precisely the point that fails.

RAI Maturity Model (Microsoft, 2023)
Nature: organisational maturity model.
What it prescribes: five levels (Latent → Leading) across governance, culture and tooling dimensions. Diagnoses an organisational state.
What it does not capture: organisational maturity ≠ system architecture. An organisation at the Leading level can produce structurally defective systems. The reverse is also possible.

HAS, DM-IA grid (2024)
Nature: sector-specific evaluation grid.
What it prescribes: evaluation criteria for AI-based medical devices seeking reimbursement admission.
What it does not capture: sectoral (health, France), evaluative (post-design). Not usable in the architecture phase for a product still being specified.

RAISE
Nature: architecture framework.
What it prescribes: five interdependent pillars integrated into design decisions. Production of constraints opposable to technical choices from the specification phase onwards.
What it does not capture: not a management standard. Not certifiable as such. Not a normative text. Not a post-hoc evaluation grid.

RAISE does not substitute for any of these frameworks. It precedes them. A system designed under RAISE is better placed to satisfy ISO/IEC 42001, to pass a HAS evaluation or to absorb the EU AI Act without re-architecture, because these frameworks describe a required end-state that RAISE architecture produces as a consequence, not as a post-hoc compliance effort.

Existing frameworks state what must be reached. RAISE structures how the system is designed.

Framework structure

RAISE is an operational acronym: each letter designates a non-negotiable pillar of industrial AI deployment. The five pillars are interdependent; the failure of one structurally compromises the others.

R: Pillar 1, Regulatory Architecture

Integrating regulatory frameworks from the design phase, not during validation, not in response to an audit. Active reading of applicable regulatory texts (EU AI Act, MDR/IVDR, GDPR, HDS, NIS2…) from the architecture phase, translated into concrete design constraints.

A: Pillar 2, Accountability & Governance

Clear definition of responsibilities at every system level: who decides, who controls, who answers in case of failure. Algorithmic governance, auditability and institutional risk control.

I: Pillar 3, Interoperability Standards

System capacity to durably integrate into existing infrastructures. Data traceability, digital sovereignty, interoperability standards (FHIR, HL7, open APIs) and information flow continuity.

S: Pillar 4, Safety & Operational Validation

System validation in real operational conditions. Usage safety, effective human supervision, decision reversibility and failure protocols.

E: Pillar 5, Explainability & Ethics

System capacity to justify its decisions intelligibly for operators, decision-makers and affected individuals. Decisional legitimacy.
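Several pillars converge on traceability: pillar I demands data traceability, pillar A demands auditability. As a minimal sketch, a single AI-assisted decision could be captured in a provenance record of the following shape. All field names here are illustrative assumptions, loosely inspired by the spirit of FHIR's Provenance resource; a real deployment would map them onto the site's actual interoperability standard.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class DecisionProvenance:
    """Minimal provenance record for one AI-assisted decision.

    Illustrative sketch: field names are assumptions, not a
    normative schema from the RAISE reference document.
    """
    system_id: str    # which model/version produced the output (pillar I)
    input_ref: str    # reference to the exact input payload
    output_ref: str   # reference to the produced output
    operator_id: str  # human in the loop (pillar A)
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def fingerprint(self) -> str:
        """Stable hash over the record, making it tamper-evident."""
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

rec = DecisionProvenance(
    system_id="risk-model@2.3.1",
    input_ref="patient/123/encounter/456",
    output_ref="stratification/789",
    operator_id="clinician-42",
)
print(rec.fingerprint())
```

The point of the fingerprint is that the same record always hashes to the same value, so a governance committee (pillar A) can verify after the fact that the trace it audits is the trace that was written.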

Interdependency matrix

The five pillars are not independent. Their interdependence is not a side effect of the framework, it is constitutive. This is precisely what distinguishes RAISE from a compliance checklist, where each line can be validated separately.

Interdependence operates on three registers: the constitutive chain that orders the pillars by design priority, the feedback loops that restate an upstream pillar in light of a downstream one, and the complete matrix that qualifies the twenty directional dependencies between pillars.

Figure: the five RAISE pillars arranged as a pentagon. The constitutive chain R → A → I → S → E is shown as solid edges; three feedback loops (E → R, S → A and I → A) as dashed edges. Transverse dependencies are described in the matrix below.

Constitutive chain: R → A → I → S → E

The order is not alphabetical. It is causal.

R → A: Regulation calls for accountability

A regulatory framework integrated from design (R) is inoperative without a governance structure ensuring its respect over time (A). Regulatory compliance does not maintain itself: it requires identified actors, documented processes and named responsibilities.

A → I: Governance requires interoperability

Auditability (A) is only possible if data is traceable and systems interoperable (I). A system opaque about its data flows is not governed; it is invoked.

I → S: Interoperability conditions validation

Operational validation (S) concerns the system in its real context of use, which requires stable and documented interfaces with adjacent systems (I). A system validated in isolation has been validated only as a prototype.

S → E: Safety legitimises explanation

A system can only claim explainability (E) if it has been operationally validated (S). Explaining an unvalidated decision is not transparency, it is post-hoc rationalisation.

Feedback loops

The constitutive chain is legible. It is also insufficient. It suggests that each pillar can be stabilised once and for all before moving to the next. Architectural practice is the opposite: three feedback loops restate the upstream pillars as the downstream ones are operated.

E → R: Explainability restates the regulatory reading

A system that must be explainable surfaces regulatory requirements that the initial reading of the corpus had not isolated. The need to justify a decision forces a requalification of what, in the regulatory text, is required to be justifiable. The reading of the regulation is updated by the explainability work, not the reverse.

S → A: Validation objectifies accountability

Accountability (A) not anchored in a validation protocol (S) is declarative. As long as operational validation has not surfaced the system's real failure modes, the chain of responsibility rests on assumed failure modes. Validation is the operation that gives substance to governance.

I → A: Interoperability materialises governance

Algorithmic governance (A) lacking the required traceability flows (I) does not control, it estimates. Governance has effective grip on the system only in proportion to what interoperability exposes.

Complete dependency matrix

Pairs not covered by the constitutive chain and the principal feedback loops complete the dependency matrix. Each dependency is directional: the source pillar conditions the target pillar.

R → A: condition of possibility for governance
R → I: determines the required interoperability standards
R → S: delimits the validatable scope
R → E: imposes the minimum exigible explainability threshold
A → R: restates contestable requirements to the regulator
A → I: requires traceability of data and decisions
A → S: qualifies acceptable validation modes
A → E: attests to the institutional legitimacy of explanations
I → R: surfaces gaps in the normative corpus
I → A: makes governance operative through exposed flows
I → S: conditions the validation context
I → E: makes the explanation traceable end-to-end
S → R: surfaces grey zones in the regulatory text
S → A: objectifies the chain of responsibility
S → I: imposes real interface contracts
S → E: precedes any explanatory claim
E → R: restates the reading of the normative corpus
E → A: materialises institutional accountability
E → I: requires a legible trace format
E → S: qualifies what is clinically or operationally acceptable

Twenty directional dependencies, structurally non-substitutable. A failure localised on a single pillar propagates. This propagation is the operational signature of the architecture: a system that fails on a single RAISE pillar is not a partially compliant system, it is a system whose industrialisation is structurally compromised.
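The propagation claim can be made concrete with a minimal sketch: model the twenty directional dependencies as a directed graph (since every pillar conditions every other, the graph is complete) and compute which pillars a single failure reaches. The code below is an illustration of the structural point, not part of the framework itself.

```python
from collections import deque

# Complete dependency matrix: each pillar conditions the four
# others, giving the twenty directional dependencies above.
PILLARS = ["R", "A", "I", "S", "E"]
DEPENDS_ON = {src: [t for t in PILLARS if t != src] for src in PILLARS}

def impacted(failed: str) -> set[str]:
    """Pillars whose guarantees are compromised when `failed` fails.

    Breadth-first propagation along 'source conditions target' edges.
    """
    seen, queue = {failed}, deque([failed])
    while queue:
        cur = queue.popleft()
        for target in DEPENDS_ON[cur]:
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

# A failure on any single pillar propagates to all five:
print(sorted(impacted("S")))  # → ['A', 'E', 'I', 'R', 'S']
```

Because the matrix is complete, no pillar can be sacrificed locally: the reachable set from any single failure is always the whole pentagon, which is exactly what "structurally compromised" means here.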

Regulatory corpora by sector

Healthcare & Life Sciences → sectoral application

EU AI Act · MDR/IVDR · HDS · SNDS · GDPR · HAS DM-IA · ISO 13485 · FHIR R5

Anti-patterns

RAISE can be observed at the surface and betrayed in substance. The eight anti-patterns that follow are not a list of beginner mistakes. They are the most frequent degraded form of each pillar in organisations that believe they are following it. Each has its own logic, its proponents, its post-hoc justifications. All produce the same outcome: a system that satisfies the appearance of RAISE without carrying its function.

The inventory is not exhaustive and never will be: each organisation cultivates its own degraded variants. The shared pattern is the silent substitution of an institutional sign for the function it should carry. RAISE is not betrayed by ignorance. It is betrayed by mimicry.

Operational maturity model

RAISE is not a compliance checklist. It is an architecture framework that structures design decisions, not boxes to tick. Its implementation follows four progressive levels, each defined by its deliverables, its transition criteria toward the next level, and a typical observed duration.

The levels are not strictly sequential. Level 3 (Governance) typically overlaps Level 2 (Architecture), because steering structures are built in parallel with the design decisions they will have to validate. Level 4 (Maintenance) does not close the trajectory: it makes it permanent.

Level 1, Diagnosis: system status against the five pillars
Deliverables: initial coverage matrix (five pillars × status). Mapping of applicable normative corpora. Register of structural gaps prioritised by operational and regulatory impact.
Transition criteria: matrix signed off by leadership. Gaps prioritised with estimated impact. System scope frozen for the next phase.
Typical duration: 6 to 10 weeks for a system of average complexity.

Level 2, Architecture: embedding RAISE constraints into design
Deliverables: Architecture Decision Records (ADR) tied to RAISE constraints. Architecture diagrams annotated by pillar. Documented admission ports. Traceability specifications. Register of technological choices opposed to RAISE and their justification.
Transition criteria: ADRs validated in architecture review. Critical-path prototypes operational. Interface contracts signed with adjacent systems.
Typical duration: 3 to 9 months depending on system size.

Level 3, Governance: establishing steering structures
Deliverables: algorithmic governance charter. Complete RACI on the five pillars. Validation committee constituted and operating. Internal audit procedures. Documented failure and reversibility protocols.
Transition criteria: first internal audit cycle completed. Committee operating at quarterly cadence minimum. Reversibility procedures tested under simulation.
Typical duration: 4 to 8 months, partially overlapping Level 2.

Level 4, Maintenance: continuous monitoring of RAISE compliance
Deliverables: multi-pillar surveillance dashboard. Incident and drift log. Register of framework revisions. Periodic compliance reports against sectoral normative corpora.
Transition criteria: 12-month operational continuity. Demonstrated capacity to absorb a major regulatory revision without system re-architecture. Stable audit-velocity indicator.
Typical duration: permanent.
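The Level 1 coverage matrix and its prioritised gap register can be sketched as plain data. Status values and impact scores below are invented examples for illustration, not a scoring scale defined by RAISE.

```python
# Illustrative Level 1 "initial coverage matrix": five pillars × status,
# with an estimated impact score per gap (both values hypothetical).
coverage = {
    "R": {"status": "partial", "impact": 3},
    "A": {"status": "absent",  "impact": 5},
    "I": {"status": "covered", "impact": 0},
    "S": {"status": "partial", "impact": 4},
    "E": {"status": "absent",  "impact": 4},
}

# Register of structural gaps, prioritised by impact (matching the
# Level 1 exit criterion "gaps prioritised with estimated impact").
gaps = sorted(
    (p for p, c in coverage.items() if c["status"] != "covered"),
    key=lambda p: coverage[p]["impact"],
    reverse=True,
)
print(gaps)  # → ['A', 'S', 'E', 'R']
```

Keeping the matrix as data rather than as a slide is what makes the Level 1 → 2 transition a prioritisation decision instead of a communication deliverable.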

RAISE compliance is not an attained state, it is a sustained regime. An organisation that has reached Level 4 has not finished. It has begun to exist under RAISE.

Doctrinal FAQ

Seven questions recur frequently when RAISE is presented to decision-makers, architects or auditors. They are addressed here in their canonical formulation. The answers are deliberately compact; full developments appear in the reference document.

Why not the NIST AI Risk Management Framework?

The NIST AI RMF is a risk management framework, RAISE is an architecture framework. This distinction is not a quarrel of words. NIST describes four organisational functions, Govern, Map, Measure, Manage, that an organisation can adopt without modifying the design of the system it deploys. RAISE produces constraints opposable to technical choices from the specification phase onwards: data architecture, interface contracts, logging, human oversight, domains of validity. An organisation can be NIST RMF compliant and deploy a structurally non-governable system. RAISE makes that case impossible by construction. The two frameworks are compatible: a system designed under RAISE satisfies, with no additional effort, the Map, Measure and Manage functions of NIST. The reverse is not true.

Is RAISE certifiable?

No, and this is deliberate. An architecture framework is not a quality management standard. ISO 9001 certification attests that an organisation follows processes; ISO 13485 that it manages a medical device's lifecycle; ISO/IEC 42001 that it steers its AI systems via an AI Management System. None guarantees the architectural quality of the system produced. RAISE does not certify an organisation, it qualifies a system. Its proof of conformance takes the form of signed and opposable Architecture Decision Records, not a certificate issued by a third-party body. This absence of certification is itself a doctrinal stance: a system that satisfies RAISE demonstrates this through its deployability, not through its badge.
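What a "signed and opposable" ADR might carry can be sketched as a record like the one below. The field names are assumptions for illustration; the RAISE-ADR templates in the reference document define the authoritative structure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RaiseADR:
    """Illustrative RAISE-tied Architecture Decision Record.

    Hypothetical field set; not the official RAISE-ADR template.
    """
    adr_id: str
    pillar: str            # "R", "A", "I", "S" or "E"
    constraint: str        # the RAISE constraint being addressed
    decision: str          # the technical choice made
    alternatives: tuple    # options considered and rejected
    signatories: tuple     # who signed off, making the record opposable

adr = RaiseADR(
    adr_id="ADR-017",
    pillar="I",
    constraint="End-to-end traceability of decision flows",
    decision="Route all model outputs through an append-only audit log",
    alternatives=("application-level logging only",),
    signatories=("lead architect", "DPO"),
)
assert adr.pillar in {"R", "A", "I", "S", "E"}
```

The record is frozen (immutable) on purpose: an opposable decision that can be silently edited after signature is not opposable.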

Does RAISE apply to LLMs?

Yes, with a reinforced demand on two pillars. Pillar S (operational validation) becomes harder to instantiate, because the model's non-determinism breaks naive reproducibility: validation by frozen holdout is no longer sufficient, and must be complemented by statistical protocols on the stability of output in distribution. Pillar E (explainability) becomes more imperative, because the post-hoc explanations available for LLMs (SHAP, attention maps, chain-of-thought) are structurally weaker than for tabular models: the practice of effective contestation must therefore rest on upstream devices (versioned prompts, traceable context, format guardrails) rather than downstream ones. More broadly, pillar R now incorporates the 2024 AI Act, which specifically qualifies general-purpose models and applies to them a regime distinct from classic high-risk systems.
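One of the "upstream devices" mentioned, versioned prompts with traceable context, can be sketched as follows: fingerprint the full prompt assembly so that a contested output can be tied back to the exact template version, retrieved context and model. Function and field names are assumptions for illustration.

```python
import hashlib
import json

def prompt_fingerprint(template_id: str, template_text: str,
                       context_refs: list[str], model_id: str) -> str:
    """Stable identifier for one LLM prompt assembly.

    Stored alongside each output so a contested decision can be
    reproduced in context. Illustrative sketch, not a standard API.
    """
    payload = json.dumps({
        "template_id": template_id,
        "template_sha": hashlib.sha256(template_text.encode()).hexdigest(),
        "context_refs": context_refs,   # retrieved documents, in order
        "model_id": model_id,           # model name and version
    }, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

fp = prompt_fingerprint("triage-v3", "You are a triage assistant...",
                        ["doc:123", "doc:456"], "llm@2025-01")
print(fp)
```

Any change to the template text, the retrieved context or the model version changes the fingerprint, which is what makes the trace usable for contestation rather than merely for logging.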

How does RAISE position itself relative to the EU AI Act?

The EU AI Act is a normative text, RAISE is the architecture that makes that text operationally applicable. The AI Act describes, in its Articles 9 to 15, the technical requirements for high-risk systems: risk management, data quality, logging, transparency, human oversight, robustness, accuracy, cybersecurity. It does not state how these requirements translate into design decisions. RAISE produces this translation: pillar S for robustness and validation, pillar I for data quality and cybersecurity, pillar A for oversight and accountability, pillar E for transparency and explainability, pillar R for the cross-reading itself. An organisation that deploys a high-risk system without explicit architecture ends up translating the AI Act into a post-hoc checklist, which is precisely the compliance by amendment anti-pattern.
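The translation described in this answer can be written down directly as data. The requirement labels below paraphrase the themes of Articles 9 to 15; the pillar assignments are exactly those stated in the paragraph above.

```python
# Which RAISE pillar carries the translation of each AI Act
# high-risk requirement theme (labels are paraphrases of the
# themes in Articles 9-15).
RAISE_TRANSLATION = {
    "robustness and validation": "S",
    "data quality": "I",
    "cybersecurity": "I",
    "human oversight": "A",
    "accountability": "A",
    "transparency": "E",
    "explainability": "E",
    "cross-reading of the corpus": "R",
}

# Every pillar is engaged by at least one requirement theme:
assert set(RAISE_TRANSLATION.values()) == {"R", "A", "I", "S", "E"}
```

A mapping like this is the opposite of a post-hoc checklist: it is established before design, so each requirement theme has an architectural owner from the specification phase onwards.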

Does SHAP-based explainability satisfy pillar E?

No, and this is the subject of the decorative explainability anti-pattern documented above. SHAP, LIME and their equivalents produce a technical explanation, an attribution of weights to input features. This attribution answers the question “what contributed to the output”, it does not answer the question “can this output be contested and sent back for review”. Pillar E requires situated explainability, calibrated on the operational questions the user must be able to ask the system. A clinician consulting an oncology risk stratification does not need the list of SHAP values, they need to be able to compare two similar patients classified differently and request a review when the difference does not appear justifiable to them. SHAP can be part of the instantiation, it cannot constitute the whole.
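The clinician's operation described above, comparing two similar cases classified differently and requesting a review, can be sketched as a minimal check. The similarity measure here (a naive shared-feature ratio) and the threshold are assumptions for illustration; a real system would use a domain-appropriate metric.

```python
def flag_for_review(case_a: dict, case_b: dict,
                    outputs: dict, similarity_threshold: float) -> bool:
    """Flag a pair of similar cases with diverging outputs for review.

    Illustrative sketch of 'situated explainability': the operator
    compares cases instead of reading attribution weights.
    """
    shared = sum(1 for k in case_a if case_a.get(k) == case_b.get(k))
    similarity = shared / max(len(case_a), 1)  # naive shared-feature ratio
    diverging = outputs["a"] != outputs["b"]
    return similarity >= similarity_threshold and diverging

# Two hypothetical patients differing on a single feature:
a = {"age_band": "60-70", "stage": "II", "marker": "pos", "comorbid": "no"}
b = {"age_band": "60-70", "stage": "II", "marker": "pos", "comorbid": "yes"}
print(flag_for_review(a, b, {"a": "high-risk", "b": "low-risk"}, 0.7))  # → True
```

Note what the check does not need: any feature attribution. It answers the pillar-E question ("can this output be contested and sent back for review") rather than the SHAP question ("what contributed to the output").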

How long does it take to reach RAISE Level 4 maturity?

Two to three years in an existing mid-size organisation, provided that the executive sponsor is identified from Level 1 onwards and that the Architecture phase (Level 2) is not truncated. Two thresholds concentrate most of the programme risk. The Level 1 → 2 threshold (Diagnosis → Architecture) typically fails when the initial coverage matrix is treated as a communication deliverable rather than a prioritisation decision: gaps are identified without being ranked, and the subsequent architecture tries to address everything simultaneously. The Level 3 → 4 threshold (Governance → Maintenance) fails when the validation committee is constituted but does not hold halt authority: it becomes the declarative governance anti-pattern. Organisations that cross both thresholds without shortcuts reach Level 4 in approximately 30 months.

Is RAISE French?

Doctrinally, RAISE is anchored in the European legal context and in French healthcare, the ground on which it has been stabilised, notably through the PREDICARE programme and the Qualees / Twingital Institute ecosystem. Structurally, it is transposable: its five pillars operate on universal regulatory objects (regulatory architecture, accountability, interoperability, operational validation, ethical explainability) which appear in all regulated environments beyond France. The applicable normative corpus changes by country: FDA and HIPAA in the United States, MHRA and UK GDPR in the United Kingdom, PMDA in Japan. The grammar of the five pillars remains invariant. The least trivial transposition concerns pillar R, because the cross-reading of national and federal regimes is not symmetric across jurisdictions. Pillar E, in turn, raises transversal questions that European case law (the SCHUFA ruling) clarifies better than North American common law, which makes RAISE perhaps more demanding on its native ground than in its transpositions.

Reference document

A condensed Executive Abstract is available for English-reading practitioners.

This page presents the public architecture of the framework: genesis, differentiation, five pillars, interdependency matrix, anti-patterns, maturity model, doctrinal FAQ. The Executive Abstract is a 7-page synthesis. The complete reference document, currently in two French volumes (86 pages, 25,700 words), develops the sectoral evaluation grids, RAISE-ADR templates, detailed transition criteria, sectoral application cases (Healthcare, Defence, Finance, Critical Infrastructure) and anti-pattern remediation protocols. A complete English edition is in preparation.


Your AI system is performant. Is it deployable?

If any of these questions remains open, that's where the work begins.