Genesis & rationale
Most existing frameworks on AI in regulated environments treat compliance as an external constraint — something added after the fact to satisfy a regulator, obtain a certification or respond to an audit. RAISE starts from the opposite premise.
Compliance added after design is not compliance. It is compensation. And compensation does not hold at scale.
AI systems that fail in critical environments do not, for the most part, fail for technical reasons. They fail for architectural reasons — systems designed out of context, deployed into organisations that were not prepared to absorb them, within regulatory frameworks they had not anticipated.
RAISE is the structural response to this problem: a framework that does not layer on top of the system's design but is constitutive of it.
Framework structure
RAISE is an operational acronym: each letter designates a non-negotiable pillar of industrial AI deployment. The five pillars are interdependent — the failure of any one structurally compromises the others.
- **R**: Regulatory Architecture
- **A**: Accountability & Governance
- **I**: Interoperability Standards
- **S**: Safety & Operational Validation
- **E**: Explainability & Ethics
| — | Pillar | Description & operational questions |
|---|---|---|
| R | Pillar 1: Regulatory Architecture | Integration of regulatory frameworks from the outset of system design — not at the validation stage, not in response to an audit. This requires an active reading of applicable regulatory texts (EU AI Act, MDR/IVDR, GDPR, HDS, NIS2…) during the architecture phase, and their translation into concrete design constraints. Key questions: What is the regulatory risk level of the system? What documentation, transparency and oversight obligations apply? Is the system designed to be auditable? |
| A | Pillar 2: Accountability & Governance | Clear definition of responsibilities at every level of the system — who decides, who controls, who is accountable in the event of failure. This pillar covers algorithmic governance (who validates system decisions?), auditability (can the decision chain be reconstructed?) and institutional risk control. Key questions: Who is accountable for the system's decisions? Is there effective, documented human oversight? Are the control mechanisms themselves controllable? |
| I | Pillar 3: Interoperability Standards | The system's capacity to integrate durably into existing technical, organisational and regulatory infrastructures. This includes data traceability, digital sovereignty (hosting, localisation, access rights), interoperability standards (FHIR, HL7, open APIs) and continuity of information flows. Key questions: Is data localised in accordance with sovereign requirements? Can the system communicate with adjacent systems without loss of traceability? |
| S | Pillar 4: Safety & Operational Validation | Validation of the system under real operational conditions — not only in a test environment. This pillar covers operational safety (the system does what it is supposed to do, under the conditions in which it will be used), effective human oversight, reversibility of decisions and failure protocols. Key questions: Has the system been validated under real operational conditions? Is there a deactivation or reversibility mechanism? Are operators trained on the system's limits? |
| E | Pillar 5: Explainability & Ethics | The system's capacity to justify its decisions in an intelligible way for operators, decision-makers and the people concerned. This pillar goes beyond algorithmic transparency — it touches on decision-making legitimacy: a decision taken by an AI system must be explainable, contestable and, where necessary, overridden by a competent human. Key questions: Can the system explain its decisions at different levels of technical expertise? Are identified biases documented and monitored? Is there an effective right of recourse? |
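For diagnostic purposes, the five pillars and their key questions can be carried around as a simple data structure. The sketch below is purely illustrative — the names `Pillar` and `RAISE_PILLARS` are hypothetical and not part of the Framework, and the question lists are abbreviated from the table above.

```python
from dataclasses import dataclass, field

@dataclass
class Pillar:
    """One RAISE pillar with a few of its key diagnostic questions."""
    code: str
    name: str
    questions: list[str] = field(default_factory=list)

# Hypothetical representation of the five pillars (questions abbreviated).
RAISE_PILLARS = [
    Pillar("R", "Regulatory Architecture", [
        "What is the regulatory risk level of the system?",
        "Is the system designed to be auditable?",
    ]),
    Pillar("A", "Accountability & Governance", [
        "Who is accountable for the system's decisions?",
        "Is there effective, documented human oversight?",
    ]),
    Pillar("I", "Interoperability Standards", [
        "Is data localised in accordance with sovereign requirements?",
    ]),
    Pillar("S", "Safety & Operational Validation", [
        "Has the system been validated under real operational conditions?",
    ]),
    Pillar("E", "Explainability & Ethics", [
        "Can the system explain its decisions at different levels of expertise?",
    ]),
]

# The acronym is recovered from the pillar codes.
assert "".join(p.code for p in RAISE_PILLARS) == "RAISE"
```

A structure like this is what a Level 1 diagnostic (see below) would iterate over when recording gaps per pillar.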
Structural interdependencies
The five pillars are not independent. Their interdependence is constitutive of the Framework — this is precisely what distinguishes it from a compliance checklist.
R → A: Regulation calls for accountability
A regulatory framework integrated from the design stage (R) is inoperative without a governance structure that ensures its maintenance over time (A). Regulatory compliance does not sustain itself — it requires identified actors, documented processes and named responsibilities.
A → I: Governance requires interoperability
Auditability (A) is only possible if data is traceable and systems are interoperable (I). A system that is opaque about its data flows cannot be effectively governed — responsibilities cannot be exercised over what cannot be observed.
I → S: Interoperability conditions validation
Operational validation (S) bears on the system in its real use context — which assumes that interfaces with adjacent systems are stable and documented (I). Validating a system in isolation from its operational environment is a partial validation.
S → E: Safety legitimises explanation
A system can only claim explainability (E) if it has been operationally validated (S). The explanation of an unvalidated decision is not transparency — it is post-hoc rationalisation. Decision-making legitimacy rests on a prior demonstration of operational reliability.
A system that fails on a single RAISE pillar is not a partially compliant system. It is a system whose industrialisation is structurally compromised.
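The interdependencies above can be read as a small dependency graph: each pillar is inoperative without the pillars it depends on, so a single failure cascades. The sketch below models only that cascade; the function and variable names are hypothetical, not part of the Framework.

```python
# "X -> Y" in the Framework reads as "X is inoperative without Y".
DEPENDS_ON = {
    "R": {"A"},   # R -> A: regulation calls for accountability
    "A": {"I"},   # A -> I: governance requires interoperability
    "S": {"I"},   # I -> S: interoperability conditions validation
    "E": {"S"},   # S -> E: safety legitimises explanation
}

def compromised(failed: set[str]) -> set[str]:
    """Return every pillar structurally compromised by the failed ones.

    A pillar is compromised if it failed directly, or if any pillar it
    depends on is compromised; iterate until the set stops growing.
    """
    out = set(failed)
    changed = True
    while changed:
        changed = False
        for pillar, deps in DEPENDS_ON.items():
            if pillar not in out and deps & out:
                out.add(pillar)
                changed = True
    return out

# A failure on Interoperability (I) cascades through A, R, S and E:
assert compromised({"I"}) == {"I", "A", "R", "S", "E"}
```

This is the interdependency claim in miniature: depending on where the failure sits in the graph, one failing pillar can compromise all five.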
Normative corpora by sector
RAISE is a generic framework — it applies to any critical and regulated environment. Each sector mobilises a distinct normative corpus, which RAISE integrates at the level of each of its pillars.
- Healthcare & Life Sciences
- Defence & National Security
- Public Sector & Administrations
- Industry & Critical Infrastructure
- Finance & Insurance
- Clinical Research & Biotech
Operational implementation
RAISE is not a certification framework. It is an architectural framework — it structures design decisions, not boxes to tick. Its implementation follows four progressive levels.
Level 1 — Diagnostic
Assessment of the existing system (or system under design) against the five pillars. Identification of structural gaps — not to correct them immediately, but to make them visible and prioritisable.
Level 2 — Architecture
Integration of RAISE constraints into the system's architectural decisions. At this level, RAISE influences technology choices, data models, interfaces and supervision protocols.
Level 3 — Governance
Establishment of governance structures corresponding to the five pillars: validation committees, auditability procedures, accountability chains, failure protocols. This level concerns the organisation, not only the technical system.
Level 4 — Maintenance
Continuous monitoring of the deployed system against the five pillars. Regulatory frameworks evolve, systems drift, organisations change. RAISE compliance is not a state — it is a process that must be actively maintained over time.
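Compliance-as-a-process can be made concrete as a staleness check over the five pillars: record when each pillar was last re-assessed and flag the overdue ones. This is a minimal sketch; the 90-day interval and all names are assumptions for illustration — real review intervals would be set by the governance structures of Level 3.

```python
from datetime import date, timedelta

# Hypothetical review interval; in practice set per pillar by governance.
REVIEW_INTERVAL = timedelta(days=90)

def stale_pillars(last_reviewed: dict[str, date], today: date) -> list[str]:
    """Return pillars whose last review is older than the interval.

    RAISE compliance is a process, not a state: any pillar that has not
    been re-assessed within the interval must be re-examined.
    """
    return sorted(
        pillar for pillar, reviewed in last_reviewed.items()
        if today - reviewed > REVIEW_INTERVAL
    )

# Example assessment log (dates are fabricated for illustration).
last = {
    "R": date(2023, 12, 1), "A": date(2024, 3, 1), "I": date(2024, 3, 1),
    "S": date(2023, 11, 5), "E": date(2024, 2, 20),
}
print(stale_pillars(last, today=date(2024, 4, 1)))  # ['R', 'S']
```

A check like this would typically run on a schedule and feed the Level 1 diagnostic, closing the loop between Maintenance and Diagnostic.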
Reference document
The RAISE Framework is the subject of a comprehensive reference document — detailed architecture of the five pillars, sector-by-sector evaluation grids, interdependency matrices and implementation protocols.
The document is available on registration. It is intended for decision-makers, system architects, compliance officers and executive leadership in organisations deploying AI in critical environments.
Access the RAISE reference document
Complete document — architecture of the five pillars, evaluation grids, interdependency matrices. Free access on registration.
Your AI system performs well.
Is it actually deployable?
Is it economically sustainable?
Is it governable over time?
If any of these questions remain open,
that is where the work begins.