Intellectual and operational territory

Domains of work

Nine domains organised across three axes — the structural questions encountered by any industrial deployment of AI in a critical and regulated environment.

These nine domains do not constitute a competency catalogue. They are the territories of questions that recur, structurally, for any actor attempting to bring a performant AI system to sustainable industrial deployment.

Each domain is addressed in its connections to the others — with the RAISE Framework as the guiding thread, and active projects (PREDICARE, TweenMe) as the concrete field of application.

Axis I · Design & Architecture
  D1 · Architecture of AI systems at scale
  D2 · Data engineering as infrastructure
  D3 · System lifecycle and degradation

Axis II · Regulation & Governance
  D4 · Regulatory compliance by design
  D5 · Algorithmic governance
  D6 · Digital sovereignty and geopolitics

Axis III · Scale & Viability
  D7 · Performance evaluation and measurement
  D8 · Economics of industrial AI
  D9 · Human and organisational transformation

Axis I · D1 – D3

Design & Architecture

Architectural questions are the first that must be resolved — and the most costly to correct after the fact. This axis covers the internal structure of industrial AI systems: how they are built, how their data is treated, and how they age.

D1 · Architecture of AI systems at scale

A performant AI system in a test environment is not necessarily a deployable system. Architecture at scale raises questions distinct from algorithmic performance: organisational integration (how the system fits into existing workflows and decision-making processes), observability (can one see what the system is doing in production?), resilience (how does the system behave under partial failure?) and reversibility (can one roll back if the system drifts or produces unacceptable decisions?).

These questions are not reducible to technical problems — they engage organisational architectural choices as much as software ones. An AI system without a reversibility mechanism is not merely a technical risk: it is a governance risk that engages the accountability of decision-makers.
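
A minimal sketch of how these properties can surface in code, assuming a hypothetical predict function and fallback; all names below (GuardedModel, predict_fn, fallback_fn) are illustrative, not a specific framework. It combines structured logging for observability, a fallback path for graceful degradation, and a deactivation flag for reversibility.

    import json
    import logging
    import time

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("guarded_model")

    class GuardedModel:
        """Illustrative wrapper adding observability, degradation and reversibility."""

        def __init__(self, predict_fn, fallback_fn):
            self.predict_fn = predict_fn      # the model under supervision
            self.fallback_fn = fallback_fn    # degraded mode (e.g. rule-based default)
            self.enabled = True               # kill switch: set False to deactivate

        def __call__(self, features: dict):
            started = time.time()
            if not self.enabled:
                # Reversibility: the system can be switched off without redeployment.
                result, mode = self.fallback_fn(features), "disabled"
            else:
                try:
                    result, mode = self.predict_fn(features), "model"
                except Exception as exc:
                    # Graceful degradation: a partial failure never blocks the workflow.
                    log.warning("model failure, falling back: %s", exc)
                    result, mode = self.fallback_fn(features), "fallback"
            # Observability: every decision leaves a structured, queryable trace.
            log.info(json.dumps({
                "mode": mode,
                "latency_ms": round((time.time() - started) * 1000, 1),
                "inputs": features,
                "output": result,
            }))
            return result

    # Usage: model = GuardedModel(my_model.predict, lambda f: "defer_to_human")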

Structural questions

  • Is the system designed to operate under real deployment conditions, not only in testing?
  • Can one observe in real time what the system produces and how?
  • What are the mechanisms for graceful degradation and deactivation?
  • Is human supervision structural or merely cosmetic?

RAISE pillars

S — Safety · A — Accountability · E — Explainability

D2 · Data engineering as infrastructure

In most organisations, data is treated as a resource — something extracted, transformed and consumed. Data engineering as infrastructure starts from a different position: data is infrastructure, in the same way as networks or storage systems. It must be designed to last, evolve, and be audited.

This entails requirements of quality (is data reliable at source?), traceability (can one reconstruct the origin and transformations of any given data point?), interoperability (can data circulate between systems without loss of meaning?) and temporal stability (do data schemas evolve in a controlled manner?).
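
As one concrete illustration of traceability and schema versioning (the TracedRecord structure and field names below are hypothetical, not a specific tool): each record carries its schema version and an append-only transformation log, so the genealogy of any data point can be reconstructed.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    SCHEMA_VERSION = "2.1.0"  # versioned and documented like code

    @dataclass
    class TracedRecord:
        """A data record that carries its own provenance."""
        payload: dict
        source: str                                  # origin system
        schema_version: str = SCHEMA_VERSION
        lineage: list = field(default_factory=list)  # append-only transformation log

        def transform(self, name: str, fn):
            """Apply a transformation and record it in the lineage."""
            self.payload = fn(self.payload)
            self.lineage.append({
                "step": name,
                "at": datetime.now(timezone.utc).isoformat(),
                "schema_version": self.schema_version,
            })
            return self

    # Usage: reconstructing the genealogy of a value is reading record.lineage.
    record = TracedRecord(payload={"hr": 112}, source="monitor_A")
    record.transform("unit_normalisation", lambda p: {"hr_bpm": p["hr"]})
    print(record.lineage)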

In the context of TweenMe, this question takes a concrete and demanding form: how does one build a reliable digital twin from heterogeneous data of variable quality, sourced from systems that were not designed to interoperate? The Smart Data Fertilizer module specifically addresses the HDLSS case — high dimensionality, low sample size.

Structural questions

  • Can one reconstruct the genealogy of a decision from the data that produced it?
  • Are data schemas versioned and documented like code?
  • How is silent data quality degradation detected and managed in production?
  • Are data interoperable with upstream and downstream systems?

RAISE pillars

I — Interoperability · S — Safety

D3 · System lifecycle and degradation

A deployed AI system is not static. It drifts. Input data evolves, the real distribution diverges from the training distribution, user behaviours change, and regulatory frameworks are updated. Model degradation is an inevitable phenomenon — the question is not how to prevent it, but how to detect, measure and manage it.

This domain covers continuous operational monitoring (model monitoring), periodic revalidation protocols, retraining triggers, and the governance of the complete lifecycle — from production deployment through to system retirement. In regulated environments, any significant model update may constitute a change requiring conformity reassessment.
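
One common detection technique, among others, is the population stability index (PSI), which compares the distribution of a feature in production against its training reference. The sketch below is a minimal illustration; the 0.2 alert threshold is a widely used convention rather than a universal constant, and the helper names are ours.

    import math

    def psi(reference, production, bins=10):
        """Population stability index between two samples of one feature."""
        lo, hi = min(reference), max(reference)

        def proportions(sample):
            counts = [0] * bins
            for x in sample:
                # Clamp to the reference range so out-of-range production
                # values fall into the first or last bin.
                i = min(max(int((x - lo) / (hi - lo) * bins), 0), bins - 1)
                counts[i] += 1
            # A small floor avoids log(0) on empty bins.
            return [max(c / len(sample), 1e-6) for c in counts]

        ref, prod = proportions(reference), proportions(production)
        return sum((p - r) * math.log(p / r) for r, p in zip(ref, prod))

    # Usage: a PSI above ~0.2 is commonly read as significant drift,
    # i.e. a trigger for investigation or revalidation, not automatic retraining.
    reference = [0.1 * i for i in range(100)]          # training distribution
    production = [0.1 * i + 3.0 for i in range(100)]   # shifted in production
    if psi(reference, production) > 0.2:
        print("drift detected: trigger revalidation protocol")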

Structural questions

  • How does one detect that a model has drifted before its outputs become harmful?
  • Who is accountable for the revalidation of an updated model?
  • Is the system lifecycle documented and auditable?
  • At what point should a system be retired rather than updated?

RAISE pillars

S — Safety · A — Accountability · R — Regulatory

Axis II · D4 – D6

Regulation & Governance

Compliance is not an external constraint added after the fact — it is an architectural property. This axis addresses the conditions under which an AI system can be legitimately deployed, maintained and controlled in a regulated environment.

D4 · Regulatory compliance by design

Regulatory compliance treated as a final phase — an audit performed once the system is built — structurally produces either non-compliant systems or prohibitive remediation costs. Compliance by design starts from the inverse principle: applicable regulatory requirements are read, understood and translated into design constraints from the architectural phase onward.

The AI regulatory environment is multiple and stratified: the EU AI Act for high-risk systems, MDR/IVDR for medical device software, GDPR for personal data, HDS for healthcare data hosting, NIS2 for the cybersecurity of critical infrastructure. These instruments do not exclude one another — they overlap, creating cumulative obligations that only an architectural reading can address without redundancy.
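
To make this architectural reading concrete, a deliberately simplified sketch; the instrument-to-obligation mapping below is a coarse illustration, not a legal analysis. Read together, applicable instruments yield one deduplicated set of design constraints rather than one compliance silo per regulation.

    # Illustrative only: real obligations are far richer than these labels.
    OBLIGATIONS = {
        "EU AI Act": {"risk_management", "logging", "human_oversight", "technical_documentation"},
        "MDR/IVDR":  {"clinical_evaluation", "post_market_surveillance", "technical_documentation"},
        "GDPR":      {"lawful_basis", "data_minimisation", "logging"},
        "HDS":       {"certified_hosting", "logging"},
        "NIS2":      {"incident_reporting", "risk_management"},
    }

    def design_constraints(applicable: list[str]) -> set[str]:
        """Union of obligations across instruments: cumulative, but deduplicated.

        'logging' appears in three instruments yet becomes a single
        architectural requirement, implemented once and mapped back to each.
        """
        constraints = set()
        for instrument in applicable:
            constraints |= OBLIGATIONS[instrument]
        return constraints

    # Usage: a healthcare AI system under four instruments at once.
    print(sorted(design_constraints(["EU AI Act", "MDR/IVDR", "GDPR", "HDS"])))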

Structural questions

  • Which regulatory instruments apply to this system — and in what order of priority?
  • Are documentation and traceability requirements integrated from the design phase?
  • Is the system designed to be auditable, or merely to pass an audit?
  • Who within the organisation is responsible for continuous regulatory monitoring?

RAISE pillars

R — Regulatory · A — Accountability

D5 · Algorithmic governance

Algorithmic governance denotes the set of mechanisms by which an organisation exercises effective control over the AI systems it deploys. This is not a policy question — it is a question of organisational architecture.

It covers three interdependent dimensions: accountability (who answers for the system's decisions?), auditability (can one reconstruct the decision chain for each system output?) and risk control (do effective mechanisms exist to interrupt, correct or contest the system's decisions?). Human supervision, to be real, must be structural — not cosmetic.
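
A minimal sketch of what auditability can look like at the code level, with hypothetical names throughout: every output is written to an append-only decision log linking inputs, model version and an accountable human role, and a contestation references the decision it challenges.

    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    from uuid import uuid4

    @dataclass(frozen=True)
    class DecisionRecord:
        """One immutable entry in an append-only decision log."""
        decision_id: str
        model_version: str
        inputs: dict
        output: str
        accountable_role: str    # a named role, not 'the algorithm'
        at: str

    AUDIT_LOG: list = []   # stand-in for an append-only store

    def record_decision(model_version, inputs, output, accountable_role):
        rec = DecisionRecord(
            decision_id=str(uuid4()),
            model_version=model_version,
            inputs=inputs,
            output=output,
            accountable_role=accountable_role,
            at=datetime.now(timezone.utc).isoformat(),
        )
        AUDIT_LOG.append(rec)
        return rec

    def contest(decision_id: str, reason: str):
        """Contestation references the decision chain it challenges."""
        target = next(r for r in AUDIT_LOG if r.decision_id == decision_id)
        return {"contested": asdict(target), "reason": reason}

    # Usage: reconstructing the chain for any output is a log lookup.
    rec = record_decision("triage-v3.2", {"age": 67}, "priority_2", "duty_physician")
    print(contest(rec.decision_id, "clinical presentation inconsistent with priority"))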

Structural questions

  • Who, by name, is accountable for the decisions produced by the system?
  • Can one reconstruct the decision chain for any given system output?
  • Is human supervision effective, or merely formally documented?
  • Does a mechanism exist for contesting algorithmic decisions?

RAISE pillars

A — Accountability · E — Explainability

D6 · Digital sovereignty and geopolitics

Digital sovereignty is not an abstract industrial policy question — it is a concrete deployment constraint. For organisations deploying AI in critical environments, it translates into architectural decisions: where is data hosted? Under which jurisdiction? Who has access, under what conditions and with what contractual guarantees?

The geopolitical dimension has intensified with the proliferation of national and regional regulatory regimes: French data localisation requirements for healthcare data (HDS certification), SecNumCloud-qualified trusted cloud for public administrations, ITAR constraints for certain defence sectors. These requirements are not optional — they condition the very legality of deployment. Sovereign AI is not a patriotic luxury: it is a condition of deployability.
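
As an illustration of sovereignty as a deployment constraint, a sketch of a pre-deployment policy check. The qualification names are real schemes; the data structures, region label and policy values are invented for the example.

    # Illustrative deployment policy for a French healthcare workload.
    REQUIRED_REGION = "eu-west-fr"
    REQUIRED_QUALIFICATIONS = {"HDS", "ISO 27001"}   # SecNumCloud for public-sector workloads

    def check_deployability(provider: dict) -> list:
        """Return the list of blocking findings; empty means deployable."""
        findings = []
        if provider["region"] != REQUIRED_REGION:
            findings.append(f"data hosted in {provider['region']}, not {REQUIRED_REGION}")
        missing = REQUIRED_QUALIFICATIONS - set(provider["qualifications"])
        if missing:
            findings.append(f"missing qualifications: {sorted(missing)}")
        if not provider["contractual_no_extraterritorial_access"]:
            findings.append("no contractual guarantee against extra-territorial access")
        return findings

    # Usage: sovereignty failures block deployment before any model is shipped.
    provider = {
        "region": "eu-west-fr",
        "qualifications": ["ISO 27001"],
        "contractual_no_extraterritorial_access": False,
    }
    for finding in check_deployability(provider):
        print("BLOCKING:", finding)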

Structural questions

  • Is data processed by the system localised in compliance with applicable sovereignty requirements?
  • Do the cloud providers used hold the required qualifications (HDS, SecNumCloud, ISO 27001)?
  • Do cloud provider contracts guarantee the absence of extra-territorial access?
  • Is the cloud strategy coherent with foreseeable regulatory developments over 3–5 years?

RAISE pillars

R — Regulatory · I — Interoperability

Axis III · D7 – D9

Scale & Viability

A viable AI system is not merely one that works — it is one that can be evaluated, funded and sustained over time by organisations and individuals. This axis addresses the economic, organisational and human conditions of industrial deployment.

D7 · Performance evaluation and measurement

Measuring the performance of an AI system in production is structurally more difficult than measuring its performance in testing. In testing, metrics are chosen, conditions are controlled, data is clean. In production, the system operates in a noisy environment, with imperfect data, on cases the training set had not anticipated.

This domain covers evaluation metrics adapted to critical environments — clinical accuracy, false positive rates in high-stakes contexts, robustness across sub-populations, algorithmic fairness — and validation protocols under real conditions. In regulated environments, performance evaluation is also a compliance obligation: results must be documented, reproducible and publishable.
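
A minimal sketch of sub-population evaluation, on invented data and with an assumed per-group sensitivity floor: the point is that a single aggregate metric can hide a subgroup on which the system underperforms.

    def sensitivity(labels, predictions):
        """True positive rate: of the real positives, how many were caught."""
        positives = [(l, p) for l, p in zip(labels, predictions) if l == 1]
        return sum(1 for _, p in positives if p == 1) / len(positives)

    def evaluate_by_subgroup(rows, floor=0.80):
        """Flag subgroups whose sensitivity falls below an agreed floor.

        `rows` are (subgroup, label, prediction) triples; the 0.80 floor is
        an illustrative threshold that real stakeholders would have to set.
        """
        groups = {}
        for subgroup, label, pred in rows:
            groups.setdefault(subgroup, ([], []))
            groups[subgroup][0].append(label)
            groups[subgroup][1].append(pred)
        report = {}
        for g, (labels, preds) in groups.items():
            s = sensitivity(labels, preds)
            report[g] = {"sensitivity": round(s, 2), "below_floor": s < floor}
        return report

    # Usage: aggregate sensitivity looks acceptable; one subgroup does not.
    rows = [("under_65", 1, 1)] * 90 + [("under_65", 1, 0)] * 10 \
         + [("over_65", 1, 1)] * 6 + [("over_65", 1, 0)] * 4
    print(evaluate_by_subgroup(rows))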

Structural questions

  • Are evaluation metrics defined against real business objectives, not merely algorithmic benchmarks?
  • How are sub-populations for which the system performs less well identified and evaluated?
  • Are performance results reproducible and documentable for a regulatory audit?
  • Who defines acceptable performance thresholds — and on what basis?

RAISE pillars

S — Safety · E — Explainability

D8 · Economics of industrial AI

Industrial AI has a cost — in energy, in infrastructure, in human time, and in organisational capital. These costs are frequently underestimated in the design phase and overestimated in the sales phase, generating poorly calibrated investment decisions in both directions.

This domain covers the real economics of deployment: total cost of ownership (TCO), return on investment measured against genuine business indicators (not algorithmic metrics), energy constraints (datacentre infrastructure and electrical grids cannot indefinitely absorb growing computational demand), and funding models adapted to public and regulated environments.
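
A deliberately simple order-of-magnitude sketch, in which every figure is invented for illustration: total cost of ownership over a deployment horizon, set against value measured on a business indicator rather than an algorithmic metric.

    # All figures are illustrative placeholders, not benchmarks.
    YEARS = 5

    annual_costs = {
        "inference_infrastructure": 120_000,    # compute, storage, energy
        "monitoring_and_revalidation": 60_000,  # D3 is a recurring cost, not a one-off
        "regulatory_maintenance": 40_000,       # audits, documentation updates
        "training_and_support": 30_000,         # the human side of deployment
    }
    initial_development = 400_000

    tco = initial_development + YEARS * sum(annual_costs.values())

    # ROI against a business indicator (e.g. avoidable adverse events prevented),
    # not an algorithmic proxy such as AUC.
    value_per_prevented_event = 15_000
    prevented_events_per_year = 25
    business_value = YEARS * value_per_prevented_event * prevented_events_per_year

    print(f"TCO over {YEARS} years: {tco:,} EUR")
    print(f"Business value: {business_value:,} EUR")
    print(f"ROI: {business_value / tco:.2f}")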

Structural questions

  • What is the actual total cost of deployment — beyond initial development costs?
  • Have the energy constraints of the infrastructure been evaluated at the intended scale?
  • Is the economic model viable within the applicable regulatory framework (healthcare billing, public funding, etc.)?
  • Is ROI measured against business indicators or algorithmic proxies?

RAISE pillars

R — Regulatory · A — Accountability

D9 · Human and organisational transformation

AI systems that fail in production rarely fail for purely technical reasons. They fail because the organisations deploying them were not prepared to absorb them — neither structurally nor culturally.

This domain addresses the cognitive load imposed on operators (a physician receiving 50 predictive alerts per day will effectively act on none of them), organisational resistance (AI systems disrupt existing power balances and workflows), training (users must understand the system's limitations, not merely its interface) and the social acceptability of algorithmic decisions in high-stakes human contexts.
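
One concrete mechanism for respecting that load, sketched with hypothetical names and an illustrative budget value: an alert budget that caps daily alerts per operator and surfaces only the highest-risk cases instead of forwarding every prediction.

    def within_budget(alerts, daily_budget=5):
        """Keep only the highest-risk alerts within an operator's daily budget.

        `alerts` are (case_id, risk_score) pairs; the budget of 5 is an
        illustrative figure that would be set with the clinicians themselves.
        """
        ranked = sorted(alerts, key=lambda a: a[1], reverse=True)
        return ranked[:daily_budget]

    # Usage: 50 raw alerts become 5 actionable ones; the remainder are
    # logged for review, not silently dropped.
    raw_alerts = [(f"case_{i}", i / 50) for i in range(50)]
    for case_id, risk in within_budget(raw_alerts):
        print(f"alert {case_id} (risk {risk:.2f})")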

Structural questions

  • Were end users involved in the design — not merely in validation?
  • Has the cognitive load the system imposes on operators been measured?
  • Do training programmes cover system limitations, not only features?
  • How does the organisation handle cases where the system and the human expert diverge?

RAISE pillars

E — Explainability · A — Accountability · S — Safety

Your AI system performs well.
Is it actually deployable?

Is it economically sustainable?
Is it governable over time?

If any of these questions remain open,
that is where the work begins.