Intellectual and operational territory
Nine domains organised across three axes — the structural questions encountered by any industrial deployment of AI in a critical and regulated environment.
These nine domains do not constitute a competency catalogue. They map the questions that recur, structurally, for any actor attempting to bring a high-performing AI system to the stage of sustainable industrial deployment.
Each domain is addressed in its connections to the others — with the RAISE Framework as the guiding thread, and active projects (PREDICARE, TweenMe) as the concrete field of application.
Axis I · Design & Architecture
D1 · Architecture of AI systems at scale
D2 · Data engineering as infrastructure
D3 · System lifecycle and degradation

Axis II · Regulation & Governance
D4 · Regulatory compliance by design
D5 · Algorithmic governance
D6 · Digital sovereignty and geopolitics

Axis III · Scale & Viability
D7 · Performance evaluation and measurement
D8 · Economics of industrial AI
D9 · Human and organisational transformation
Axis I · D1 – D3
Architectural questions are the first that must be resolved — and the most costly to correct after the fact. This axis covers the internal structure of industrial AI systems: how they are built, how their data is treated, and how they age.
Design & Architecture
D1 · Architecture of AI systems at scale
An AI system that performs well in a test environment is not necessarily a deployable system. Architecture at scale raises questions distinct from algorithmic performance: organisational integration (how does the system fit into existing workflows and decision-making processes?), observability (can one see what the system is doing in production?), resilience (how does the system behave under partial failure?) and reversibility (can one roll back if the system drifts or produces unacceptable decisions?).
These questions are not reducible to technical problems — they engage organisational architectural choices as much as software ones. An AI system without a reversibility mechanism is not merely a technical risk: it is a governance risk that engages the accountability of decision-makers.
D2 · Data engineering as infrastructure
In most organisations, data is treated as a resource — something extracted, transformed and consumed. Data engineering as infrastructure starts from a different position: data is infrastructure, in the same way as networks or storage systems. It must be designed to last, evolve, and be audited.
This entails requirements of quality (is data reliable at source?), traceability (can one reconstruct the origin and transformations of any given data point?), interoperability (can data circulate between systems without loss of meaning?) and temporal stability (do data schemas evolve in a controlled manner?).
In the context of TweenMe, this question takes a concrete and demanding form: how does one build a reliable digital twin from heterogeneous data of variable quality, sourced from systems that were not designed to interoperate? The Smart Data Fertilizer module specifically addresses the HDLSS case — high dimensionality, low sample size.
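The traceability requirement can be sketched minimally. The names below are hypothetical, and this is not the Smart Data Fertilizer API: each value carries its source system and the full list of transformations applied to it, so its origin can be reconstructed and audited.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class TracedValue:
    """A value carrying its full lineage, so any data point's origin and
    processing history can be reconstructed for audit."""
    value: float
    source: str                              # originating system
    lineage: list = field(default_factory=list)

    def apply(self, name: str, fn) -> "TracedValue":
        new = fn(self.value)
        step = {"op": name, "before": self.value, "after": new}
        return TracedValue(new, self.source, self.lineage + [step])

    def fingerprint(self) -> str:
        # Stable short hash of the lineage, usable as an audit reference.
        return hashlib.sha256(repr(self.lineage).encode()).hexdigest()[:12]

# Example: a lab measurement converted before entering a downstream model.
raw = TracedValue(37.2, source="lab_system_A")
processed = raw.apply("to_fahrenheit", lambda c: c * 9 / 5 + 32)
```

Treating lineage as part of the value, rather than as external documentation, is what makes the "data as infrastructure" stance operational.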
D3 · System lifecycle and degradation
A deployed AI system is not static. It drifts. Input data evolves, the real distribution diverges from the training distribution, user behaviours change, and regulatory frameworks are updated. Model degradation is an inevitable phenomenon — the question is not how to prevent it, but how to detect, measure and manage it.
This domain covers continuous operational monitoring (model monitoring), periodic revalidation protocols, retraining triggers, and the governance of the complete lifecycle — from production deployment through to system retirement. In regulated environments, any significant model update may constitute a change requiring conformity reassessment.
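Drift detection can be made concrete with a standard statistic such as the population stability index (PSI), sketched here under simplifying assumptions: quantile bins taken from the reference sample, and conventional rule-of-thumb thresholds in the comment rather than regulatory values.

```python
import numpy as np

def population_stability_index(reference, production, bins=10):
    """PSI between a training-time reference sample and a production sample.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 act."""
    reference = np.asarray(reference, dtype=float)
    production = np.asarray(production, dtype=float)
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    # Clamp production values into the reference range so every point lands in a bin.
    production = np.clip(production, edges[0], edges[-1])
    ref_frac = np.histogram(reference, edges)[0] / len(reference)
    prod_frac = np.histogram(production, edges)[0] / len(production)
    # Floor the fractions to avoid log(0) on empty bins.
    ref_frac = np.clip(ref_frac, 1e-6, None)
    prod_frac = np.clip(prod_frac, 1e-6, None)
    return float(np.sum((prod_frac - ref_frac) * np.log(prod_frac / ref_frac)))
```

A statistic like this, computed on a schedule, is what turns "retraining triggers" from an intention into a monitored threshold.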
Axis II · D4 – D6
Compliance is not an external constraint added after the fact — it is an architectural property. This axis addresses the conditions under which an AI system can be legitimately deployed, maintained and controlled in a regulated environment.
Regulation & Governance
D4 · Regulatory compliance by design
Regulatory compliance treated as a final phase — an audit passed after building — structurally generates non-compliant systems or prohibitive remediation costs. Compliance by design starts from the inverse principle: applicable regulatory requirements are read, understood and translated into design constraints from the architectural phase onward.
The AI regulatory environment is multiple and stratified: the EU AI Act for high-risk systems, MDR/IVDR for software medical devices, GDPR for personal data, HDS for healthcare data hosting, NIS2 for the cybersecurity of critical infrastructures. These instruments do not exclude one another: they overlap, creating cumulative obligations that only an architectural reading can satisfy without redundant effort.
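The point about cumulative obligations can be illustrated with a toy traceability matrix. The requirement labels below are deliberately simplified placeholders, not a legal reading of the texts: the mechanism shown is that overlapping requirements are merged once, then mapped back to every regime that demands them.

```python
# Illustrative only: requirement labels are hypothetical simplifications.
REGULATIONS = {
    "EU AI Act": {"risk_management", "logging", "human_oversight", "technical_docs"},
    "MDR/IVDR":  {"risk_management", "clinical_evaluation", "technical_docs"},
    "GDPR":      {"data_minimisation", "logging", "dpia"},
    "HDS":       {"certified_hosting", "logging"},
}

def consolidated_requirements(regs):
    """Union of obligations across applicable regimes: each overlapping
    requirement (e.g. logging) is designed once, then traced back to every
    regulation that demands it."""
    merged = {}
    for reg, reqs in regs.items():
        for r in reqs:
            merged.setdefault(r, set()).add(reg)
    return merged

matrix = consolidated_requirements(REGULATIONS)
# "logging" is demanded by three regimes but implemented as a single control.
```

This is the architectural reading in miniature: one control per requirement, with the regulatory cross-references kept as metadata rather than as duplicated implementations.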
D5 · Algorithmic governance
Algorithmic governance denotes the set of mechanisms by which an organisation exercises effective control over the AI systems it deploys. This is not a policy question — it is a question of organisational architecture.
It covers three interdependent dimensions: accountability (who answers for the system's decisions?), auditability (can one reconstruct the decision chain for each system output?) and risk control (do effective mechanisms exist to interrupt, correct or contest the system's decisions?). Human supervision, to be real, must be structural — not cosmetic.
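Because these three dimensions are mechanisms rather than principles, they can be sketched. The wrapper below is a minimal, hypothetical illustration (not a RAISE component): it attaches an accountable operator to every decision, appends an audit record, and exposes an interrupt.

```python
import datetime
import uuid

class GovernedModel:
    """Wraps a model with three governance mechanisms: an accountable
    operator per decision, an append-only audit log, and an interrupt."""

    def __init__(self, model, version):
        self.model, self.version = model, version
        self.audit_log = []
        self.halted = False

    def decide(self, inputs, operator):
        # Risk control: a tripped interrupt blocks all further decisions.
        if self.halted:
            raise RuntimeError("system interrupted by governance control")
        output = self.model(inputs)
        # Auditability: enough to reconstruct who/what/when for this output.
        self.audit_log.append({
            "decision_id": str(uuid.uuid4()),
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model_version": self.version,
            "inputs": inputs,
            "output": output,
            "operator": operator,   # accountable human, if supervision applies
        })
        return output

    def interrupt(self):
        self.halted = True
```

The structural (rather than cosmetic) character of supervision shows up in the code path: no decision can be produced without naming an operator, and the interrupt is enforced before inference, not documented after it.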
D6 · Digital sovereignty and geopolitics
Digital sovereignty is not an abstract industrial policy question — it is a concrete deployment constraint. For organisations deploying AI in critical environments, it translates into architectural decisions: where is data hosted? Under which jurisdiction? Who has access, under what conditions and with what contractual guarantees?
The geopolitical dimension has intensified with the proliferation of national and regional regulatory regimes: French data localisation requirements for healthcare data (HDS certification), SecNumCloud-qualified trusted cloud for public administrations, ITAR constraints for certain defence sectors. These requirements are not optional — they condition the very legality of deployment. Sovereign AI is not a patriotic luxury: it is a condition of deployability.
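As a deployment constraint, sovereignty can be expressed as a machine-checkable policy. This is a toy sketch with illustrative workload, region and certification labels, not an actual HDS or SecNumCloud rule set: a workload is deployable only if the provider satisfies every constraint attached to its class.

```python
# Hypothetical policy: hosting constraints become a deploy-time gate,
# not an afterthought. All labels are illustrative.
POLICY = {
    "health_fr": {"regions": {"eu-fr"}, "certifications": {"HDS"}},
    "public_fr": {"regions": {"eu-fr"}, "certifications": {"SecNumCloud"}},
}

def deployable(workload: str, provider: dict) -> bool:
    """True only if the provider meets every sovereignty constraint
    attached to this workload class (region AND certifications)."""
    rules = POLICY[workload]
    return (provider["region"] in rules["regions"]
            and rules["certifications"] <= provider["certifications"])

sovereign = {"region": "eu-fr", "certifications": {"HDS", "SecNumCloud"}}
offshore = {"region": "us-east", "certifications": set()}
```

Encoding the rule this way makes "condition of deployability" literal: a non-conforming provider fails the gate before any architecture is built on top of it.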
Axis III · D7 – D9
A viable AI system is not merely one that works — it is one that can be evaluated, funded and sustained over time by organisations and individuals. This axis addresses the economic, organisational and human conditions of industrial deployment.
Scale & Viability
D7 · Performance evaluation and measurement
Measuring the performance of an AI system in production is structurally more difficult than measuring its performance in testing. In testing, metrics are chosen, conditions are controlled, data is clean. In production, the system operates in a noisy environment, with imperfect data, on cases the training set had not anticipated.
This domain covers evaluation metrics adapted to critical environments — clinical accuracy, false positive rates in high-stakes contexts, robustness across sub-populations, algorithmic fairness — and validation protocols under real conditions. In regulated environments, performance evaluation is also a compliance obligation: results must be documented, reproducible and publishable.
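Robustness across sub-populations is one of the few items here that reduces to a few lines of code. A minimal sketch: per-group accuracy plus the worst-group gap, since an aggregate figure can hide a subgroup on which the system fails.

```python
def subgroup_accuracy(y_true, y_pred, groups):
    """Per-subpopulation accuracy and the gap between the best- and
    worst-served groups. `groups` holds one subgroup label per sample."""
    by_group = {}
    for t, p, g in zip(y_true, y_pred, groups):
        correct, total = by_group.get(g, (0, 0))
        by_group[g] = (correct + (t == p), total + 1)
    accuracy = {g: c / n for g, (c, n) in by_group.items()}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap

# Illustrative data: aggregate accuracy is 75%, but group "b" sits at 50%.
acc, gap = subgroup_accuracy(
    y_true=[1, 1, 0, 0, 1, 0, 1, 0],
    y_pred=[1, 1, 0, 0, 0, 1, 1, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
```

In a regulated setting, the gap itself would be a documented, reproducible result, not an internal diagnostic.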
D8 · Economics of industrial AI
Industrial AI has a cost — in energy, in infrastructure, in human time, and in organisational capital. These costs are frequently underestimated in the design phase and overestimated in the sales phase, generating poorly calibrated investment decisions in both directions.
This domain covers the real economics of deployment: total cost of ownership (TCO), return on investment measured against genuine business indicators (not algorithmic metrics), energy constraints (datacentre infrastructure and electrical grids cannot indefinitely absorb growing computational demand), and funding models adapted to public and regulated environments.
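The TCO point can be made with back-of-the-envelope arithmetic. The figures below are purely illustrative, but the structure is the argument: recurring costs, multiplied over the system's lifetime, usually dwarf the one-off build cost.

```python
def total_cost_of_ownership(build, infra_per_year, ops_per_year,
                            retraining_per_year, years):
    """Naive multi-year TCO: one-off build cost plus recurring
    infrastructure, operations and revalidation/retraining costs."""
    recurring = (infra_per_year + ops_per_year + retraining_per_year) * years
    return build + recurring

# Illustrative figures only (k EUR): over 5 years the 300 spent building
# is outweighed by 1300 of recurring cost.
tco = total_cost_of_ownership(build=300, infra_per_year=80,
                              ops_per_year=120, retraining_per_year=60,
                              years=5)
```

An investment decision calibrated on the build cost alone would misjudge this system by a factor of five.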
D9 · Human and organisational transformation
AI systems that fail in production rarely fail for purely technical reasons. They fail because the organisations deploying them were not prepared to absorb them — neither structurally nor culturally.
This domain addresses the cognitive load imposed on operators (a physician receiving 50 predictive alerts per day will effectively act on none of them), organisational resistance (AI systems disrupt existing power balances and workflows), training (users must understand the system's limitations, not merely its interface) and the social acceptability of algorithmic decisions in high-stakes human contexts.
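The alert-fatigue problem suggests a simple design response, sketched here as an illustration (the budget mechanism is an assumption, not a PREDICARE feature): cap the daily alert volume at what operators can actually absorb, and spend that budget on the highest-risk cases.

```python
def alert_budget(alerts, daily_budget):
    """Keep only the highest-risk alerts within a fixed daily budget.
    Beyond operators' cognitive capacity, additional alerts reduce
    safety rather than add to it. `alerts` is a list of
    (case_id, risk_score) pairs."""
    ranked = sorted(alerts, key=lambda a: a[1], reverse=True)
    return ranked[:daily_budget]

triaged = alert_budget(
    [("case_1", 0.2), ("case_2", 0.9), ("case_3", 0.5)],
    daily_budget=2,
)
```

The code is trivial; the organisational decision it encodes (accepting that some alerts will never be shown) is not, which is exactly why this domain is about transformation rather than tooling.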
Is it economically sustainable?
Is it governable over time?
If any of these questions remain open, that is where the work begins.