
Doctrine & position papers

Positions

Open questions the industry answers too quickly, and with unexamined certainty. What the Twingital Institute thinks, and why.

Position P1 · Physical constraints

AI does not scale indefinitely. Physical constraints are an architectural reality.

The growth of AI's computational demand structurally exceeds the expansion capacities of energy infrastructure. This is not a technical problem to be solved — it is a constraint to be integrated into every architectural decision.

The dominant industry discourse treats the energy question as a solvable engineering challenge: more data centers, more renewables, more chip efficiency. That position is technically correct but strategically incomplete.

The construction timelines of electrical infrastructure (10–15 years for an EHV line, 5–8 years for a power plant or a renewable park of meaningful scale) are structurally incompatible with AI innovation cycles (18–24 months). That temporal asymmetry is not cyclical — it is constitutive of the problem.

For actors in regulated environments (healthcare, critical infrastructure, defence), the operational conclusion is direct: energy cost must be a first-rank design criterion, on equal footing with algorithmic performance and regulatory compliance.
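The trade-off can be made concrete. A minimal sketch, with purely illustrative power draw, utilization, PUE, and tariff figures (none taken from the source), of putting energy cost on the same ledger as algorithmic performance:

```python
# Sketch: treating energy cost as a first-rank design criterion.
# All numeric figures below are illustrative assumptions, not measurements.

def annual_energy_cost_eur(power_draw_kw: float,
                           utilization: float,
                           pue: float = 1.4,
                           eur_per_kwh: float = 0.18) -> float:
    """Yearly electricity cost of serving one model.

    power_draw_kw : average accelerator power while serving
    utilization   : fraction of the year the hardware is busy
    pue           : data-centre Power Usage Effectiveness (overhead factor)
    """
    hours_per_year = 24 * 365
    kwh = power_draw_kw * utilization * hours_per_year * pue
    return kwh * eur_per_kwh

# Two hypothetical candidates for the same clinical task:
candidates = [
    {"name": "large model",     "auc": 0.94, "kw": 6.0},
    {"name": "distilled model", "auc": 0.92, "kw": 0.7},
]

for m in candidates:
    cost = annual_energy_cost_eur(m["kw"], utilization=0.6)
    print(f'{m["name"]}: AUC {m["auc"]}, ~{cost:,.0f} EUR/year in electricity')
```

Under these assumed figures, two points of AUC buy roughly an order of magnitude in yearly energy spend: exactly the kind of trade-off that only becomes visible when energy sits next to performance in the design review.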

Energy · Architecture · Infrastructure

Read the article → AI's Supposed Virtuality Facing the Wall of Reality · 17 p · Feb 2026

Read the article → Allocating the AI Kilowatt-Hour: Why the Energy Market Is Not a Protocol · 11 p · May 2026

Position P2 · Sovereignty

Digital sovereignty is not a political option. It is a deployability condition.

For AI deployed in critical French and European environments, the localization of data and models is not negotiable — neither from a regulatory standpoint, nor from an operational continuity standpoint.

The debate on digital sovereignty is often framed as a political dispute (protectionism vs. openness) or as a performance trade-off (hyperscaler cloud vs. less efficient sovereign cloud). Both framings miss the point.

For organizations deploying AI under HDS, GDPR, NIS2 or the EU AI Act, the choice of cloud provider is a compliance choice before being a performance choice. A clinical AI system hosted on a non-qualified infrastructure is not suboptimal: it is illegal.

Sovereignty · Cloud · Regulation

Read the article → Digital Sovereignty Is Not a Political Debate · 14 p · May 2026

Read the article → Sovereignty Is a Stack, Not a Label · 12 p · May 2026

Read the article → Energy Sovereignty: The Layer Nothing Can Compensate · 11 p · May 2026

Read the article → A Model Is Not Sovereign Because It Is Open · 15 p · May 2026

Position P3 · Governance

Cosmetic human supervision is worse than no supervision.

An AI system with nominal but ineffective human supervision creates an illusion of control: it absolves decision-makers of responsibility without delivering the expected benefits of human vigilance.

The EU AI Act and most AI governance frameworks require human oversight for high-risk systems. In practice, that requirement frequently produces validation interfaces that are not used, alerts systematically acknowledged without examination, and review workflows whose cognitive load exceeds the operators' capacity.

Human supervision is not a button to activate — it is an architectural property that must be designed, calibrated, and measured.
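If supervision is an architectural property, it must be observable. A minimal sketch, with hypothetical field names and thresholds, of two signals that nominal oversight may in fact be cosmetic:

```python
# Sketch: human oversight as a measurable property, not a checkbox.
# Field names and the 5-second threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AlertReview:
    reviewed_seconds: float   # time the operator spent on the alert
    overridden: bool          # did the operator contradict the model?

def oversight_metrics(reviews: list[AlertReview],
                      min_review_seconds: float = 5.0) -> dict:
    """Two warning signs of cosmetic supervision:
    - rubber_stamp_rate: alerts acknowledged faster than any real
      examination could take;
    - override_rate: a rate near zero suggests the human adds no
      independent judgement (or the interface discourages it).
    """
    n = len(reviews)
    rubber = sum(r.reviewed_seconds < min_review_seconds for r in reviews)
    overrides = sum(r.overridden for r in reviews)
    return {"rubber_stamp_rate": rubber / n, "override_rate": overrides / n}

sample = [AlertReview(1.2, False), AlertReview(45.0, True),
          AlertReview(2.0, False), AlertReview(30.0, False)]
print(oversight_metrics(sample))
```

Metrics like these are what "calibrated and measured" means in practice: they turn the EU AI Act's oversight requirement into something an audit can actually test.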

Governance · EU AI Act · Accountability

Domain D5 — Algorithmic governance

Read the article → AI Governance Is Not a Policy. It Is an Architecture.

Position P4 · Evaluation

Algorithmic benchmarks do not measure what matters in production.

AUC, F1-score, and accuracy are laboratory metrics. The performance of a clinical AI system is measured on patient outcome indicators — and nowhere else.

A model achieving 0.94 AUC on a test set may have null, or even harmful, clinical performance in production — if the test set does not represent the real distribution, if false positives generate caregiver overload, or if the population under-represented in training is precisely the one that would benefit most from the tool.
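The false-positive point can be shown with Bayes' rule alone. A sketch with illustrative operating characteristics (the sensitivity, specificity, and prevalence values are assumptions, not data from the source):

```python
# Sketch: why a high AUC can coexist with caregiver overload.
# Sensitivity, specificity, and prevalence values are illustrative.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# An operating point a high-AUC model might plausibly reach:
p = ppv(sensitivity=0.90, specificity=0.90, prevalence=0.01)
print(f"PPV at 1% prevalence: {p:.1%}")  # ≈ 8.3%: ~11 false alarms per true case
```

At 1% prevalence, roughly eleven of every twelve alerts are false. No test-set metric surfaces this; only the deployment population's prevalence does.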

The definition of performance metrics is a clinical and ethical act, not a technical one.

Evaluation · Clinical validation · ISPOR

Read the article → Measured Performance, Operational Reliability: The Distinction the Industry Refuses to Make · 12 p · Apr 2026

Read the article → Public Benchmarks Have Lost the Right to Decide Alone · 11 p · Apr 2026

Read the article → Benchmark performance is not deployability: three reliability ports, not three metrics · 2 p · May 2026

Position P5 · Data

Data quality is the principal constraint of medical AI. Not algorithms.

In 80% of failing medical AI projects, the root cause lies in the quality or structure of the data, not in the sophistication of the model.

The AI industry invests massively in model architectures (transformers, foundation models, fine-tuning) and structurally under-invests in data engineering. Solving the data problem before choosing a model is not a preliminary step — it is the core of the work.

Data · TweenMe · HDLSS

Domain D2 — Data engineering

Position P6 · Digital twins

A medical digital twin is not a predictive model. It is a knowledge infrastructure.

The value of a medical digital twin lies not in point prediction — it lies in the capacity to simulate, test, and validate interventions before applying them to the real patient.

Most tools presented as "medical digital twins" are in fact risk models. A digital twin in the strict sense is interactive, dynamic, and counterfactual: it answers the question "what would happen if?", not merely "what is the current risk?".
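The distinction is ultimately an interface distinction. A minimal sketch, in which every class and method name is hypothetical, of what a risk model exposes versus what a twin in the strict sense must expose:

```python
# Sketch: a risk model answers "what is the current risk?"; a digital
# twin also answers "what would happen if?". All names are hypothetical.
from typing import Protocol

class RiskModel(Protocol):
    def predict_risk(self, patient_state: dict) -> float:
        """A point score over the current state, and nothing more."""

class DigitalTwin(Protocol):
    def predict_risk(self, patient_state: dict) -> float: ...
    def simulate(self, patient_state: dict, intervention: dict) -> dict:
        """The counterfactual: returns a projected patient state,
        so interventions can be tested before reaching the patient."""

# A toy twin: the intervention shifts a single state variable.
class ToyTwin:
    def predict_risk(self, state: dict) -> float:
        return min(1.0, state["hba1c"] / 14.0)  # illustrative rule only

    def simulate(self, state: dict, intervention: dict) -> dict:
        drop = intervention.get("hba1c_reduction", 0.0)
        return {**state, "hba1c": state["hba1c"] - drop}

twin = ToyTwin()
before = {"hba1c": 9.0}
after = twin.simulate(before, {"hba1c_reduction": 1.5})
print(twin.predict_risk(before), twin.predict_risk(after))
```

A risk model stops at the first method; the second method is what makes the artifact interactive, dynamic, and counterfactual.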

Digital twins · TweenMe · Simulation

TweenMe project →

Position P7 · Health-economic evidence

No clinical AI should be scaled without independent health-economic evidence.

Impressive technical progress does not exempt a system from economic and clinical proof. And that proof, independent, rigorous, and published, is systematically missing.

The healthcare AI industry produces convincing technical demonstrations: algorithmic performance on controlled datasets, time savings on targeted tasks, pilot studies of restricted scope. These results are real but insufficient. They do not demonstrate that the system improves clinical outcomes at scale, nor that it is economically sustainable for the healthcare system that funds it.

The generalization of a clinical AI should be conditioned on an independent health-economic study. Without that evidence, we are funding the enrichment of a monopolistic hyperscaler with public money, not the improvement of care.

Health economics · Clinical evidence · Common good · Independence

Article — Read on LinkedIn →

Your AI system is performant. Is it deployable?

These positions engage architectural choices.
The RAISE Framework operationalizes them.