Doctrine & editorial positions
Open questions that the industry answers too quickly, with unexamined certainty. What the Twingital Institute thinks, and why.
A position is not a blog post. It is a structured argument on a structural question: one that bears on architectural, industrial-policy or governance choices, and on which opposing positions have concrete consequences.
These positions reflect the doctrine of the Twingital Institute. They are updated when the evidence warrants it. They are citable and debatable.
Editorial note — Positions marked "In preparation" are the subject of articles currently being written in the Publications section. They are stated here in their thetic form, prior to full development.
Position P1 · Physical constraints
"The growth of AI computational demand structurally outpaces the expansion capacity of energy infrastructure. This is not a technical problem to be solved — it is a constraint to be integrated into every architectural decision."
The dominant industry narrative treats the energy question as a solvable engineering challenge: more datacentres, more renewables, more efficient chips. This narrative is technically correct but strategically incomplete.
The construction timelines for electrical infrastructure (10–15 years for a high-voltage line, 5–8 years for a major power plant or wind farm) are structurally incompatible with AI innovation cycles (18–24 months). This temporal asymmetry is not a passing condition of the market; it is constitutive of the problem.
For actors in regulated environments (healthcare, critical infrastructure, defence), the operational conclusion is direct: energy cost must be a first-order design criterion, on a par with algorithmic performance and regulatory compliance. AI systems that ignore this constraint at the architecture phase generate medium-term deployability risks.
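To make "first-order design criterion" concrete, here is a minimal sketch, assuming a hypothetical grid commitment and equal weighting across criteria: energy enters the architecture decision as a hard constraint first and a scored criterion second. All names, weights and figures are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    performance: float   # normalised task performance, 0..1
    compliance: float    # normalised regulatory fit, 0..1
    energy_mw: float     # projected steady-state draw in MW

# Hypothetical grid commitment available on the deployment horizon.
GRID_BUDGET_MW = 4.0

def score(c: Candidate, energy_budget_mw: float = GRID_BUDGET_MW) -> float:
    """Equal-weight score; energy is a hard constraint, not a tie-breaker."""
    if c.energy_mw > energy_budget_mw:
        return float("-inf")  # undeployable regardless of accuracy
    energy_headroom = 1.0 - c.energy_mw / energy_budget_mw
    return (c.performance + c.compliance + energy_headroom) / 3.0

candidates = [
    Candidate("large-foundation-model", 0.95, 0.80, 6.5),  # best accuracy, over budget
    Candidate("distilled-specialist",   0.88, 0.90, 1.2),
]
best = max(candidates, key=score)
print(best.name)  # distilled-specialist: the only deployable option
```

The point is not the particular weighting: it is that the over-budget candidate is eliminated before accuracy is even compared.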
Position P2 · Sovereignty
"For AI deployed in critical French and European environments, the localisation of data and models is non-negotiable — neither from a regulatory standpoint, nor from an operational continuity standpoint."
The debate on digital sovereignty is often framed as a political debate (protectionism vs. openness) or a performance debate (hyperscaler cloud vs. less efficient sovereign cloud). Both framings miss the point.
For organisations deploying AI in environments subject to HDS, GDPR, NIS2 or the EU AI Act, the choice of cloud provider is a compliance choice before it is a performance choice. A clinical AI system hosted on non-qualified infrastructure is not suboptimal: it is unlawful.
The geopolitical dimension adds a further layer: extraterritorial access clauses (the US CLOUD Act, in particular) create confidentiality and continuity risks that cannot be covered by contractual guarantees alone.
Position P3 · Governance
"An AI system equipped with nominal but ineffective human supervision creates an illusion of control that de-responsibilises decision-makers without delivering the expected benefits of human vigilance."
The EU AI Act and most AI governance frameworks require human supervision for high-risk systems. In practice, this requirement frequently generates validation interfaces that are not used, alerts systematically acknowledged without review, or review workflows whose cognitive load exceeds operator capacity.
Human supervision is not a button to be switched on — it is an architectural property that must be designed, calibrated and measured. It entails choices about the cognitive load imposed on operators, about which cases deserve genuine human attention, and about the mechanisms for contesting algorithmic decisions.
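One way to read "an architectural property that must be designed, calibrated and measured" in code. A hedged sketch with hypothetical thresholds and capacity figures: route to a human only the cases where attention plausibly changes the outcome, and treat reviewer overload as a signal rather than something to absorb silently.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    capacity_per_shift: int          # hypothetical: what one operator can genuinely review
    queued: list = field(default_factory=list)

    def route(self, case_id: str, model_confidence: float, stakes: str) -> str:
        """Decide whether a human must look at this case before action."""
        # High-confidence, low-stakes cases proceed; routing everything to a
        # human recreates the rubber-stamp problem the position describes.
        if stakes == "low" and model_confidence >= 0.95:
            return "auto"
        if len(self.queued) >= self.capacity_per_shift:
            # Overload is an architectural failure: surface it instead of
            # degrading review quality by silently stacking the queue.
            return "defer"   # hold the decision rather than fake the review
        self.queued.append(case_id)
        return "human_review"

queue = ReviewQueue(capacity_per_shift=40)
print(queue.route("case-001", model_confidence=0.99, stakes="low"))   # auto
print(queue.route("case-002", model_confidence=0.70, stakes="high"))  # human_review
```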
Position P4 · Evaluation
"AUC, F1-score and accuracy are laboratory metrics. The performance of a clinical AI system is measured against patient outcome indicators — and nowhere else."
A model achieving 0.94 AUC on a test set may have clinically null or even harmful performance in production — if the test set does not represent the real distribution, if false positives generate clinician overload, or if the sub-population underrepresented in training is precisely the one that would benefit most from the tool.
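The false-positive overload is simple arithmetic, shown below with illustrative numbers: at 1% prevalence, even a model with 90% sensitivity and 90% specificity produces roughly eleven false alerts for every true case.

```python
def alert_burden(prevalence: float, sensitivity: float, specificity: float):
    """Positive predictive value and false alerts per true case."""
    tp = prevalence * sensitivity          # true positives per patient screened
    fp = (1 - prevalence) * (1 - specificity)  # false positives per patient screened
    ppv = tp / (tp + fp)
    return ppv, fp / tp

# Hypothetical screening setting: 1% prevalence, a model that looks strong on paper.
ppv, fp_per_tp = alert_burden(prevalence=0.01, sensitivity=0.90, specificity=0.90)
print(f"PPV = {ppv:.1%}, false alerts per true case = {fp_per_tp:.0f}")
# PPV = 8.3%, false alerts per true case = 11
```

No AUC threshold reveals this; it appears only once prevalence and clinician workload enter the calculation.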
Defining performance metrics is a clinical and ethical act, not a technical one. It engages clinicians, patients, regulators and architects — and it must precede any training or deployment decision.
Position P5 · Data
"In 80% of medical AI projects that fail, the root cause is data quality or structure — not model sophistication."
The AI industry invests massively in model architectures (transformers, foundation models, fine-tuning) and structurally underinvests in data engineering. This allocation is inverted relative to the real needs of medical production environments.
Real medical data is heterogeneous, incomplete, non-standardised and sourced from systems not designed to interoperate. Solving the data problem before selecting a model is not a preliminary step — it is the core of the work. This is precisely the purpose of TweenMe's Smart Data Fertilizer module.
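As a generic illustration of why this work comes first (a sketch, not the Smart Data Fertilizer module itself): audit missingness and unit heterogeneity, and block model selection when basic thresholds fail. Column names and limits are hypothetical.

```python
import pandas as pd

def audit(df: pd.DataFrame, max_missing: float = 0.2) -> list[str]:
    """Return blocking findings; an empty list means model selection may start."""
    findings = []
    for col in df.columns:
        missing = df[col].isna().mean()
        if missing > max_missing:
            findings.append(f"{col}: {missing:.0%} missing (limit {max_missing:.0%})")
    # Heterogeneity check: the same measurement arriving in different units
    # is the classic multi-source failure mode.
    if "weight_unit" in df.columns and df["weight_unit"].nunique() > 1:
        units = sorted(df["weight_unit"].dropna().unique())
        findings.append(f"weight recorded in mixed units: {units}")
    return findings

# Hypothetical extract merged from two hospital systems.
df = pd.DataFrame({
    "age": [54, 67, None, 71],
    "weight": [82.0, 180.0, 75.5, None],
    "weight_unit": ["kg", "lb", "kg", "kg"],
})
for finding in audit(df):
    print("BLOCK:", finding)
```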
Position P6 · Digital twins
"The value of a medical digital twin does not lie in point prediction — it lies in the capacity to simulate, test and validate interventions before applying them to the real patient."
The majority of tools presented as "medical digital twins" are in reality risk models — they calculate a probability, but do not allow simulation of an intervention's effect. A digital twin in the strict sense is interactive, dynamic and counterfactual: it answers the question "what would happen if?", not merely "what is the current risk?".
This distinction entails radically different architectural choices — in terms of modelling, update frequency, integration into clinical workflows, and the governance of decisions derived from simulation.
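The distinction fits in an interface. A minimal sketch using hypothetical method names: a risk model exposes a single read-only probability, while a twin additionally carries state that evolves in time and answers counterfactual queries.

```python
from typing import Mapping, Protocol

class RiskModel(Protocol):
    """Static scoring: answers 'what is the current risk?' and nothing else."""
    def risk(self, patient: Mapping) -> float: ...

class DigitalTwin(Protocol):
    """Interactive, dynamic, counterfactual."""
    def risk(self, patient: Mapping) -> float: ...
    def advance(self, hours: float) -> None:
        """Dynamic: internal state evolves as new observations arrive."""
    def simulate(self, intervention: Mapping) -> float:
        """Counterfactual: projected outcome under an intervention that is
        never applied to the real patient."""
```

A tool that cannot implement simulate() is a risk model, whatever its marketing calls it.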
Position P7 · Health-economic evidence
"Impressive technical progress does not exempt from economic and clinical proof. And that proof — independent, rigorous, published — is systematically missing."
The healthcare AI industry produces compelling technical demonstrations: algorithmic performance on controlled datasets, time savings on targeted tasks, pilot studies with restricted scope. These results are real but insufficient. They do not demonstrate that the system improves clinical outcomes at scale, nor that it is economically sustainable for the health system that funds it.
Scaling a clinical AI should be conditioned on an independent health-economic study — conducted by an entity with no financial ties to the vendor — demonstrating net benefit for the common good: measurable improvement in patient outcomes, documented reduction in systemic costs, or both. Without this proof, public money funds a monopolistic hyperscaler's enrichment, not the improvement of care.
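The standard yardstick for "demonstrating net benefit" in health economics is the incremental cost-effectiveness ratio. A worked sketch with entirely hypothetical per-patient figures:

```python
def icer(cost_new: float, cost_old: float, qaly_new: float, qaly_old: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical per-patient figures for an AI-assisted pathway vs standard care.
ratio = icer(cost_new=12_400, cost_old=11_800, qaly_new=6.95, qaly_old=6.90)
print(f"ICER = EUR {ratio:,.0f} per QALY")  # EUR 12,000 per QALY

WILLINGNESS_TO_PAY = 50_000  # EUR/QALY: illustrative threshold, varies by health system
print("fundable" if ratio <= WILLINGNESS_TO_PAY else "not fundable at this threshold")
```

An independent study publishes exactly these inputs; a vendor demonstration typically publishes none of them.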
These positions entail architectural choices.
The RAISE Framework operationalises them.