Article — Position paper · Open access

Does AI reduce clinicians' cognitive load — or increase it?

Architecture, methodology and systemic conditions for AI that genuinely simplifies care

Jérôme Vetillard · Twingital Institute · 4 min read

The decisive criterion: net time

Before any discussion of a model's medical relevance, the question that dominates real-world adoption remains pragmatic: does the tool save time? The clinician reasons in net balance: time actually saved, minus learning time, integration time, reliability-assessment time, and additional collective coordination time. If this balance is negative, the tool will not be integrated, regardless of its statistical performance. In a tumor board meeting, if the AI requires several extra minutes of explanation, adoption becomes fragile. The adoption criterion is not AUC: it is the ratio of perceived clinical value to induced time friction.
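This net-balance reasoning can be made explicit in a small sketch. The function and the minute values below are illustrative assumptions, not measurements from the field:

```python
# Illustrative sketch of the clinician's net-time balance.
# All values are hypothetical, in minutes per week for one clinician.
def net_time_balance(time_saved, learning, integration,
                     reliability_check, coordination):
    """Positive result: the tool pays for itself in time.
    Negative result: friction exceeds the time saved."""
    return time_saved - (learning + integration
                         + reliability_check + coordination)

# A tool that saves 30 min/week but costs 45 min/week in friction:
balance = net_time_balance(time_saved=30, learning=10, integration=15,
                           reliability_check=10, coordination=10)
print(balance)  # -15: negative balance, adoption stalls
```

The point of the sketch is only that every term on the cost side is paid by the clinician, while statistical performance (AUC) appears nowhere in the equation.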

Thirty years of erratic digital layering

The EHR was supposed to simplify: it shifted clinical time toward documentation. Computerized prescription was supposed to reduce errors: it does so only where pharmacy and patient records are interoperable. Telemedicine was supposed to streamline care pathways: it added an imperfectly coordinated parallel circuit. With each technological wave, the promised simplification translated into a transfer of complexity. The persistence of the index card in hospital wards is neither nostalgic nor technophobic; it is structural. It signals that digital systems have not yet achieved a level of robustness, simplicity and resilience sufficient to fully substitute for paper-based workflows.

Implicit techno-solutionism and the methodological flaw

This cycle rests on three implicit postulates: that hospital complexity is essentially informational, that algorithmic optimization suffices to reduce human burden, and that the failure of a tool simply means the next version will perform better. Yet hospital care is an unstable socio-technical system. The structural methodological flaw lies in the absence of rigorous analysis of real work activity: specifications are derived from formal processes, theoretical organizational charts, and declarative interviews. Real work never matches described work. The disciplines and methods capable of modeling this complexity exist (medical anthropology, cognitive ergonomics, in-situ shadowing, NASA-TLX, Cognitive Work Analysis), but they remain peripheral in healthcare. The more intelligent the system becomes, the more costly the absence of fine-grained understanding of real work activity.
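Of the methods listed above, NASA-TLX is the most directly computable: in its unweighted "Raw TLX" variant, perceived workload is the mean of six subscale ratings, each on a 0–100 scale. The ratings below are hypothetical illustrations, not study data:

```python
# Minimal sketch of the unweighted ("Raw TLX") variant of NASA-TLX:
# the mean of six workload subscales, each rated 0-100.
TLX_DIMENSIONS = ("mental_demand", "physical_demand", "temporal_demand",
                  "performance", "effort", "frustration")

def raw_tlx(ratings: dict) -> float:
    missing = set(TLX_DIMENSIONS) - ratings.keys()
    if missing:
        raise ValueError(f"missing subscales: {missing}")
    return sum(ratings[d] for d in TLX_DIMENSIONS) / len(TLX_DIMENSIONS)

# Hypothetical ratings for one clinician before an AI deployment:
before_ai = {"mental_demand": 70, "physical_demand": 20,
             "temporal_demand": 65, "performance": 40,
             "effort": 60, "frustration": 45}
print(raw_tlx(before_ai))  # 50.0
```

Administered before and after a deployment, the same instrument turns "does the tool reduce cognitive load?" from a design assumption into a measurable claim.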

Clinical decision-making is collective

Hospital decision-making is not a solitary act by a physician facing a screen. It is the product of distributed cognition: tumor boards, medical staff meetings, but also corridor exchanges and rapid arbitrations. In an oncological tumor board, the therapeutic decision results from cross-referencing divergent imaging interpretations, benefit/risk assessment, organizational constraints, patient preferences, and the implicit experience of practitioners. AI designed to assist an isolated actor produces an individual recommendation within a collegial process. The AI then adds a meta-cognitive layer: the issue is no longer merely deciding on treatment, but deciding what degree of confidence to grant the algorithmic suggestion. The question becomes: does the model integrate without friction into a distributed decision space?

Hospital IT departments: a foundation under stress

Deploying AI in clinical production requires a mature technical foundation: industrialized DevOps, versioned data governance, continuous monitoring, robust cybersecurity. Hospital data centers are generally not sized for AI workloads. The question of where execution takes place (hyperscalers vs. sovereign infrastructure) puts the sovereignty of health data at stake. AI also expands the attack surface: prompt injection, the extended trust perimeters of agentic architectures, and a risk of data leakage induced by the very logic of the system. The vendor landscape is historically fragmented, with a triple lock-in (technical, contractual, organizational). Each additional AI layer generates new integrations, new data flows, new failure points. Under the EU AI Act, healthcare institutions remain responsible for the governance of high-risk systems they deploy, including when the AI is embedded within a vendor solution.

Encapsulating intelligence: the Qualees / TweenMe approach

The approach adopted with TweenMe consists of encapsulating the digital twin model within an application component inserted at a precise point in the hospital value chain. This encapsulation serves several functions: clear delimitation of the functional scope to a specific uncertainty, integration into the existing sequence (case preparation, staff meeting, tumor board, therapeutic validation) without creating a parallel space, and strict architectural separation of the predictive engine from transactional systems. Because the component qualifies as a high-risk system under the EU AI Act, compliance is built into the architecture: usage traceability, post-market surveillance, anomaly detection, drift monitoring. AI is not a diffuse horizontal layer. It is a vertical component, inserted within a defined value chain.
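Of the compliance functions listed, drift monitoring is the most mechanical. One common way to operationalize it (not necessarily the one used in TweenMe) is a Population Stability Index comparing the binned distribution of a model input in production against the distribution seen at validation; the bins, values, and thresholds below are conventional illustrations:

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions
    (lists of proportions each summing to ~1). Common rules of thumb:
    < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift."""
    score = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # avoid log(0) on empty bins
        score += (a - e) * math.log(a / e)
    return score

baseline   = [0.25, 0.25, 0.25, 0.25]  # feature distribution at validation
production = [0.10, 0.20, 0.30, 0.40]  # distribution observed in production
print(round(psi(baseline, production), 3))  # 0.228: moderate shift, worth review
```

A score crossing the drift threshold does not say the model is wrong; it says the population it now sees no longer matches the one it was validated on, which is exactly the kind of anomaly post-market surveillance must surface.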

Toward an ecosystemic architecture: PREDICARE

Even a properly encapsulated and governed AI remains partial if the overall architecture continues to be structured around the acute episode. Predictive medicine requires an architectural transformation at the territorial scale: longitudinal data continuity beyond institutional silos, population-level governance, effective primary-to-hospital care integration, alignment of funding mechanisms with prevention. The digital twin becomes territorial — it models aggregated trajectories, population-level risks, dynamic matching between care demand and supply. AI is no longer a peripheral module inserted into an EHR: it becomes an infrastructure component. In an unchanged system, AI tends to amplify existing fragilities. In a re-architected system, it can become a lever for anticipation and stabilization. The challenge is not adding intelligence — the challenge is redesigning the system in which that intelligence operates.
