The Journey of Digital Twins Toward Becoming an Operational Concept
The concept of ‘Digital Twin’ has generated sustained and growing interest across industries. Yet this enthusiasm coexists with a persistent definitional ambiguity that risks relegating Digital Twins to the status of a buzzword rather than an operational concept.
The term itself is richly polysemic. ‘Digital’ can encompass IoT sensor streams, wearable devices, 5G connectivity, SCADA systems, knowledge graphs, or AI algorithms. ‘Twinning’ can refer to anything from a static 3D visualization to a fully bidirectional, real-time control loop. Our discussions with customers confirm that there is no universally accepted definition of what a Digital Twin truly is, nor a shared understanding of how it can drive ROI-proven initiatives at scale.
Predictive maintenance was once hailed as an obvious, high-ROI use case. However, many ROI projections optimistically assumed a leap from no maintenance at all to a fully predictive landscape — overlooking that scheduled maintenance already existed as a robust practice. Digital Twin initiatives must benchmark against existing operational practices, not against the absence of any solution.
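To see why the baseline matters, consider a minimal back-of-the-envelope sketch; all cost figures below are invented purely for illustration:

```python
# Hypothetical annual costs for one asset class; every figure is illustrative.
COST_NO_MAINTENANCE = 1_000_000   # failures dominate: downtime + repairs
COST_SCHEDULED      = 400_000     # the existing preventive program
COST_PREDICTIVE     = 300_000     # sensors, platform, fewer interventions

# Naive ROI: benchmarked against doing nothing at all.
naive_saving = COST_NO_MAINTENANCE - COST_PREDICTIVE    # 700,000

# Honest ROI: benchmarked against the scheduled-maintenance status quo.
honest_saving = COST_SCHEDULED - COST_PREDICTIVE        # 100,000

print(f"naive saving:  {naive_saving:,}")   # overstates the business case 7x
print(f"honest saving: {honest_saving:,}")
```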
Structurally, a Digital Twin comprises a Physical Object (PO) and a Logical Object (LO) that represents it. Its essential properties are representativeness and contextualization, reflection, entanglement, replication, persistence, memorization, composability, accountability, servitization, predictability, and programmability.
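As a sketch of how this decomposition can be carried in code (the class and field names below are our own illustration, not drawn from any standard), a twin can record which of the essential properties it actually satisfies:

```python
from dataclasses import dataclass, field

# Illustrative checklist; the names mirror the property list above.
ESSENTIAL_PROPERTIES = (
    "representativeness_and_contextualization", "reflection", "entanglement",
    "replication", "persistence", "memorization", "composability",
    "accountability", "servitization", "predictability", "programmability",
)

@dataclass
class PhysicalObject:
    asset_id: str    # identifier of the real-world asset

@dataclass
class LogicalObject:
    model_uri: str   # where the digital representation lives

@dataclass
class DigitalTwin:
    po: PhysicalObject
    lo: LogicalObject
    # Which essential properties this particular twin actually exhibits.
    properties: set[str] = field(default_factory=set)

    def missing_properties(self) -> set[str]:
        """Properties from the checklist this twin does not yet satisfy."""
        return set(ESSENTIAL_PROPERTIES) - self.properties
```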
Based on the entanglement property, that is, the degree and directionality of data synchronization between PO and LO, Digital Twins can be classified into progressive stages: a Digital Model (no automated exchange), a Digital Shadow (one-way PO → LO flow), a Digital Twin proper (bidirectional PO ↔ LO synchronization), a Cognitive Digital Twin (bidirectional plus predictive capability), and a Collaborative Digital Twin (bidirectional plus advisory capability).
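The classification reduces to a small decision procedure over synchronization flags. The following sketch makes the stages explicit; the names and the tie-breaking rule (advisory capability checked before predictive) are our own assumptions:

```python
from enum import Enum

class Stage(Enum):
    DIGITAL_MODEL    = "Digital Model"      # no automated exchange
    DIGITAL_SHADOW   = "Digital Shadow"     # one-way PO -> LO
    DIGITAL_TWIN     = "Digital Twin"       # bidirectional PO <-> LO
    COGNITIVE_DT     = "Cognitive DT"       # bidirectional + predictive
    COLLABORATIVE_DT = "Collaborative DT"   # bidirectional + advisory

def classify(po_to_lo: bool, lo_to_po: bool,
             predictive: bool = False, advisory: bool = False) -> Stage:
    """Map entanglement characteristics onto a maturity stage."""
    if not po_to_lo and not lo_to_po:
        return Stage.DIGITAL_MODEL
    if po_to_lo and not lo_to_po:
        return Stage.DIGITAL_SHADOW
    if lo_to_po and not po_to_lo:
        raise ValueError("LO -> PO only is outside the taxonomy above")
    # Bidirectional from here on.
    if advisory:
        return Stage.COLLABORATIVE_DT
    if predictive:
        return Stage.COGNITIVE_DT
    return Stage.DIGITAL_TWIN

assert classify(True, False) is Stage.DIGITAL_SHADOW
assert classify(True, True, predictive=True) is Stage.COGNITIVE_DT
```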
ISO 23247 provides a reference architecture for Digital Twins in manufacturing, ISO/DIS 24576 aims to establish a broader definitional framework, and the Asset Administration Shell (AAS) contributes a concrete interoperability layer.
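To give a feel for the AAS as an interoperability layer, here is a heavily simplified fragment written in the spirit of its JSON serialization, expressed as a Python literal. It is not schema-complete, field names are abridged, and the asset and its identifiers are invented:

```python
# Simplified, non-normative sketch of an AAS-style shell for a pump.
pump_shell = {
    "idShort": "Pump_4711",
    "assetInformation": {"globalAssetId": "urn:example:asset:pump-4711"},
    "submodels": [
        {
            "idShort": "OperationalData",
            "submodelElements": [
                {"idShort": "FlowRate",    "valueType": "double", "value": 42.0},
                {"idShort": "Temperature", "valueType": "double", "value": 71.3},
            ],
        }
    ],
}
```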
The most consequential architectural decision is how the level of abstraction is determined. The ontology formalizes that decision by making explicit which dimensions are modeled, which are deliberately excluded, and why. An ontology-driven approach constrains the feature space to clinically meaningful variables, guides the selection of modeling techniques, and provides a basis for clinical validation.
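A minimal sketch of what ontology-driven feature constraint can look like in practice; the ontology fragment and variable names are invented for illustration:

```python
# Invented ontology fragment: each modeled variable is declared together with
# the clinical concept it maps to and the rationale for including it.
# Everything else in the raw data is excluded by construction.
ONTOLOGY = {
    "age":           {"concept": "patient_age",     "rationale": "risk factor"},
    "tumor_size_mm": {"concept": "lesion_size",     "rationale": "staging"},
    "her2_status":   {"concept": "her2_expression", "rationale": "therapy choice"},
}

def constrain_features(raw_record: dict) -> dict:
    """Keep only variables the ontology declares as clinically meaningful.

    The exclusions are deliberate and auditable: anything dropped here was
    dropped because the ontology does not model it, not by accident.
    """
    return {k: v for k, v in raw_record.items() if k in ONTOLOGY}

record = {"age": 54, "tumor_size_mm": 18, "shoe_size": 39}
print(constrain_features(record))   # shoe_size is excluded by design
```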
Entanglement creates data flows that require fine-grained access control. The persistence and memorization properties collide with GDPR’s right to erasure. Bidirectional synchronization opens a potentially catastrophic attack surface: a compromised Digital Twin could issue commands to medical devices or critical infrastructure. Prompt injection adds a novel threat vector for LLM-based Cognitive Digital Twins. Auditability, finally, is a requirement on which the EU AI Act, the MDR, and the GDPR all converge.
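For the persistence-versus-erasure paradox specifically, one established mitigation is crypto-shredding: keep the twin’s history as ciphertext under a per-subject key and destroy only the key on an erasure request. A hedged sketch using the `cryptography` library follows; the class and method names are our own:

```python
from cryptography.fernet import Fernet

class TwinStore:
    """Illustrative crypto-shredding store: history persists as ciphertext,
    but destroying a subject's key renders it permanently unreadable."""

    def __init__(self):
        self._keys: dict[str, bytes] = {}        # subject_id -> key
        self._log: list[tuple[str, bytes]] = []  # append-only ciphertext log

    def record(self, subject_id: str, event: bytes) -> None:
        key = self._keys.setdefault(subject_id, Fernet.generate_key())
        self._log.append((subject_id, Fernet(key).encrypt(event)))

    def read(self, subject_id: str) -> list[bytes]:
        key = self._keys[subject_id]             # raises KeyError if erased
        return [Fernet(key).decrypt(c) for s, c in self._log if s == subject_id]

    def erase(self, subject_id: str) -> None:
        # GDPR erasure: drop the key; ciphertext stays for audit integrity.
        self._keys.pop(subject_id, None)
```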
TweenMe is positioned as a universal generator of Digital Models and Digital Shadows. Its ontological engine provides the structured framework within which domain-specific Digital Twins are constructed: it guides variable selection, constrains the model space, enables composability through shared ontological interfaces, and provides auditability.
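TweenMe’s actual API is not reproduced here; the following hypothetical sketch only illustrates the idea of composability through shared ontological interfaces, with component and concept names invented for the example:

```python
from typing import Protocol

class OntologyBound(Protocol):
    """Hypothetical shared interface: each component declares the ontology
    concepts it consumes and produces, making composition checkable."""
    def consumes(self) -> set[str]: ...
    def produces(self) -> set[str]: ...

def composable(upstream: OntologyBound, downstream: OntologyBound) -> bool:
    """Components compose only if the downstream component's required
    concepts are covered by what the upstream one produces."""
    return downstream.consumes() <= upstream.produces()

class VitalsShadow:
    def consumes(self) -> set[str]: return {"sensor_stream"}
    def produces(self) -> set[str]: return {"heart_rate", "spo2"}

class RiskModel:
    def consumes(self) -> set[str]: return {"heart_rate", "spo2"}
    def produces(self) -> set[str]: return {"deterioration_risk"}

assert composable(VitalsShadow(), RiskModel())
```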
The paracetamol abstraction problem demonstrates the importance of defining the right clinical question before modeling begins. A multimodal model for senology illustrates ontology-constrained synthetic data generation and the clinical validation requirements that come with it.
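As a minimal illustration of ontology-constrained synthetic data generation with a validation gate (variable names and plausibility ranges are invented, and real clinical validation involves far more than range checks):

```python
import random

# Invented constraints: the ontology declares the clinically plausible
# range for each variable it models.
CONSTRAINTS = {
    "age":           (18, 95),    # years
    "tumor_size_mm": (1, 120),    # millimetres
}

def synthesize(n: int, seed: int = 0) -> list[dict]:
    """Draw synthetic records only within ontology-declared ranges."""
    rng = random.Random(seed)
    return [{k: rng.uniform(lo, hi) for k, (lo, hi) in CONSTRAINTS.items()}
            for _ in range(n)]

def validate(records: list[dict]) -> bool:
    """Minimal validation gate: every value must respect its
    ontology-declared range before the data is used downstream."""
    return all(lo <= r[k] <= hi
               for r in records for k, (lo, hi) in CONSTRAINTS.items())

data = synthesize(100)
assert validate(data)
```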
Five elements are essential: definitional precision situated within the ISO/AAS normative landscape; ontological rigor that makes abstraction decisions explicit and auditable; cybersecurity treated as a foundation rather than an afterthought; honest ROI assessment benchmarked against existing practices; and methodological transparency about the limitations of synthetic data and the validation it requires.