Article — Position paper

Clinically-Informed Neural Networks (CINNs)

When published medical literature becomes a learning constraint — and why it is not regularisation

Jérôme Vetillard · Twingital Institute · 2 min read

Preliminary note — Intellectual property protection. CINNs constitute an original architectural contribution under IP protection. This article is deliberately limited to the theoretical framework, epistemological positioning, and positioning within the literature. Implementation details are not addressed and will be the subject of a later publication after filing.

1. The inspiring analogy: PINNs

Physics-Informed Neural Networks (PINNs) solved an elegant problem: how to incorporate a known physical law into neural network training, not as additional training data, but as a constraint on the space of admissible solutions.

The loss function becomes: 𝓛 = 𝓛_data + λ · 𝓛_physics
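As a toy illustration of this composite loss (not taken from the article, and using finite differences in place of the automatic differentiation a real PINN would use), consider the ODE y′ + y = 0 with solution e^(−x):

```python
import numpy as np

def composite_loss(f, x_data, y_data, x_coll, lam=1.0, h=1e-4):
    """L = L_data + lam * L_physics for the toy ODE y' + y = 0."""
    # Data term: ordinary supervised MSE on the observations.
    l_data = np.mean((f(x_data) - y_data) ** 2)
    # Physics term: squared ODE residual at collocation points,
    # with y' approximated by central differences.
    dfdx = (f(x_coll + h) - f(x_coll - h)) / (2 * h)
    l_physics = np.mean((dfdx + f(x_coll)) ** 2)
    return l_data + lam * l_physics

x_data = np.linspace(0.0, 1.0, 5)
x_coll = np.linspace(0.0, 2.0, 50)   # the law is enforced beyond the data range
exact = lambda x: np.exp(-x)         # satisfies y' + y = 0
wrong = lambda x: 1.0 - x            # roughly fits the data, violates the law

loss_exact = composite_loss(exact, x_data, exact(x_data), x_coll)
loss_wrong = composite_loss(wrong, x_data, exact(x_data), x_coll)
```

Note that the physics term discriminates between the two candidates even outside the range covered by data, which is exactly what makes the constraint more than extra training points.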

2. The specific problem in clinical biology

In physics, the governing laws are known, deterministic, and universal. The dynamics of complex chronic pathologies, by contrast, obey no known differential equation. What does exist is population-level statistical knowledge accumulated over decades of published clinical research.

3. The central insight: clinical literature as constraint source

𝓛 = 𝓛_data + λ · 𝓛_clinical(f(x), θ_lit), where 𝓛_clinical measures the gap between model outputs and statistical parameters estimated from the literature.
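Since the article deliberately withholds implementation details, the following is a purely hypothetical sketch of what a 𝓛_clinical term could look like, here as simple moment matching against literature-reported statistics. The names `clinical_loss`, `lit_mean`, and `lit_sd`, and the moment-matching form itself, are invented for illustration and are not the protected architecture:

```python
import numpy as np

def clinical_loss(preds, lit_mean, lit_sd):
    """Hypothetical moment-matching form of L_clinical: penalise the
    predicted cohort mean and spread for drifting from published
    population estimates (theta_lit)."""
    return (preds.mean() - lit_mean) ** 2 + (preds.std() - lit_sd) ** 2

def total_loss(preds, targets, lit_mean, lit_sd, lam=0.5):
    """L = L_data + lam * L_clinical."""
    l_data = np.mean((preds - targets) ** 2)
    return l_data + lam * clinical_loss(preds, lit_mean, lit_sd)

targets = np.array([4.0, 5.0, 6.0])
preds_good = np.array([4.0, 5.0, 6.0])     # consistent with the literature
preds_biased = np.array([8.0, 9.0, 10.0])  # cohort mean drifts from theta_lit
loss_good = total_loss(preds_good, targets, lit_mean=5.0, lit_sd=1.0)
loss_biased = total_loss(preds_biased, targets, lit_mean=5.0, lit_sd=1.0)
```

The key structural point survives even in this toy: the clinical term scores the model against population-level statistics, not against individual labels.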

4. Positioning within existing literature

This framework sits at the intersection of knowledge distillation, constrained optimisation, Bayesian deep learning, Universal Differential Equations, TRIPOD-style validation, and transfer learning. CINNs nonetheless occupy a distinct position: not the regime where the governing equations are partially known, but the one where only population-level statistical knowledge is available.

5. Why this is not regularisation

L1/L2 regularisation constrains parameter space 𝒲. CINNs shift the constraint to trajectory space 𝒯. A regularisation constraint says “prefer simple solutions.” A CINN constraint says “prefer biologically plausible solutions.”
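The distinction can be made concrete with a schematic (again invented for illustration, not the article's implementation): a weight-space penalty never looks at what the model predicts, while a trajectory-space penalty, here a hypothetical plausibility corridor on the outputs of a linear model, never looks at how large the weights are:

```python
import numpy as np

def l2_weight_penalty(w):
    """Classical regularisation: a constraint in parameter space W."""
    return np.sum(w ** 2)

def trajectory_penalty(w, x_grid, lo, hi):
    """Hypothetical trajectory-space constraint: penalise predictions of
    a linear model that leave a plausibility corridor [lo, hi]."""
    y = x_grid @ w
    below = np.clip(lo - y, 0.0, None)   # how far predictions fall below lo
    above = np.clip(y - hi, 0.0, None)   # how far predictions rise above hi
    return np.mean(below ** 2 + above ** 2)

x_grid = np.ones((5, 1))
w_small = np.array([0.1])       # tiny weights, implausible trajectory
w_plausible = np.array([15.0])  # large weights, plausible trajectory
```

The two penalties are orthogonal: a model with tiny weights can still produce biologically implausible trajectories, and a model with large weights can stay entirely inside the plausible corridor.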

6. The loss balancing problem

Gradient imbalance between loss terms is the primary source of training failure. Candidate strategies include Self-Adaptive PINNs, gradient normalisation, and uncertainty weighting, with the added dimension of epistemic value weighting.
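Of the strategies listed, uncertainty weighting is the most compact to sketch. The form below is the standard homoscedastic-uncertainty weighting from multi-task learning, shown only as one of the generic balancing options; the epistemic value weighting specific to CINNs is not disclosed and is not shown:

```python
import numpy as np

def uncertainty_weighted_total(losses, log_vars):
    """Homoscedastic-uncertainty weighting: one learnable log-variance
    s_i per loss term, L = sum_i exp(-s_i) * L_i + s_i.  Raising s_i
    down-weights a noisy term, while the +s_i penalty stops s_i from
    growing without bound."""
    losses = np.asarray(losses, dtype=float)
    s = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-s) * losses + s))
```

With all log-variances at zero this reduces to a plain sum of the terms; increasing the log-variance of a dominant term shrinks its effective gradient contribution.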

7. What CINNs aim to deliver — and what they do not solve

Expected benefits: out-of-distribution stability in the HDLSS (high-dimensional, low-sample-size) regime, clinical interpretability, and regulatory traceability (MDR). Unresolved limitations: extraction quality, source-bias propagation, constraint weighting, source hierarchy, and temporality.

8. Validation criteria and falsifiability conditions

The central hypothesis can be stated falsifiably: in the clinical HDLSS regime, the external constraint yields a measurable calibration gain, provided that distributional divergence remains below a threshold Δ*.
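One standard way to operationalise "measurable calibration gain" (the article does not prescribe a metric, and the threshold Δ* is left unspecified) is the Expected Calibration Error, sketched here for binary predictions:

```python
import numpy as np

def ece(probs, labels, n_bins=10):
    """Expected Calibration Error for binary predictions: the gap between
    mean confidence and observed event frequency, averaged over bins
    weighted by how many predictions each bin contains."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        last = i == n_bins - 1                 # last bin is closed on the right
        mask = (probs >= lo) & ((probs <= hi) if last else (probs < hi))
        if mask.any():
            gap = abs(labels[mask].mean() - probs[mask].mean())
            total += mask.mean() * gap
    return total

ece_calibrated = ece([0.5] * 4, [1, 0, 1, 0])     # confidence matches frequency
ece_overconfident = ece([0.9] * 4, [1, 0, 0, 0])  # 90% confidence, 25% frequency
```

Under this reading, the hypothesis would be refuted if the constrained model's ECE on external validation cohorts failed to improve on the unconstrained baseline's.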

9. Conclusion: constraint as a form of knowledge

Data are rare. Knowledge is not. CINNs attempt to bridge this asymmetry — making collective knowledge an active learning constraint.

This architecture is a hypothesis. It remains to be validated.
