Every metric is computed on data the model never saw during training. We report both successes and limitations.
Within-indication ranking accuracy on held-out TCGA data, demonstrating that the model ranks patients within cancer types, not just between them.
Measured on the Green-tier external CPTAC cohort (N=229), the subset where ISS exceeds the reliability threshold; the DRO-trained model is evaluated on this validated subset.
Trained and validated across all major TCGA cohorts for pan-cancer applicability.
C-index comparison on held-out TCGA pan-cancer cohort
+27% improvement over Cox Proportional Hazards on high-confidence predictions. DNAI's epistemic uncertainty calibration identifies when predictions are reliable.
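To make the metric concrete, here is a minimal sketch of Harrell's C-index and uncertainty gating on synthetic data. The data, the 0.5 uncertainty threshold, and the `uncertainty` scores are all hypothetical stand-ins, not DNAI's actual pipeline.

```python
import random

def c_index(times, events, scores):
    """Harrell's concordance index: fraction of comparable pairs where
    the higher risk score corresponds to the earlier observed event."""
    concordant = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Pair (i, j) is comparable if i had an observed event before j's time.
            if events[i] and times[i] < times[j]:
                comparable += 1
                if scores[i] > scores[j]:
                    concordant += 1
                elif scores[i] == scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Synthetic illustration: risk scores inversely related to survival time.
random.seed(0)
times = [random.uniform(1, 100) for _ in range(200)]
events = [1] * 200
scores = [100 - t + random.gauss(0, 10) for t in times]
uncertainty = [random.random() for _ in range(200)]  # stand-in for epistemic uncertainty

overall = c_index(times, events, scores)
# Keep only predictions the model is confident about (hypothetical threshold).
confident = [i for i, u in enumerate(uncertainty) if u < 0.5]
gated = c_index([times[i] for i in confident],
                [events[i] for i in confident],
                [scores[i] for i in confident])
print(round(overall, 3), round(gated, 3))
```

The point of the gate is that restricting evaluation to low-uncertainty predictions should hold or improve concordance; a well-calibrated uncertainty estimate is what makes the "high-confidence" subgroup meaningful.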
Proliferation and context subspaces are statistically independent, enabling clean biological interpretation.
Proliferation latent correlates strongly with MKI67 expression, validating biological meaning.
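The two checks above reduce to simple correlation tests: the proliferation latent should correlate strongly with its marker gene, and negligibly with the context subspace. A minimal sketch on simulated latents (the generative assumptions here are illustrative, not the model's):

```python
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(1)
mki67 = [random.gauss(0, 1) for _ in range(500)]
# Hypothetical latents: proliferation tracks MKI67; context is independent noise.
prolif_latent = [m + random.gauss(0, 0.3) for m in mki67]
context_latent = [random.gauss(0, 1) for _ in range(500)]

r_marker = pearson(prolif_latent, mki67)      # expect strong positive
r_cross = pearson(prolif_latent, context_latent)  # expect near zero
print(round(r_marker, 2), round(r_cross, 2))
```

In practice one would also test higher-order dependence (e.g. HSIC or mutual information), since zero linear correlation alone does not establish statistical independence.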
High-fidelity reconstruction across all input modalities.
Model vs. Biology: Physics parameters learned from PDX (patient-derived xenograft) growth curves accurately predict real tumor dynamics.
Math vs. Math: Learned trajectory emulator matches numerical ODE solver, enabling <5ms inference.
400-1000x faster than numerical solver, enabling real-time treatment optimization.
Note: We do not validate trajectories on TCGA, which provides only cross-sectional snapshots and so cannot test temporal dynamics. PDX data supplies true longitudinal measurements.
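The "Math vs. Math" check above is an agreement test between a fast surrogate and a numerical reference solver. A minimal sketch, assuming Gompertz tumor growth (a common choice, not necessarily DNAI's physics model), with the closed-form Gompertz solution standing in for the learned emulator:

```python
import math

def gompertz_rhs(v, a=0.1, K=1000.0):
    """dV/dt = a * V * ln(K/V): Gompertz growth toward carrying capacity K."""
    return a * v * math.log(K / v)

def rk4(v0, t_end, dt=0.01, a=0.1, K=1000.0):
    """Numerical reference: 4th-order Runge-Kutta integration of the ODE."""
    v = v0
    for _ in range(round(t_end / dt)):
        k1 = gompertz_rhs(v, a, K)
        k2 = gompertz_rhs(v + 0.5 * dt * k1, a, K)
        k3 = gompertz_rhs(v + 0.5 * dt * k2, a, K)
        k4 = gompertz_rhs(v + dt * k3, a, K)
        v += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return v

def emulator(v0, t, a=0.1, K=1000.0):
    """Closed-form Gompertz solution, standing in for a learned emulator:
    V(t) = K * exp(ln(V0/K) * exp(-a*t))."""
    return K * math.exp(math.log(v0 / K) * math.exp(-a * t))

v_num = rk4(50.0, 30.0)
v_emu = emulator(50.0, 30.0)
rel_err = abs(v_num - v_emu) / v_emu
print(v_num, v_emu, rel_err)
```

The emulator evaluates in a single expression while the solver takes thousands of steps, which is where the speedup in this kind of comparison comes from; the validation criterion is that the relative error stays negligible across the trajectory.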
Trained primarily on TCGA (9,393 patients, 33 cancer types). DRO training improves cross-site generalization, but performance on rare cancers or non-standard sample preparation may vary.
Not approved for clinical decision-making. Intended for research and pilot deployments.
Validated on 1,031 patients across 10 independent CPTAC cohorts (never seen during training). DRO-trained model achieves pooled C-index 0.718 [0.684, 0.750], with 7/9 cohorts above random. Additional external datasets: CGGA (970 glioma), SCAN-B (3,069 breast).