Patent Pending

U.S. Provisional Application No. 63/988,480

System and Method for Distributionally Robust Training of Physics-Constrained Cancer Digital Twins Using Dual-Mode Risk Set Construction

A system and method for training distributionally robust cancer digital twins and selecting therapeutic regimens. The system extracts Tissue Source Site (TSS) identifiers from patient barcodes to define acquisition environments. A dual-mode risk set construction engine allows configuration between (1) a pooled cross-site mode that computes global log-likelihood denominators to enforce cross-site ranking invariance, and (2) a strictly isolated mode using segmented GPU kernels to enforce statistical independence. An exponentiated gradient ascent loop updates environment weights to upweight the most difficult environments based on per-environment losses.

17 Claims

CROSS-REFERENCES

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is related to commonly owned U.S. Provisional Patent Application No. 63/967,576, filed January 25, 2026, entitled “SYSTEM AND METHOD FOR PHYSICS-CONSTRAINED SIM-TO-REAL TRANSFER LEARNING IN COMPUTATIONAL ONCOLOGY,” and to commonly owned disclosures directed to complementary aspects of the same platform architecture, including (i) ontology-guided gradient modulation for stable cross-species domain adaptation, and (ii) runtime enforcement of physical and statistical reliability envelopes in differentiable clinical digital twins. The present application addresses a distinct technical problem—distributionally robust training across heterogeneous data acquisition environments—and does not claim priority to the aforementioned applications.

I. FIELD OF THE INVENTION

The present invention relates generally to artificial intelligence, computational biology, and high-performance computing. More specifically, the invention relates to systems and methods for training neural networks on multi-center medical data using configurable risk set scopes—ranging from strictly isolated GPU memory structures to pooled cross-site ranking constraints—to optimize generalization in physics-constrained tumor dynamics models.

II. BACKGROUND OF THE INVENTION

1. Shortcut Learning and Batch Effects

Precision oncology increasingly relies on machine learning models to integrate high-dimensional patient data—such as genomics, transcriptomics, and histopathology—to predict clinical outcomes. A critical challenge in deploying such models is “shortcut learning,” where the model learns to identify the hospital or sequencing center from which the data originated rather than the underlying tumor biology. This “batch effect” occurs because different hospitals (“environments”) utilize different tissue staining protocols, slide scanners, and DNA sequencing platforms. A model trained via standard Empirical Risk Minimization (ERM) often minimizes global loss by overfitting to these institution-specific artifacts, causing performance degradation during external validation.

2. Limitations of Existing Harmonization

Existing solutions have significant limitations. Harmonization techniques like “ComBat” often remove biological signals correlated with the batch. Furthermore, the mathematical formulation of the training objective—specifically the Cox Proportional Hazards loss—presents a dilemma regarding “partial likelihood contamination.”

3. Pooled vs. Stratified Risk Sets

In standard deep learning frameworks, survival loss is computed by sorting a batch globally. This creates a “pooled” risk set where the denominator for a patient in Hospital A includes patients from Hospital B. While this can technically allow gradients to bleed across environmental boundaries (potentially encouraging shortcut learning), it also enforces a “global ranking” constraint, compelling the model to learn features that rank patients consistently across different institutions.

Conversely, strictly isolating risk sets by hospital (stratified Cox) prevents contamination but removes this global ranking pressure, potentially leading to models that rank well within a hospital but fail to generalize across hospitals.

4. Need for the Invention

There is an unmet need for a training framework that can flexibly switch between these regimes—supporting both strict memory-isolated computation for statistical independence and pooled computation for cross-site ranking invariance—while utilizing Distributionally Robust Optimization (DRO) to upweight the worst-performing environments. Furthermore, naive implementations of these losses on GPUs often scale poorly (O(N^2) memory), creating a need for optimized, memory-efficient kernels for both modes. Finally, translating model predictions into clinical decisions requires rigorous safety bounds and standardized output formats for clinical integration.

III. SUMMARY OF THE INVENTION

The present invention provides a computer-implemented system and method for training cancer digital twins using Hospital-Environment Grouping and a survival-specific adaptation of Group Distributionally Robust Optimization (Group DRO), implemented via a dual-mode risk set construction engine.

Dual-Mode Risk Set Construction
A configurable training pipeline that extracts Tissue Source Site (TSS) codes to define environments and allows selection of two distinct risk set construction modes: Mode 1 (Pooled Cross-Site) computes a global cumulative log-sum-exp across all environments, enforcing cross-site ranking invariance. Mode 2 (Strictly Isolated) uses a Segmented LogCumSumExp Kernel operating on contiguous per-environment GPU buffers, physically preventing cross-environment memory reads.
Group DRO with Exponentiated Gradient
Regardless of risk set mode, the system employs an exponentiated gradient ascent mechanism to update environment weights q_k based on computed losses L_k. This focuses training on the “worst-case” environments, ensuring the model does not overfit the largest or easiest hospitals.
Physics-Constrained Regimen Selection
At inference, the system utilizes the robustly trained model to parameterize a Neural ODE. Parameters are strictly constrained to biological ranges (e.g., \rho \in [0, 0.3] per day). The system numerically integrates the ODE for candidate dosage schedules and generates a treatment recommendation data structure stored in a patient-associated database record.

IV. BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of the system architecture, illustrating the extraction of Tissue Source Site (TSS) codes and the parallel processing of omics and Whole Slide Imaging (WSI) data.

FIG. 2 illustrates the Dual-Mode Risk Set Construction logic, contrasting the Global LogCumSumExp (Mode 1) with the Segmented LogCumSumExp (Mode 2).

FIG. 3 details the inference-time pipeline, from Information Sufficiency Score (ISS) routing through mandatory ODE simulation to treatment recommendation output.

FIG. 1: System Architecture with TSS Environment Extraction

[Diagram: patient barcodes (TCGA-XX-YYYY) → TSS code extraction → environment groups (132 TSS environments; min 20 samples or merge); multi-modal input (RNA 2,579 + DNA 500, CNV 1,886 + Methylation 1,000, WSI 1536-d UNI2-h; 33 cancer types) → frozen VAE v5.10 → z_full (328-d latent) → DRO training engine (Mode 1: pooled global LSE, cross-site ranking; Mode 2: isolated segmented LSE, per-environment independence; q_k weights) → physics-constrained Hypernetwork (ρ, β, ω) → Neural ODE dV/dt = f(V; θ) → trajectory V(t) → treatment recommendation]
Block diagram showing multi-modal data ingestion, TSS code extraction from patient barcodes, environment grouping, and the DRO training pipeline feeding into the Hypernetwork and Neural ODE.

FIG. 2: Dual-Mode Risk Set Construction

[Diagram: Mode 1 (Pooled Cross-Site): batch sorted by T_i with all environments mixed; global LogCumSumExp LSE_i = log(Σ_{j: T_j ≥ T_i} exp(h_j)), with j spanning ALL environments; per-environment masking L_k = -(1/N_events,k) Σ M_ik (h_i - LSE_i); cross-site ranking invariance and stronger external generalization. Mode 2 (Strictly Isolated): contiguous hazard buffer B segmented by environment; segment descriptor array S = (start_k, end_k, n_events_k) with memory isolation boundaries; segmented LogCumSumExp LSE_i = log(Σ_{j=start_k}^{i} exp(h_j)), with j spanning ONLY environment k, resetting accumulation at segment boundaries; no cross-environment memory reads; equivalent to stratified Cox loss. Both modes feed the Group DRO exponentiated-gradient update q_k ← q_k · exp(η · L_k) / Z]
Contrasting Mode 1 (Pooled Cross-Site) where the global LogCumSumExp denominator includes patients from all environments, versus Mode 2 (Strictly Isolated) where a segmented kernel resets accumulation at environment boundaries.

FIG. 3: Inference Pipeline with ISS Routing and ODE Simulation

[Flowchart: 601 Receive Multi-Modal Input → 602 Encode (VAE + Hypernetwork) → ISS Check (Mahalanobis distance); on Fail → 610 Fallback / Abstain; on Pass → 603 Physics Constraints: generate bounded parameters ρ ∈ [0, 0.3] (Sigmoid), β ∈ [0, 1] (Sigmoid), ω > 0 (Softplus) → 604 Simulate Regimens (ODE, MC Dropout 50 passes) → 605 Select Optimal Regimen → 606 Store in Database]
During inference, the system calculates an Information Sufficiency Score (ISS), generates physics-constrained parameters, simulates candidate regimens via ODE integration, selects the optimal regimen, and stores the recommendation in a database.

V. DETAILED DESCRIPTION OF THE INVENTION

[0001] System Architecture

The system comprises a computing platform (e.g., a GPU-accelerated server) configured to process multi-modal patient data. The data includes:

Multi-omics data
RNA-seq (e.g., 2,579 genes mapped to TME pathways), DNA mutation (500 genes), Copy Number Variation (CNV, 1,886 features), and Methylation (1,000 features).
Histopathology data
Whole Slide Images (WSI) processed into tile embeddings (e.g., 1536-dimensional vectors from UNI2-h).
Provenance Metadata
Patient barcodes containing Tissue Source Site (TSS) identifiers, extracted from fixed positions within TCGA-format barcode strings.

[0002] Foundation Model (VAE)

The system utilizes a hierarchical, biologically disentangled Variational Autoencoder (VAE). The VAE encodes raw omics data into a latent space (z_{\text{full}}, 328 dimensions). The latent space is structured into biologically meaningful slices, including z_{\text{prolif}} (proliferation) and z_{\text{pathway}} (pathway activity, 50 MSigDB Hallmark pathways). The VAE is preferably frozen during downstream training to ensure latent stability.

[0003] Hospital-Environment Grouping

To enable robust training, the system defines “environments” (e_k \in E) representing distinct data acquisition distributions.

Environment Extraction Logic:

  1. The system parses a patient identifier string (e.g., TCGA-XX-YYYY) to extract the Tissue Source Site (TSS) code (e.g., XX).
  2. The TSS code maps to a specific hospital. In experimental validation, 132 distinct TSS environments were identified.
  3. Sample-Based Merging: Environments with fewer than 20 samples are merged into a composite “OTHER” environment to ensure statistical stability.
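The extraction-and-merging logic above can be sketched in a few lines. This is an illustrative sketch, not the claimed implementation; the function names (`extract_tss`, `assign_environments`) and the regular expression are assumptions based on the TCGA barcode convention described in this disclosure.

```python
import re
from collections import Counter

def extract_tss(barcode: str) -> str:
    """Extract the Tissue Source Site (TSS) code from a TCGA-format
    barcode such as 'TCGA-XX-YYYY' (TSS is the second dash-delimited field)."""
    m = re.match(r"TCGA-([A-Z0-9]{2})-", barcode)
    if m is None:
        raise ValueError(f"not a TCGA-format barcode: {barcode}")
    return m.group(1)

def assign_environments(barcodes, min_samples=20):
    """Map each barcode to an environment label; TSS codes with fewer than
    `min_samples` samples are merged into a composite 'OTHER' environment."""
    tss = [extract_tss(b) for b in barcodes]
    counts = Counter(tss)
    return [t if counts[t] >= min_samples else "OTHER" for t in tss]
```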

[0004] Mode 1: Pooled Cross-Site Construction (Production Default)

In this mode, the system enforces Cross-Site Ranking Invariance. The entire batch of size N (containing patients from all environments) is sorted by observed time T_i in descending order, and a global cumulative log-sum-exp term is computed:

\text{LSE}_i = \log \left( \sum_{j:\, T_j \ge T_i} \exp(h_j) \right)

Crucially, the summation includes patients from all environments. This means the risk of a patient in Hospital A is normalized against patients in Hospital B. To compute the per-environment loss L_k required for Group DRO, the system applies a mask M_{i,k}, which is 1 if patient i belongs to environment k and has an observed event, and 0 otherwise:

L_k = - \frac{1}{N_{\text{events},k}} \sum_{i=1}^{N} M_{i,k} \cdot (h_i - \text{LSE}_i)

This configuration forces the model to learn features that are robust enough to rank patients consistently across different hospitals, yielding superior external generalization.
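The Mode 1 computation can be illustrated with a minimal NumPy sketch, assuming distinct event times and a batch of precomputed hazard scores h_i: the global denominator is a running log-sum-exp over the batch sorted by descending time, and per-environment losses are recovered by masking. The function name and the global-max stabilization are illustrative choices, not the disclosed GPU kernel.

```python
import numpy as np

def pooled_cox_env_losses(h, times, events, env_ids, n_envs):
    """Mode 1 sketch: sort the whole batch by descending observed time, build
    the global risk-set denominator as a running log-sum-exp, then recover
    per-environment losses L_k by masking the event terms of environment k."""
    order = np.argsort(-times)                    # descending observed time
    h, events, env_ids = h[order], events[order], env_ids[order]
    c = h.max()                                   # global stabilizer
    lse = np.log(np.cumsum(np.exp(h - c))) + c    # LSE_i over all j with T_j >= T_i
    losses = {}
    for k in range(n_envs):
        mask = (env_ids == k) & (events == 1)     # M_{i,k}: event in environment k
        n_ev = int(mask.sum())
        losses[k] = -float(np.sum((h - lse)[mask])) / n_ev if n_ev else 0.0
    return losses
```

Note that every patient's denominator spans the full batch, so a Hospital A patient is ranked against Hospital B patients exactly as described above.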

[0005] Mode 2: Strictly Isolated Construction (Segmented Kernel)

In this mode, the system enforces Per-Environment Statistical Independence.

Contiguous Buffer Allocation
The system allocates a contiguous hazard buffer B in GPU memory, partitioned into K segments. Each segment B_k spans indices [\text{start}_k, \text{end}_k).
Segment Descriptor Array (S)
A descriptor array stores tuples (\text{start}_k, \text{end}_k, n\_\text{events}_k).
Segmented LogCumSumExp
The system executes a custom GPU kernel that resets the cumulative sum at segment boundaries defined by S. The kernel is structurally prevented from reading hazard values outside the current environment's segment.

The segmented kernel computes:

\text{LSE}_i = \log \left( \sum_{j=\text{start}_k}^{i} \exp(h_j) \right) \quad \text{for } \text{start}_k \le i < \text{end}_k

This mode is mathematically equivalent to stratified Cox loss and prevents any gradient interaction between hospitals. It is preferred when strict per-environment statistical independence is required.
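The segmented accumulation can be demonstrated on CPU with NumPy; the actual system uses a custom GPU kernel, so this sketch only illustrates the reset-at-boundary semantics, in which each segment's denominator is computed exclusively from its own slice of the buffer.

```python
import numpy as np

def segmented_logcumsumexp(h, segments):
    """Mode 2 sketch: within each (start, end) segment, compute a running
    log-sum-exp that resets at segment boundaries; no value outside the
    current segment contributes to the denominator."""
    out = np.empty_like(h)
    for start, end in segments:
        seg = h[start:end]
        c = seg.max()                                  # per-segment stabilizer
        out[start:end] = np.log(np.cumsum(np.exp(seg - c))) + c
    return out
```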

[0006] Survival-Specific Distributionally Robust Optimization

The training objective is to minimize the maximum loss across all environments: \min_\theta \max_k L_k(\theta). The system maintains environment weights q = \{q_k\} and performs an exponentiated gradient update:

Exponentiated Gradient Update:

  1. Forward Pass: Compute L_k(\theta) using either Mode 1 or Mode 2.
  2. Weight Update:
q_k \leftarrow q_k \cdot \exp(\eta \cdot L_k) \;/\; Z

where \eta is a step size and Z is a normalization factor ensuring \sum_k q_k = 1.

  3. Backward Pass: Update network parameters \theta to minimize \sum_k q_k L_k(\theta).

This mechanism focuses training on the “worst-case” environments, ensuring the model does not simply overfit the largest or easiest hospitals.
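The weight update above can be sketched directly from the formula; the step count and loss values in the usage lines below are arbitrary illustrations.

```python
import numpy as np

def dro_step(q, losses, eta=0.01):
    """One exponentiated-gradient ascent step on the environment weights:
    q_k <- q_k * exp(eta * L_k) / Z, with Z restoring sum_k q_k = 1."""
    q = q * np.exp(eta * np.asarray(losses))
    return q / q.sum()

# Repeated steps with fixed losses concentrate weight on the
# worst-case (highest-loss) environment.
q = np.full(3, 1.0 / 3.0)
for _ in range(1000):
    q = dro_step(q, [0.2, 0.9, 0.5], eta=0.1)
```

After the loop, nearly all weight sits on environment 1, the environment with the highest loss, which is exactly the "focus on the worst case" behavior described above.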

[0007] Physics-Constrained Parameter Generation

The Hypernetwork predicts parameters for a tumor dynamics equation. The system enforces strict constraints via bounded activation functions:

Tumor Growth Rate (ρ)
\rho = \sigma_{\text{sigmoid}}(z_\rho) \cdot 0.3. Constrained to [0, 0.3] per day.
Drug Sensitivity (β)
\beta = \sigma_{\text{sigmoid}}(z_\beta). Constrained to [0, 1].
Immune Kill Rate (ω)
\omega = \text{Softplus}(z_\omega). Constrained to (0, +\infty).

These constraints ensure generated trajectories are physically plausible by design rather than by post-hoc rejection. Non-negativity is guaranteed at the architecture level, preventing numerical instability during ODE integration.
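The three bounded activations can be written directly from the formulas above; `constrain_params` is an illustrative name, not part of the disclosed system.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softplus(x):
    # log(1 + exp(x)), computed stably via logaddexp
    return np.logaddexp(0.0, x)

def constrain_params(z_rho, z_beta, z_omega):
    """Map unbounded network outputs to the biologically valid ranges:
    rho in [0, 0.3] per day, beta in [0, 1], omega > 0."""
    return sigmoid(z_rho) * 0.3, sigmoid(z_beta), softplus(z_omega)
```

Because the bounds are built into the activations, every emitted parameter set is valid by construction, regardless of how extreme the raw network outputs are.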

[0008] Inference and Treatment Recommendation

At inference time, the system performs the following steps:

A. Adaptive Routing (ISS):

The system calculates an Information Sufficiency Score (ISS) based on the Mahalanobis distance to the training centroid. If ISS < Threshold, the patient is routed to a fallback protocol.
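A sketch of the ISS routing step, assuming latent vectors and a precomputed inverse covariance matrix; the monotone mapping from Mahalanobis distance to a score in (0, 1] is an assumption for illustration, since the disclosure specifies only that the ISS is distance-based and thresholded.

```python
import numpy as np

def information_sufficiency(z, mu, cov_inv):
    """Mahalanobis distance from the training centroid mu, mapped so that
    larger distances yield lower sufficiency (mapping is illustrative)."""
    d = z - mu
    maha = float(np.sqrt(d @ cov_inv @ d))
    return 1.0 / (1.0 + maha)

def route(z, mu, cov_inv, threshold=0.5):
    """Route to the main model when ISS clears the threshold,
    otherwise to the fallback protocol."""
    iss = information_sufficiency(z, mu, cov_inv)
    return "generalist" if iss >= threshold else "fallback"
```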

B. Regimen Optimization Loop:

For each candidate regimen, the system numerically integrates the ODE:

\frac{dV}{dt} = \rho V \left(1 - \frac{V}{K_{\text{cap}}}\right) - \beta D(t) V - \omega I(t) V

where V is tumor volume, K_{\text{cap}} is carrying capacity, D(t) is drug concentration, and I(t) is an immune state function.

The system performs Monte Carlo (MC) dropout passes (e.g., 50) to generate trajectory distributions and identifies the regimen minimizing tumor burden at a clinical horizon (e.g., 365 days).
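A fixed-step Euler sketch of the regimen loop under the stated equation (MC dropout omitted for brevity); the parameter values, the `dose_fn`/`immune_fn` signatures, and the carrying-capacity default are illustrative assumptions rather than the disclosed configuration.

```python
import numpy as np

def simulate(v0, rho, beta, omega, dose_fn, immune_fn,
             k_cap=100.0, horizon=365.0, dt=0.5):
    """Euler integration of dV/dt = rho*V*(1 - V/K) - beta*D(t)*V - omega*I(t)*V
    over the clinical horizon; returns the tumor volume trajectory."""
    steps = int(horizon / dt)
    v = v0
    traj = [v0]
    for n in range(steps):
        t = n * dt
        dv = (rho * v * (1.0 - v / k_cap)
              - beta * dose_fn(t) * v
              - omega * immune_fn(t) * v)
        v = max(v + dt * dv, 0.0)          # tumor volume cannot go negative
        traj.append(v)
    return np.array(traj)

def select_regimen(regimens, **sim_kwargs):
    """Return the regimen id whose D(t) minimizes terminal tumor burden."""
    finals = {rid: simulate(dose_fn=d, **sim_kwargs)[-1]
              for rid, d in regimens.items()}
    return min(finals, key=finals.get)
```

In the full system, the per-regimen simulation would be repeated across dropout-enabled forward passes to produce the trajectory distribution described above.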

C. System Output:

The system generates a treatment recommendation data structure comprising:

  • selected_regimen_id: Unique identifier for the recommended regimen.
  • predicted_trajectory_array: Tensor of dimensions [T \times S].
  • uncertainty_bounds: Tensor of dimensions [T \times S \times 2] (2.5th/97.5th percentiles from MC dropout).
  • physics_compliance_flags: Boolean validation of parameter bounds.
  • information_sufficiency_score: The computed ISS value.

The system stores this data structure in a database record associated with the patient identifier or transmits it to a clinical decision support interface.
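A sketch of the output record and of the percentile computation behind the uncertainty bounds, using the field names from the disclosure; the dataclass container and the regimen identifier in the test are illustrative.

```python
from dataclasses import dataclass
import numpy as np

def mc_bounds(trajs):
    """uncertainty_bounds from MC-dropout passes: trajs has shape [P, T, S];
    returns the 2.5th/97.5th percentile envelope with shape [T, S, 2]."""
    lo = np.percentile(trajs, 2.5, axis=0)
    hi = np.percentile(trajs, 97.5, axis=0)
    return np.stack([lo, hi], axis=-1)

@dataclass
class TreatmentRecommendation:
    """Field names follow the disclosure; the container type is illustrative."""
    selected_regimen_id: str
    predicted_trajectory_array: np.ndarray   # [T, S]
    uncertainty_bounds: np.ndarray           # [T, S, 2]
    physics_compliance_flags: dict
    information_sufficiency_score: float
```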

[0009] Experimental Results

The system was validated on a multi-center dataset comprising 33 cancer types and 132 Tissue Source Site (TSS) environments.

Mode 1 (Pooled Cross-Site) Performance:

CPTAC External C-index: 0.718 [0.684, 0.750]
TSS Held-Out C-index: 0.866 ± 0.014
Physics Compliance: 100%
TSS Environments: 132

Mode 2 (Strictly Isolated) Performance:

CPTAC External C-index: 0.623 [0.591, 0.654]
Internal Global C-index: 0.733
Physics Compliance: 100%
Delta vs Mode 1 (CPTAC): -9.5 pp

Mode Comparison:

Metric | Mode 1 (Pooled) | Mode 2 (Isolated) | Delta
CPTAC External C-index | 0.718 | 0.623 | -9.5 pp
Internal Global C-index | 0.727 | 0.733 | +0.6 pp
TSS Held-Out C-index | 0.866 | — | —
Physics Compliance | 100% | 100% | 0

Mode 1 achieves superior external generalization; Mode 2 achieves slightly higher internal metrics.

These results demonstrate that cross-site ranking pressure (Mode 1) significantly improves generalization to unseen institutions, while strict isolation (Mode 2) optimizes for the training distribution but may sacrifice external transferability. Both modes maintain 100% physics compliance.

[0010] Specific Embodiment: Training Hyperparameters

In one specific embodiment, training hyperparameters comprise:

Batch Size: 128
Max Epochs: 200
Learning Rate: 3e-4
Optimizer: AdamW
DRO Step Size (η): 0.01
Min Env Samples: 20

VI. CLAIMS

What is claimed is:

Claim 1. (Training Method)

A computer-implemented method for training a distributionally robust cancer survival prediction model, comprising:

  1. receiving, by a processor, a training dataset comprising multi-modal patient data and associated patient identifier strings;
  2. parsing, by the processor, the patient identifier strings to extract a Tissue Source Site (TSS) code for each patient, wherein the TSS code indicates a data acquisition environment;
  3. assigning each patient to one of a plurality of environment groups (e_k) based on the extracted TSS code;
  4. executing, by a GPU, a forward pass of a neural network to generate risk scores and physics-constrained tumor dynamics parameters for the patients;
  5. computing, for each environment group e_k, a partial log-likelihood loss (L_k) by applying a configured risk set construction protocol, wherein said protocol defines which patients in a batch constitute a risk set denominator for a given patient;
  6. updating a set of environment weights q_k using an exponentiated gradient ascent rule defined as q_k \leftarrow q_k \cdot \exp(\eta \cdot L_k) / Z, wherein \eta is a step size and Z is a normalization factor; and
  7. updating model parameters of the neural network to minimize a weighted sum of the per-environment partial log-likelihood losses weighted by the updated environment weights q_k.

Claim 2.

The method of claim 1, wherein the configured risk set construction protocol is a pooled cross-site protocol comprising: computing, from risk scores of patients from all environment groups in the batch, a global cumulative log-sum-exp term defining a global risk denominator for each event time; and computing the partial log-likelihood loss L_k by summing log-likelihood terms strictly for event observations assigned to environment group e_k, such that the risk denominator for a patient assigned to environment group e_k includes contributions from patients assigned to at least one other environment group, thereby enforcing cross-site ranking invariance.

Claim 3.

The method of claim 1, wherein the configured risk set construction protocol is a strictly isolated protocol comprising:

  1. allocating, in a GPU memory, a contiguous hazard buffer B partitioned into K segments, wherein each segment B_k corresponds to an environment group e_k and spans indices [\text{start}_k, \text{end}_k);
  2. constructing a segment descriptor array S containing tuples of (\text{start}_k, \text{end}_k, n\_\text{events}_k); and
  3. executing a segmented logcumsumexp reduction kernel parameterized by the segment descriptor array S, wherein said kernel resets accumulation at segment boundaries defined by S, such that the risk denominator for a patient in environment group e_k is computed exclusively from patients within the same environment group e_k.

Claim 4.

The method of claim 3, wherein the segmented logcumsumexp reduction kernel structurally prevents cross-environment memory access during denominator accumulation, thereby ensuring per-environment statistical independence.

Claim 5.

The method of claim 1, wherein the physics-constrained tumor dynamics parameters include a tumor growth rate (\rho) constrained to the range [0, 0.3] per day via a sigmoid scaling function, and a drug sensitivity coefficient (\beta) constrained to the range [0, 1].

Claim 6.

The method of claim 1, further comprising merging any environment group containing fewer than a predetermined threshold of samples into a composite environment group prior to computing the partial log-likelihood loss.

Claim 7. (Inference & Therapy System)

A computer-implemented system for personalized oncology therapy selection, comprising: a memory storing a trained multi-modal neural network model; and a processor coupled to the memory and configured to:

  1. receive multi-modal input data for a target patient, including omics data and optional whole slide imaging (WSI) data;
  2. generate, using the trained multi-modal neural network model, a set of physics-constrained tumor dynamics parameters comprising a growth rate (\rho), a drug sensitivity (\beta), and an immune kill rate (\omega);
  3. enforce biological constraints on the parameters such that \rho \in [0, 0.3], \beta \in [0, 1], and \omega > 0;
  4. receive a definition of a plurality of candidate therapeutic regimens, wherein each regimen specifies a drug concentration function D(t) over a time horizon;
  5. for each candidate therapeutic regimen, simulate a tumor volume trajectory V(t) by numerically integrating a differential equation parameterized by the generated physics-constrained parameters and the drug concentration function D(t);
  6. select a recommended therapeutic regimen from the plurality of candidate therapeutic regimens that has a minimum predicted tumor burden at a specified clinical time horizon;
  7. generate a treatment recommendation data structure comprising at least: (i) a selected_regimen_id identifying the recommended therapeutic regimen; (ii) a predicted_trajectory_array of dimensions [T \times S]; (iii) an uncertainty_bounds array of dimensions [T \times S \times 2] derived from Monte Carlo dropout forward passes; and (iv) physics_compliance_flags indicating adherence to the biological constraints; and
  8. store the treatment recommendation data structure in a database record associated with a patient identifier for the target patient, or transmit the treatment recommendation data structure to a clinical decision support interface configured to display the predicted trajectory array.

Claim 8.

The system of claim 7, wherein the trained multi-modal neural network model is trained using Distributionally Robust Optimization (DRO) over hospital-specific environment groups to ensure the generated parameters are robust to data acquisition artifacts.

Claim 9.

The system of claim 7, wherein the processor is further configured to:

  1. calculate an Information Sufficiency Score (ISS) for the target patient based on a distance between the target patient's latent representation and a training distribution centroid; and
  2. route the multi-modal input data to a robust generalist checkpoint of the neural network model if the ISS exceeds a sufficiency threshold, or to a fallback checkpoint if the ISS is below the sufficiency threshold.

Claim 10.

The system of claim 7, wherein numerically integrating the differential equation comprises using a fixed-step Euler solver or an adaptive Runge-Kutta solver.

Claim 11.

The system of claim 7, wherein the differential equation is a Lotka-Volterra equation of the form:

\frac{dV}{dt} = \rho V \left(1 - \frac{V}{K_{\text{cap}}}\right) - \beta D(t) V - \omega I(t) V

wherein V is tumor volume, K_{\text{cap}} is carrying capacity, and I(t) is an immune state function.

Claim 12.

The system of claim 7, wherein the uncertainty_bounds are computed by executing a plurality of forward passes of the neural network with dropout enabled, and calculating the 2.5th and 97.5th percentiles of the resulting trajectory distribution at each time step.

Claim 13. (Computer-Readable Medium)

A non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to perform operations comprising:

  1. extracting environment labels from patient barcodes in a multi-center cancer dataset, wherein the environment labels correspond to Tissue Source Sites (TSS);
  2. assigning patients to environment groups based on the extracted environment labels;
  3. executing a forward pass of a neural network to predict tumor dynamics parameters;
  4. computing a per-environment partial log-likelihood loss (L_k) for each environment group using a pooled risk set construction, wherein said pooled risk set construction comprises: calculating a global cumulative log-sum-exp term across all environment groups in a batch; and masking event contributions such that L_k aggregates loss only for patients in environment group k while utilizing the global cumulative log-sum-exp term as a denominator, thereby enforcing cross-site ranking invariance;
  5. updating a set of environment weights q_k using an exponentiated gradient ascent rule: q_k \leftarrow q_k \cdot \exp(\eta \cdot L_k) / Z; and
  6. updating model parameters of the neural network to minimize a weighted sum of the per-environment partial log-likelihood losses weighted by q_k.

Claim 14.

The non-transitory computer-readable medium of claim 13, wherein the operations further comprise projecting the predicted tumor dynamics parameters into a bounded feasible set prior to computing the loss, ensuring that the predicted parameters remain physically valid regardless of the environment weights.

Claim 15.

The non-transitory computer-readable medium of claim 13, wherein the neural network comprises a Variational Autoencoder (VAE) for genomic encoding and a separate hypernetwork for parameter prediction, and wherein the VAE is frozen during the updating of the neural network.

Claim 16.

The non-transitory computer-readable medium of claim 13, wherein the pooled risk set construction enables the neural network to learn biological features that rank patients consistently across different Tissue Source Sites.

Claim 17.

The non-transitory computer-readable medium of claim 13, wherein the patient barcodes are formatted according to The Cancer Genome Atlas (TCGA) standards, and the TSS is extracted from a fixed position within the barcode string.

VII. ABSTRACT OF THE DISCLOSURE

A system and method for training distributionally robust cancer digital twins and selecting therapeutic regimens. The system extracts Tissue Source Site (TSS) identifiers from patient barcodes to define acquisition environments. A dual-mode risk set construction engine allows configuration between (1) a pooled cross-site mode that computes global log-likelihood denominators to enforce cross-site ranking invariance, and (2) a strictly isolated mode using segmented GPU kernels to enforce statistical independence. An exponentiated gradient ascent loop updates environment weights to upweight the most difficult environments based on per-environment losses. At inference, the system generates physics-constrained parameters (e.g., growth rate \rho \in [0, 0.3]) and simulates therapeutic regimens via a differential equation. A treatment recommendation data structure comprising the selected regimen and Monte Carlo uncertainty bounds is stored in a patient-associated database record.

[End of Application]
