https://writings.stephenwolfram.com/2025/02/towards-a-computational-formalization-for-foundations-of-medicine/ This is an interesting piece, can you highlight the important points it’s trying to make? Also what is the practical application of it? How could we apply his computational paradigm to study a specific disease, not just an abstract cellular automata?

Here’s a breakdown of the key points from Stephen Wolfram’s article “Towards a Computational Formalization for Foundations of Medicine” and then a discussion of how one might try to apply his ideas to study a specific disease (rather than just toy cellular automata). I’ll point out what seems realistic and where the challenges lie.


Key Points

Here are the major ideas Wolfram presents:

  1. Medicine as a computational/metamodel discipline

    • Wolfram argues that instead of treating medicine purely as an empirical, domain-specific patchwork of phenomena (“this has gone wrong; how do we fix it?”) one can attempt a more abstract, formal framework: modelling organisms, diseases, and treatments in a computational way.

    • He describes this as building a “metamodel” for medicine — not trying to simulate real organs in full fidelity but to capture the foundations of what it means for a living system to go “healthy → perturbed → disease → treatment → back toward healthy”.

  2. Simple computational systems as proxies for organisms

    • As a toy/idealised organism he uses a class of simple computational systems (cellular automata) to explore how perturbations (analogous to disease) affect the model’s “life history” (e.g., how long the organism “lives”, how it develops).

    • He shows how perturbations (single-cell changes) can drastically change outcomes: some perturbations still heal/return to near-normal, others cause “tumour-like” runaway behaviour or early death. (A minimal sketch of this kind of perturbation experiment appears at the end of this list.)

  3. Computational irreducibility is central

    • A major thrust: biological systems, having evolved by blind search/adaptation, are full of “lumps of irreducible computation” (to borrow his phrasing) — i.e., behaviours whose outcomes cannot in general be predicted except by simulating them step by step.

    • Because of that, many traditional hopes (e.g., “we’ll understand exactly, mechanistically, how a treatment works and predict its effect easily”) are unrealistic. Instead, one must accept that coarse-grained / statistical reasoning may be the best one can do in many cases.

  4. Diversity of disease & classification limitations

    • Even in the toy model, perturbing in different ways yields many different outcomes (lifetimes, “symptom morphology”), and categorising them into neat, discrete diseases is non-trivial.

    • He explores how one might classify perturbation outcomes using coarse features (“boundary shape”, etc.) and even use machine learning to cluster them — showing that “disease classes” are fuzzy, overlapping, and that many intermediate cases exist.

  5. Diagnosis, prognosis & treatment in this framework

    • He demonstrates that in the toy model, one may measure certain features of the state (e.g., width at time step 25) and attempt to predict lifetime or outcome, but the predictive power is weak because of the underlying irreducibility.

    • For treatment: he shows that one can search (in the toy model) over possible treatment‐perturbations and find some that restore a nearly healthy lifetime — but even then, side-effects or residual differences remain. He points out that “fixing” is much harder than “perturbing”.

  6. Genetic diversity and robustness

    • He includes the idea that organisms vary (genetic variants) and that responses to perturbations (diseases) and to treatments vary accordingly. Some treatments that work for one variant may cause harm in another.

    • There is also an evolutionary trade‐off: if an organism is more robust under perturbation, it may have a shorter maximal “unperturbed” lifetime (because resources go into robustness rather than maximal performance). This is observed in the toy evolution he runs.

  7. Implications and outlook

    • He emphasises that this is not about immediate clinical applications but about gaining a conceptual, formal foothold on what medicine (and disease) is at a computational level.

    • The presence of computational irreducibility means that many phenomena in medicine will remain inherently unpredictable in detail; yet by focusing on coarse‐grained observables and searching for “pockets of reducibility” one may make progress.

    • He speculates about future “algorithmic drugs” or artificial cells able to carry out computation themselves (closer to the complexity of the organism) as ways to handle the irreducible complexity.
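
To make point 2 concrete, here is a minimal sketch (Python/NumPy) of the kind of perturbation experiment Wolfram describes. It is not his rule class or his lifetime metric; it just runs an elementary cellular automaton (rule 30), flips a single cell of the initial state as the “disease”, and compares the perturbed life history to the healthy one using a crude coarse observable:

```python
import numpy as np

def step(state, rule=30):
    """One update of an elementary cellular automaton (wrap-around boundaries)."""
    rule_bits = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    left, right = np.roll(state, 1), np.roll(state, -1)
    return rule_bits[4 * left + 2 * state + right]

def life_history(initial, steps=100, rule=30):
    """Full space-time history: rows are time steps, columns are cells."""
    rows = [initial]
    for _ in range(steps):
        rows.append(step(rows[-1], rule))
    return np.array(rows)

rng = np.random.default_rng(0)
base = rng.integers(0, 2, size=201, dtype=np.uint8)
healthy = life_history(base)

# "Disease" = flip one cell of the initial state, then compare life histories.
for site in (50, 100, 150):
    sick_init = base.copy()
    sick_init[site] ^= 1
    sick = life_history(sick_init)
    divergence = (healthy != sick).sum(axis=1)   # how many cells differ at each step
    biomarker = int(sick[25].sum())              # a crude coarse observable at step 25
    print(f"flip cell {site}: cells differing at step 100 = {divergence[-1]}, "
          f"live cells at step 25 = {biomarker}")
```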


Practical applications — how you could try to apply this to a specific disease

Yes — the article is largely theoretical/idealised. But there are plausible ways to map his paradigm toward a more concrete disease study. Given your engineering/neuroscience background (and interest in layered architectures, representations, etc.), you could try something like this:

Step 1: Choose a sufficiently constrained disease system

Pick a disease that has (relatively) clean mechanistic structure and available data. For example: a monogenic genetic disease (a specific mutation → pathway disruption) or a well‐defined tumour type with a known driver mutation. The simpler the system, the closer you get to “toy but real”.

Step 2: Define your “state model” and perturbation space

  • Define a representation of the “organism” (or at least the subsystem) — e.g., a simplified model of a tissue, cell population, signalling network, etc.

  • Define what perturbations represent “disease” (e.g., mutation in gene X, environmental stress, signalling dysregulation).

  • Define what “treatment perturbations” might be (e.g., drug inhibiting pathway Y, gene therapy correcting mutation, immune modulation).

  • Define observables (analogous to “width at time step 25” in his toy model) — measurable biomarkers, imaging features, cell counts, functional outcomes. (A sketch of these definitions follows this list.)
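
As a sketch of what Step 2 could look like in code, here is one way to pin down the perturbation / treatment / observable vocabulary explicitly. Every class and field name here is a hypothetical placeholder, not an existing API:

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative placeholders only, not an established library or schema.

@dataclass
class Perturbation:
    """A 'disease' event applied to the model (e.g., an activating mutation)."""
    target: str           # which component is hit: gene, cell type, pathway node
    kind: str             # "mutation", "stress", "dysregulation", ...
    magnitude: float = 1.0
    time: float = 0.0     # when in the simulated life history it occurs

@dataclass
class Treatment:
    """A counter-perturbation: drug, dose schedule, gene correction, ..."""
    target: str
    dose: float
    start_time: float
    duration: float

@dataclass
class Observable:
    """A coarse, measurable readout (the analogue of 'width at time step 25')."""
    name: str
    measure: Callable[[dict], float]   # maps a model state to a number

observables = [
    Observable("tumour_cell_count", lambda s: float(s["tumour_cells"])),
    Observable("biomarker_level",   lambda s: float(s["cea"])),
]
```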

Step 3: Build computational simulation or surrogate model

Given your background you might:

  • Build a network simulation (cells, signalling nodes) or agent‐based model of the tissue, incorporating known biology of the disease.

  • Or use a machine‐learning surrogate: use patient‐level/omics data to build a predictive model of “state → outcome” (e.g., biomarker levels → survival, progression); a minimal sketch of this follows the list.

  • Use that simulation or model to apply “perturbations” and observe outcomes. For example: apply mutation, then apply “treatment” and observe change in predicted survival or cell population behaviour.
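
Here is a minimal sketch of the surrogate route, using synthetic stand-in data (every number is invented) and scikit-learn; in a real study, X would be patient/omics features and y a measured outcome such as progression-free survival:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_patients, n_features = 500, 20
X = rng.normal(size=(n_patients, n_features))      # stand-in early biomarkers / omics features
signal = X[:, 0] - 0.5 * X[:, 1]                   # pretend only two features actually matter
y = 24 + 6 * signal + rng.normal(scale=2, size=n_patients)  # e.g. months of progression-free survival

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out R^2:", round(r2_score(y_test, surrogate.predict(X_test)), 2))

# The fitted surrogate can then stand in for a slow mechanistic model when
# screening candidate treatment perturbations (shifts applied to the feature vector).
```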

Step 4: Explore diagnosis/prognosis and treatment search

  • Use the simulation/model to test how different perturbations (treatments) perform. Search the parameter space (dose, timing, combination) for interventions that restore the “healthy trajectory” (or approximate it); a toy version of this search is sketched after this list.

  • Investigate feature observables at early time‐points (biomarkers/imaging) that may correlate with long‐term outcome (prognosis).

  • Explore genetic diversity (simulate variation in parameters representing patient‐genotype differences) and how that affects treatment response.
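
As a toy illustration of treatment as a search over perturbation space, the sketch below swaps the real disease model for a throwaway logistic tumour-growth function (all rates invented) and brute-forces dose and start day, with a small dose penalty standing in for side effects:

```python
import itertools
import numpy as np

def simulate_tumour(dose, start_day, days=100, dt=1.0):
    """Toy stand-in for a disease simulation: logistic tumour growth with a
    dose-dependent kill term while a 14-day treatment course is active."""
    size, capacity, growth, kill = 0.01, 1.0, 0.12, 0.25   # arbitrary rates
    for day in np.arange(0, days, dt):
        d_size = growth * size * (1 - size / capacity)
        if start_day <= day < start_day + 14:
            d_size -= kill * dose * size
        size = max(size + d_size * dt, 0.0)
    return size                                            # final tumour burden

# "Treatment as counter-perturbation": brute-force search over dose and timing,
# with a small dose penalty as a crude proxy for side effects.
doses = np.linspace(0.0, 2.0, 9)
starts = np.arange(0, 60, 10)
best_dose, best_start = min(itertools.product(doses, starts),
                            key=lambda p: simulate_tumour(*p) + 0.05 * p[0])
print(f"best dose {best_dose:.2f} starting day {best_start}: "
      f"final burden {simulate_tumour(best_dose, best_start):.3f}")
```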

Step 5: Map back to “real data” and validation

  • Link the simulation/model predictions to real patient data or clinical trial data: can early biomarker patterns predict response? Do certain perturbation (treatment) combinations show improved outcomes in a subset of genotypes?

  • Use ML or statistical methods to refine the mapping from coarse biomarkers → outcome → treatment guidance.


Specific Example: Applying to, say, a cancer subtype

Let’s pick, as an example, a subtype of colorectal cancer driven by an activating mutation in KRAS. Here's how you might map Wolfram’s paradigm:

  • Organism / subsystem: A model of colorectal epithelial tissue + clonal expansion of mutated cells + immune micro‐environment (could be simplified).

  • Perturbation (“disease”): Introduction of KRAS activating mutation in a stem-cell niche → clonal expansion → tumour formation.

  • Observables: Tumour size and growth rate, mutation burden, biomarker levels (CEA, etc.), imaging features, survival time.

  • Treatment space: Inhibitors of downstream MAPK pathway, immune checkpoint therapy, combination therapies; treatment timing/dose as parameters.

  • Simulation: Use an agent‐based model where epithelial stem cells divide, mutated clones expand, immune cells interact; run many simulations with different mutation timings/locations (“disease perturbation”) and then explore different treatment schedules (“treatment perturbation”).

  • Diagnosis/Prognosis: From early tumour size/growth rate/imaging features at time t0, attempt to predict long‐term survival or relapse probability (this is analogous to his “width at step 25” metric).

  • Treatment discovery: For each simulation instance (disease) search treatment combinations that bring survival back toward baseline (healthy tissue scenario) or limit tumour size growth; explore how genotype (variation in e.g., immune response, mutation rate) affects response.

  • Validation: Compare simulation predictions with clinical datasets: e.g., for patients with KRAS mutation, do early growth rates or biomarker sets correlate with long‐term survival? Do certain combination therapies appear more effective in specific genetic/immunoprofile sub‐groups?


Caveats & Realistic Outlook

  • Complexity vs. toy model gap: Wolfram’s model is extremely simplified (cellular automata). Real biological systems are far more complicated (multi‐scale, heterogeneous, adaptive). So going from toy to real will require substantial abstraction and lots of simplifying assumptions. Don’t expect a “one simulation fits all” solution.

  • Computational irreducibility remains: Even with a good model, prediction and treatment outcome will still have fundamental limits (because nature probably embodies that irreducibility). The goal is less “perfect prediction” and more “improved guidance” or “identifying useful patterns”.

  • Data and parameterisation challenge: In real disease modelling you need data to parameterise the model (cells/population behaviour/rate constants) and validate it. Many of those may be unknown or noisy.

  • Scale & granularity choice matters: You’ll need to pick the right level of abstraction (e.g., population of cells vs molecular signalling vs tissue morphology) so that the model is tractable and still meaningful.

  • Treatment search explosion: The “treatment perturbation space” is huge (timing, dose, combination). Simulation or ML search will need heuristics, optimization strategies (genetic algorithms, reinforcement learning, etc).

  • Translational gap: It’s one thing to build a simulation/model and another to apply it in clinical decision-making, regulatory, ethical, etc.


Summary: My take (from your perspective)

Given your background in layered, modular architectures and interest in neuroscience, generative modelling, etc., I think this paradigm fits nicely with your mindset. You could treat the disease system like a layered sensor-relay-processor architecture: observe coarse biomarkers (sensor), use an intermediate state representation (relay), and feed it into a predictive module (processor) that suggests treatments. The computational formalization gives you a conceptual handle.

What I would suggest: pick a subdomain you can reasonably model (e.g., cell population dynamics in the tumour micro‐environment) and build a “minimal viable model” as proof of concept: e.g., stub out a 2D grid of cells with simple rules (proliferation, death, mutation, immune kill), then experiment with perturbations and treatments (a sketch of such a stub follows below). From there, you can layer in more complexity (signalling pathways, heterogeneity, spatial effects). You can treat the treatment-suggestion module as a search/optimization over perturbations.
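
Here is a rough sketch of such a stub: a 2D grid with proliferation, death, mutation, and immune kill, plus a crude “treatment” flag that boosts immune clearance of mutant cells. Every rate is an arbitrary placeholder, not a calibrated parameter:

```python
import numpy as np

EMPTY, NORMAL, MUTANT = 0, 1, 2   # cell states; all rates below are arbitrary

def simulate(grid_size=40, steps=150, p_divide=0.05, p_die=0.02, p_mutate=0.0005,
             mutant_advantage=3.0, p_immune_kill=0.01, treatment_on=False, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.full((grid_size, grid_size), NORMAL, dtype=np.int8)
    grid[grid_size // 2, grid_size // 2] = MUTANT        # the "disease perturbation"
    for _ in range(steps):
        for _ in range(grid_size * grid_size):           # random asynchronous updates
            x, y = rng.integers(0, grid_size, size=2)
            cell = grid[x, y]
            if cell == EMPTY:
                continue
            # death, plus immune kill of mutants ("treatment" boosts that clearance)
            kill = p_immune_kill * (5.0 if treatment_on else 1.0) if cell == MUTANT else 0.0
            if rng.random() < p_die + kill:
                grid[x, y] = EMPTY
                continue
            # proliferation into a random neighbour, if that neighbour is empty
            if rng.random() < p_divide * (mutant_advantage if cell == MUTANT else 1.0):
                dx, dy = rng.choice([-1, 0, 1], size=2)
                nx, ny = (x + dx) % grid_size, (y + dy) % grid_size
                if grid[nx, ny] == EMPTY:
                    child = MUTANT if (cell == NORMAL and rng.random() < p_mutate) else cell
                    grid[nx, ny] = child
    return int((grid == MUTANT).sum())

for treated in (False, True):
    label = "treated  " if treated else "untreated"
    print(label, "-> mutant cells after 150 steps:", simulate(treatment_on=treated))
```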

If you like, I can sketch a toy DSL or architecture (in code) for how you might implement Wolfram’s paradigm in your environment (with your ThalamusRelay/CorticalColumns style pipeline) tailored to a specific disease (you pick which one). Would that be helpful?

Source: Towards a Computational Formalization for Foundations of Medicine — Stephen Wolfram Writings
I can’t even begin to imagine how any physical system, let alone an organ or a disease, could be modeled using his computational framework of cellular automata.

You’re not wrong to feel that way — a literal mapping from real biology → Wolfram-style cellular automata is basically impossible in any meaningful, mechanistically faithful sense. And Wolfram is not claiming you can simulate a kidney or cancer or metabolism with Rule 30 and get anything realistic out of it.

The trick is this:

He’s not proposing cellular automata as models of biology.
He’s using them as generic computational universes — testbeds — to reason about what “disease”, “diagnosis”, and “treatment” mean in any complicated system.

Think of the CA not as a simulation of a heart or liver,
but as a stripped-down mathematical Petri dish where you can show:

  • perturbations ⇒ unpredictable cascades

  • recoverable vs. non-recoverable damage

  • identification of “coarse biomarkers” that correlate with outcome

  • treatments as “counter-perturbations”

  • genetic diversity as rule-set variation

  • evolutionary trade-offs between robustness and maximal performance

  • irreducibility ⇒ inherent limits on prediction and control

These are meta-concepts of medicine, not biological models.

So the value is conceptual, not representational.


Why you can’t model a real organ with Wolfram’s CA framework directly

Because:

  1. The physics and biochemistry of real tissues don’t live on a discrete, uniform grid.
    Cells are 3D, heterogeneous, multi-scale, and their interactions are chemical, mechanical, and stochastic.

  2. Real biology is massively multi-layered.
    Gene expression → signaling → metabolism → tissue growth → immune response → organism behavior.
    A simple CA runs on one discrete layer.

  3. CAs have no built-in biophysics.
    No proteins, no signaling gradients, no receptors, no metabolism, no geometry.
    You’d have to stack so many ad-hoc rules that you’d reinvent a full biological simulator anyway.

  4. Computational irreducibility isn’t the practical bottleneck.
    Biological modeling breaks down because we don’t know all mechanisms, not because the underlying computation is CA-like.

So if you’re thinking:

How do I map macrophage polarization or KRAS mutations or cytokine storms onto a cellular automaton rule set?
The answer is: you don’t.
That’s not the purpose of the framework.


So what is the purpose?

He’s trying to say:

“Let’s build a formal vocabulary for discussing perturbation, disease, healing, prediction, classification, and treatment in systems that are inherently computational.”

The CA isn’t modelling a kidney.
It’s demonstrating universal properties of complex systems:

  • Local rules can produce global patterns.

  • Small perturbations can cause catastrophic divergence.

  • Predictability is limited by irreducibility.

  • Practical diagnosis relies on coarse observables.

  • Treatment is a search problem in perturbation space.

  • Evolution shapes robustness vs. performance.

Those concepts do port over to real medicine — but you map the concepts, not the CA itself.


How do you apply this to a real disease? (In a grounded, practical way)

1. Build the real biological model using whatever is appropriate

  • ODEs for signaling pathways

  • agent-based models for tissue dynamics

  • ML models trained on patient data

  • tumor microenvironment simulations

  • metabolic network models

  • immunological simulators

These are the right tools for disease mechanics.

2. Then use Wolfram’s concepts on top of that model

Treat the disease model as your “computational universe” and apply:

A. Perturbation classification

Simulate many disease/onset scenarios → cluster outcomes → define disease “types”. (A toy clustering sketch appears at the end of this section.)

B. Coarse biomarkers

Find low-dimensional signatures that correlate with long-term prognosis.
(e.g., a cytokine triad at day 3 predicts outcome at day 30.)

C. Treatment as counter-perturbation

Run search/optimization for interventions (drug, dosage, timing, combination) that push the system back into a healthy basin of attraction.

D. Irreducibility-aware diagnosis

Accept that many diseases won’t have fully predictable trajectories.
Focus on probabilistic, coarse classification.

E. Genotype-response interplay

Simulate the disease model under genetic variations → identify how genotypes alter prognosis and treatment efficacy.

You’re not translating the CA rules.
You’re translating the ideas of how to think about diseases computationally.
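
To make the perturbation-classification idea (point A above) concrete, here is a toy sketch: a throwaway severity model is run under many random perturbations, each run is summarised by three coarse features, and k-means groups the runs into candidate “disease types”. All dynamics and numbers are invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

def toy_trajectory(onset_strength, clearance_rate, days=60):
    """Severity rises with a waning initial insult and decays with clearance."""
    severity, traj = 0.0, []
    for day in range(days):
        severity += onset_strength * np.exp(-0.1 * day) - clearance_rate * severity
        severity = max(severity, 0.0)
        traj.append(severity)
    return np.array(traj)

# Many simulated "patients", each a different random perturbation of two parameters.
features = []
for _ in range(300):
    traj = toy_trajectory(onset_strength=rng.uniform(0.1, 2.0),
                          clearance_rate=rng.uniform(0.05, 0.6))
    features.append([traj.max(), traj.argmax(), traj[-1]])   # peak, time-to-peak, residual severity

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(np.array(features))
for k in range(3):
    print(f"candidate disease type {k}: {(labels == k).sum()} runs")
```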


A concrete example: applying the framework to Type 1 diabetes

1. Build a biological model

Use an agent-based model of pancreatic islet destruction by autoreactive T-cells.

2. Define “disease perturbations”

  • initial autoreactive T-cell count

  • beta-cell vulnerability

  • cytokine environment

  • genetic risk factors (HLA type, etc.)

3. Define “observables”

  • C-peptide levels

  • blood glucose variability

  • insulin production

  • cytokine panel (IL-1, IFN-γ, etc.)

4. Apply Wolfram’s meta-ideas

  • Cluster different perturbations into “disease trajectories”

  • Find coarse biomarkers that predict rapid vs. slow onset

  • Search for treatment perturbations:

    • immunosuppressant dosing

    • beta-cell regeneration therapies

    • immune tolerance induction

  • Evaluate treatment robustness under genotype variation (a toy sketch of this follows at the end of this example)

  • Accept irreducible unpredictability in individual trajectories

  • Look for “pockets of reducibility” where interventions reliably work

This is close to what computational immunologists already do today, just without the unified vocabulary Wolfram is proposing.
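
And for the “robustness under genotype variation” point above, a minimal sketch under invented assumptions: a one-compartment beta-cell model in which a genotype-dependent vulnerability scales the autoimmune kill rate and an immunosuppressant reduces it:

```python
import numpy as np

def beta_cell_mass(vulnerability, immunosuppression=0.0, days=365):
    """Toy one-year trajectory of beta-cell mass (fraction of baseline)."""
    mass, regen = 1.0, 0.0005                                      # arbitrary rates
    kill_rate = 0.004 * vulnerability * (1.0 - immunosuppression)  # treatment damps the kill term
    for _ in range(days):
        mass += regen * (1.0 - mass) - kill_rate * mass
        mass = max(mass, 0.0)
    return mass

for vulnerability in np.linspace(0.5, 3.0, 6):                     # low- to high-risk "genotypes"
    untreated = beta_cell_mass(vulnerability)
    treated = beta_cell_mass(vulnerability, immunosuppression=0.6)
    print(f"vulnerability {vulnerability:.1f}: untreated {untreated:.2f}, treated {treated:.2f}")
```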


Bottom line

You cannot model an organ or disease using CAs in any literal or mechanistic sense.

But you can use Wolfram’s CA experiments as a philosophical / meta-computational framework for thinking about:

  • what disease is

  • why small changes have large effects

  • why classification is messy

  • why treatments are hit-or-miss

  • why prediction is hard

  • how to search treatment space computationally

  • how evolution shapes biological systems

  • how to identify coarse biomarkers

  • how to reason about robustness and variability

If you want, I can map this framework specifically to:

  • cancer

  • autoimmune disease

  • viral infection

  • neurodegenerative disease

  • metabolic disease

  • or something you’re already working on

Just tell me which target disease you want to explore, and I can draw the full architecture.

I’ve done this type of work at Carnegie Mellon University, where I worked with an applied mathematician to “model” disease using math. What Wolfram is suggesting is something we did long before he proposed his methods. His ideas are not wrong, but they are obvious, and I still don’t see what he is adding that wasn’t already being applied, to the best of everyone’s ability, given how complex biology is.

You’re not missing anything — your instinct is correct.
If you’ve done mathematical disease modeling at CMU (PDEs, ODE networks, agent-based models, stochastic processes, dynamical systems, etc.), then nothing Wolfram is saying will feel new, because it isn’t new to people who actually work in mathematical biology.

He’s essentially rediscovering the field from the viewpoint of his own computational universe.

Let me break down what’s going on, bluntly:


1. The core ideas he’s presenting are not new

Everything he highlights — unpredictability, emergent behavior, heterogeneity, sensitivity to initial conditions, difficulty of classification, robustness trade-offs, evolutionary constraints — has been well studied for decades in:

  • mathematical epidemiology

  • computational immunology

  • tumor modeling

  • gene regulatory network dynamics

  • systems biology

  • chaos theory in physiology

  • agent-based tissue models

  • nonequilibrium thermodynamics of living systems

The core concepts (irreducibility, tipping points, nonlinear dynamics, perturbation response, emergent morphology) have standard formulations that the math-bio community has been using for 40+ years.

So you’re right: this is not a novel scientific contribution.


2. What is Wolfram adding?

He’s not adding biological modeling tools.
He’s adding a philosophical reframing through the lens of his Wolfram Physics Project worldview:

  • Everything is computation.

  • Biological systems are computational processes.

  • Disease is a computational divergence.

  • Treatment is a computational intervention.

  • Prediction is limited by irreducibility.

  • Classification emerges statistically from the structure of state space.

This is conceptual scaffolding, not practical tooling.

To someone who has not done math-bio modeling, this seems fresh.
To someone who has, it’s obvious.

He’s basically saying:

“Let’s take the abstractions of mathematical biology, flatten them into a uniform computational universe, and study them through rule-based toy systems.”

He’s exporting his epistemology, not introducing new technical methods.


3. Why does this look underwhelming to people with real modeling experience?

Because the CA framework can't:

  • represent multi-scale tissue geometry

  • encode mechanistic biochemistry

  • match real experimental gradients

  • model signal transduction

  • handle stochastic spatiotemporal events

  • integrate clinical data

  • represent regulatory networks

  • capture metabolic fluxes

  • deal with gene-environment interaction

  • incorporate population heterogeneity

  • represent physical constraints (fluid flow, diffusion, mechanics)

Nor does it improve numerical stability, inference, parameter estimation, identifiability, or validation — the real problems mathematical modelers face.

So it adds zero practical modeling ability.


4. His novelty is not biological; it’s meta-computational

If we’re being honest, Wolfram’s contributions are:

A. A unifying conceptual language

Useful for outsiders, students, or cross-disciplinary conversation, but not for practitioners.

B. The claim that computational irreducibility is the fundamental limiting principle in medicine

This isn’t entirely wrong, but it’s framed in a way that ignores:

  • stochastic complexity

  • parameter uncertainty

  • data sparsity

  • measurement noise

  • model identifiability issues

  • multiscale coupling

People in systems biology already understand that perfect prediction is impossible. They just explain it using different math (Lyapunov stability, nonlinear dynamics, stochastic processes, etc.).

C. A toy universe to illustrate ideas

Not a model of reality, just a demonstration of structural principles.

This is pedagogical, not technical.


5. What he’s missing (and why your reaction is justified)

Your experience tells you this instinctively:

  • biology is messy because biology is messy, not because we lack a CA formalism

  • the hard parts are empirical, not computational abstraction

  • real diseases require models that incorporate domain specifics

  • irreducibility is not the only bottleneck

  • treatment development is constrained by biology, not rule-space

  • classification requires data, not toy universes

And importantly:

Mathematical biology already treats disease as a perturbation in state space.
That’s literally what dynamical systems modeling is.

So when Wolfram delights in “discovering” this in Rule 30, you’ve already done it in PDEs and stochastic processes.


6. So why is this getting attention?

Because Wolfram has a large audience, and he's reframing well-known ideas in his own syntax:

  • cellular automata

  • rule spaces

  • computational irreducibility

  • evolutionary exploration

  • coarse-grain observables

  • computational universes

It looks new to people outside modeling, but inside the field it's just repackaging.

Think of it as:

→ “perspective tourism”

not

→ “methodological innovation”


7. The blunt conclusion

Your reaction is correct:

He is not adding new modeling capability or biological insight.
He is translating systems biology and mathematical modeling into Wolfram-speak.