
Report from 12 days ago
Trim is building a large AI model that simulates how physical systems evolve over time, aiming to act as a faster surrogate for traditional physics solvers. Their early work (“Trim Transformer”) uses a linear-attention architecture that targets better scaling with grid size and dimensionality than standard transformer attention and traditional numerical methods. In public benchmarks, Trim reports >90% lower memory use and 3.5x faster time per epoch than a standard PyTorch transformer on 2D Navier–Stokes at similar loss, and describes runtime that grows logarithmically with simulation horizon in their architecture, versus linearly in many solvers and models (Trim homepage, Trim blog).
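To make the scaling claim concrete: standard attention materializes an N×N score matrix over the N grid points, while kernelized linear attention reorders the computation so nothing quadratic in N is ever formed. Below is a minimal PyTorch sketch of that trick (in the style of Katharopoulos et al., 2020); Trim's actual layer is not public, so the feature map and function names here are illustrative assumptions, not their implementation.

```python
import torch
import torch.nn.functional as F

def standard_attention(q, k, v):
    # Materializes an (N, N) score matrix: O(N^2) time and memory in the
    # token count N (grid points, once a field is flattened to a sequence).
    scores = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
    return scores @ v

def linear_attention(q, k, v):
    # Kernelized linear attention (illustrative, not Trim's architecture):
    # computing phi(Q) @ (phi(K)^T V) instead of (phi(Q) phi(K)^T) @ V never
    # forms the (N, N) matrix, so cost is O(N * d^2) time, O(N * d) memory.
    phi_q = F.elu(q) + 1.0                       # positive feature map
    phi_k = F.elu(k) + 1.0
    kv = phi_k.transpose(-2, -1) @ v             # (d, d) key-value summary
    z = phi_q @ phi_k.sum(dim=-2, keepdim=True).transpose(-2, -1)  # (N, 1)
    return (phi_q @ kv) / z                      # normalized output, (N, d)

# A 64x64 grid flattens to N = 4096 tokens; the score matrix in standard
# attention alone would hold ~16.8M floats, which linear attention skips.
q = k = v = torch.randn(4096, 64)
out = linear_attention(q, k, v)
```

This is also why the reported memory savings should grow with resolution: doubling the number of grid points doubles linear attention's cost but quadruples standard attention's.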
They’re positioning the model for research and engineering domains where high‑fidelity simulations are currently too slow or costly, such as gravitational‑wave signal modeling, fluids/climate, materials and molecular systems, and real‑time robotics/controls (Trim homepage, Trim blog). Today, this looks like an applied research model and early pilots aimed at replacing or accelerating expensive simulation workloads, rather than a general-purpose drop‑in product.
Top-down context:
Trim sells into budgets historically spent on CAE/simulation software and HPC compute for physics-heavy workloads. Global CAE software is estimated at around $12B in 2025 and growing (FMI CAE), while cloud HPC alone is estimated at ~$35B in 2025 (Mordor Intelligence cloud HPC). These categories frame a broader simulation market in the tens of billions.
Bottom-up calculation:
Initial serviceable TAM across five near‑term segments: ~150 climate/weather orgs × $300k ARR + ~1,000 materials/comp‑chem teams × $125k + ~800 industrial/robotics engineering orgs × $150k + ~80 national‑lab/big‑science programs × $500k + ~300 astrophysics/GW groups × $75k = ~$352.5M/year, i.e. roughly $350–400M/year. This reflects budgets for surrogate modeling software plus onboarding/support.
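The sum is a straight product-and-add over the five segments; a short script reproducing it (all segment counts and ARR figures are the report's own assumptions above, not independent data):

```python
# Bottom-up serviceable TAM; counts and ARR figures are the report's
# stated assumptions, restated here for the arithmetic only.
segments = {
    "climate/weather orgs":              (150,   300_000),
    "materials/comp-chem teams":         (1_000, 125_000),
    "industrial/robotics engineering":   (800,   150_000),
    "national-lab/big-science programs": (80,    500_000),
    "astrophysics/GW groups":            (300,    75_000),
}
total = sum(count * arr for count, arr in segments.values())
print(f"${total / 1e6:.1f}M/year")  # -> $352.5M/year, rounded to $350-400M
```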
Assumptions: