What do they actually do
Thunder Compute runs a developer-focused GPU cloud. Users launch GPU-backed instances from a web console, CLI, or a VS Code integration, then connect in one click without manual driver setup. The service offers two tiers: a low-cost “prototyping” tier that uses GPU virtualization/oversubscription (“GPU‑over‑TCP”) and a “production” tier that turns off those optimizations for predictable performance and full CUDA/graphics compatibility (quickstart, how it works, production mode, pricing).
A typical flow: pick a GPU and tier, launch, and connect from VS Code or the CLI. Environments include persistent storage and common ML tooling; users can snapshot instances and swap GPU types mid‑project without rebuilding the environment (quickstart, production mode). Thunder emphasizes single‑server instances with up to a few GPUs each; it does not offer the large‑scale autoscaling or cluster features that hyperscalers provide today (pricing, blog comparison).
Public on‑demand pricing includes common GPUs like A100/H100 and lower prototyping rates. The company targets independent ML engineers, researchers, startups, and students, and also advertises enterprise/VPC installs for customers who want Thunder’s software in their own environment (pricing, about).
Who are their target customer(s)
- Independent ML engineer / solo researcher: Needs cheap, on‑demand GPUs with minimal setup; wants one‑click access from their editor instead of managing drivers/SSH (pricing, quickstart).
- Early‑stage startup ML team: Wants to prototype quickly and switch GPU types without rebuilding environments; can’t afford long lead times or high cloud bills (quickstart, pricing).
- Small product team doing fine‑tuning or low‑latency inference: Needs predictable, repeatable performance and dislikes instability from oversubscription; requires a supported production option for customer‑facing runs (production mode).
- Students and academic researchers: Are budget‑conscious and need short bursts of GPU time with persistent storage for checkpoints and minimal setup overhead (pricing, quickstart).
- IT / infrastructure lead at a mid‑sized company: Wants to reduce GPU cloud spend or utilize existing on‑prem/VPC GPUs while maintaining control and uptime; needs software deployable into their environment and compatible with corporate networks (about, production mode).
How would they acquire their first 10, 50, and 100 customers
- First 10: Recruit from the YC network and founder contacts; offer credits and hands‑on onboarding to observe usage, remove setup friction, and prioritize bug fixes. Publish two short case notes to guide immediate product and messaging changes.
- First 50: Target independent ML engineers, students, and early startups via the VS Code marketplace, relevant subreddits/Discords/Slacks, and a small university program; provide time‑limited credits and quickstart templates for fine‑tuning/inference. Add a referral program and publish 3–4 how‑to guides plus a demo repo showing GPU swaps without rebuilds.
- First 100: Convert the best users into paid pilots with discounted production tier and a simple one‑month SLA; offer a one‑page cost calculator for teams with cloud GPU bills. Start light outbound to small ML startups and IT leads, promote via VS Code marketplace and a few university/bootcamp partnerships, and standardize onboarding/support.
What is the rough total addressable market
Top-down context:
The direct market is GPU‑as‑a‑service (GPUaaS), estimated at roughly $4.37B in 2025 (Grand View Research). Beyond that, the broader data‑center GPU market is on the order of $100B+ annually (e.g., ~$122B in 2025), with heavy AI capex tailwinds (Stratview, Goldman Sachs).
Bottom-up calculation:
Treat the GPUaaS category (~$4.37B in 2025) as the immediate SAM for Thunder’s current product; at that base, capturing 0.1–1.0% implies roughly $4.4M–$43.7M in annual revenue potential from on‑demand GPU rentals (Grand View Research).
Assumptions:
- Thunder competes primarily in the GPUaaS segment with on‑demand instances and a production tier similar to market peers.
- Share‑of‑market arithmetic approximates revenue potential at current category size and typical pricing levels.
- Enterprise/on‑prem revenue is excluded from the near‑term bottom‑up and considered longer‑term expansion.
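The share‑of‑market arithmetic above can be sketched in a few lines of Python. The category size mirrors the ~$4.37B GPUaaS estimate cited in the text; the 0.1%–1.0% capture rates are illustrative assumptions, not forecasts:

```python
# Bottom-up sketch: implied annual revenue at small shares of the GPUaaS category.
# Assumption: ~$4.37B category size in 2025 (Grand View Research, as cited above).
GPUAAS_2025_USD = 4.37e9

def revenue_at_share(share: float, market: float = GPUAAS_2025_USD) -> float:
    """Implied annual revenue at a given market-share fraction."""
    return share * market

low = revenue_at_share(0.001)   # 0.1% share
high = revenue_at_share(0.010)  # 1.0% share
print(f"${low / 1e6:.1f}M - ${high / 1e6:.1f}M")  # $4.4M - $43.7M
```

The same arithmetic scales directly if the category estimate or capture assumptions change; it deliberately excludes enterprise/on‑prem revenue, per the assumptions above.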
Who are some of their notable competitors
- Vast.ai: Marketplace for on‑demand GPUs from individuals and data centers; very low prices but variable supply/reliability since it’s a marketplace (site, pricing).
- RunPod: Developer‑focused GPU cloud with fast spin‑up, VS Code/cloud‑IDE workflows, and serverless inference; emphasizes flexible instance types and endpoints over GPU‑oversubscription (product, VS Code guide).
- Lambda (Lambda Cloud): Turnkey GPU instances and multi‑GPU clusters aimed at predictable, production‑grade performance and enterprise/private clusters—closer to Thunder’s production mode (instances, pricing).
- Paperspace / Gradient (DigitalOcean): Integrated MLOps platform (notebooks, training, deployments) for teams, students, and researchers; competes on managed UX and workflows rather than aggressive oversubscription (Gradient, notebooks).
- CoreWeave: Large AI‑focused cloud for production scale with high‑performance networking and enterprise tooling; better for big training/low‑latency inference than budget prototyping (platform, AI infrastructure).