
Starcloud

Data centers in space

Summer 2024 · Active · 2024 · Website

Artificial Intelligence · Hard Tech · Satellites · Climate · Cloud Computing

Report from 29 days ago

What do they actually do

Starcloud operates an in‑orbit computing payload that runs datacenter‑class GPUs on a small satellite in low Earth orbit. In November 2025, it launched Starcloud‑1 carrying an Nvidia H100 and used it to run and train AI models in space, including Google’s Gemma and a NanoGPT run the company described as the first LLM trained in orbit (YC profile, CNBC).

Early customers use it to process satellite data before downlink and to run proof‑of‑concept AI workloads. Reported examples include on‑orbit inference on synthetic‑aperture radar (SAR) imagery and hosting a partner cloud stack (Crusoe) so third parties can deploy workloads in orbit. The typical flow is: uplink data or a task to Starcloud’s spacecraft, schedule it on the onboard GPU(s), run training/fine‑tuning or inference in orbit, then downlink much smaller processed results, reducing bandwidth costs and latency (GeekWire, white paper).
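The bandwidth argument behind that flow is easy to quantify. A minimal sketch, with purely illustrative names (`Task`, `downlink_savings` are not Starcloud's API, and the SAR scene sizes are hypothetical):

```python
# Illustrative model of the on-orbit processing flow described above:
# raw sensor data is processed in orbit and only a small result is downlinked.
from dataclasses import dataclass

@dataclass
class Task:
    raw_mb: float     # raw data captured (or uplinked) on the spacecraft
    result_mb: float  # processed output actually downlinked to Earth

def downlink_savings(task: Task) -> float:
    """Fraction of downlink bandwidth saved by processing in orbit."""
    return 1.0 - task.result_mb / task.raw_mb

# Hypothetical SAR scene: ~2 GB of raw imagery reduced to ~5 MB of detections.
sar = Task(raw_mb=2000, result_mb=5)
print(f"downlink saved: {downlink_savings(sar):.1%}")
```

Even with conservative reduction ratios, most of the raw volume never needs to transit the scarce downlink, which is the cost and latency lever the white paper emphasizes.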

Who are their target customer(s)

  • Satellite operators (e.g., Earth‑observation constellations): Downlink capacity is scarce and expensive; moving large volumes of raw sensor data to Earth adds cost and delays. They need reliable, on‑orbit preprocessing/inference so only useful, reduced outputs are returned.
  • Geospatial analytics firms and commercial imagery buyers: Time‑to‑insight suffers when teams must transfer and process terabytes of imagery on the ground. They need faster, smaller data products that arrive ready for analysis or end‑customer delivery.
  • AI/ML teams needing burst GPU capacity for training/fine‑tuning: It's hard to secure top‑tier GPUs on demand, and energy and cooling costs are high. They need additional, bookable capacity windows—even in unconventional locations—when terrestrial supply is constrained.
  • Cloud operators and managed‑service providers: They want to offer orbital workloads without building space‑qualified power, cooling, and ops. They need a platform partner that provides the physical infrastructure and reliable operations in orbit.
  • Hyperscalers and large industrial compute buyers (longer‑term): Compute demand and energy/cooling costs are growing. They explore long‑term, diversified capacity options and would consider new supply models if cost, reliability, and integration hurdles are met.

How would they acquire their first 10, 50, and 100 customers

  • First 10: Run tightly scoped pilots with satellite operators and imagery customers that process real sensor data onboard (e.g., SAR), proving downlink savings and faster turnaround; offer free/discounted compute and hands‑on integration to convert pilots to first paid deals (YC, CNBC, white paper).
  • First 50: Package a standard on‑orbit inference product (fixed deliverables/pricing) and sell it via direct outreach to analytics buyers and through data‑provider partnerships so reps can close without custom engineering each time; use case studies and references to shorten cycles (YC, white paper).
  • First 100: Scale through hosting/reseller deals with cloud operators (e.g., Crusoe) while running targeted outbound to AI teams and enterprises for multi‑month reserved blocks; publish uptime/throughput metrics and simplify booking/APIs to build trust and accelerate procurement (GeekWire, NVIDIA blog, white paper).

What is the rough total addressable market

Top-down context:

Near‑term demand is anchored in satellite data services/Earth‑observation processing, a market estimated around $5–13B today depending on scope (Grand View Research, Fortune Business Insights, Allied MR via Yahoo). Longer‑term adjacencies include GPU‑as‑a‑service and hyperscale cloud/datacenter spend, which are much larger but require proven scale and reliability (GVR GPUaaS, FBI Hyperscale).

Bottom-up calculation:

Using a $10B mid‑range satellite data services market and assuming 20–40% tied to processing/analytics and bandwidth that could move on‑orbit, the near‑term processing pool is ~$2–4B. If Starcloud captures 5–20% of that slice in early geographies/use cases, that implies tens to a few hundred million dollars in annual revenue potential as capacity scales (assumptions informed by EO market reports and current on‑orbit SAR inference demonstrations, e.g., GeekWire).

Assumptions:

  • 20–40% of EO/satellite data spend is processing/analytics and bandwidth that can be shifted to orbit.
  • Adoption starts with SAR/imagery customers and grows as standard offerings and SLAs mature.
  • Pricing and service reliability are competitive with terrestrial alternatives for targeted workloads.
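The bottom-up arithmetic above can be reproduced directly. All inputs are the report's own assumptions (mid-range market size, shiftable share, capture rate), not company data:

```python
# Back-of-the-envelope TAM math from the stated assumptions (illustrative only).
market = 10e9                 # mid-range satellite data services market, $/yr
shiftable = (0.20, 0.40)      # share tied to processing/analytics + bandwidth
capture = (0.05, 0.20)        # plausible early capture of that slice

pool_low = market * shiftable[0]    # $2B
pool_high = market * shiftable[1]   # $4B
rev_low = pool_low * capture[0]     # conservative end
rev_high = pool_high * capture[1]   # aggressive end

print(f"near-term processing pool: ${pool_low/1e9:.0f}-{pool_high/1e9:.0f}B")
print(f"annual revenue potential:  ${rev_low/1e6:.0f}-{rev_high/1e6:.0f}M")
```

The low end of the capture range lands around $100M/yr and the high end near $800M/yr, bracketing the "tens to a few hundred million dollars" estimate in the calculation above.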

Who are some of their notable competitors

  • D‑Orbit (Space Cloud Services): Operates ION Satellite Carrier with Space Cloud Services to run applications and process data directly in space—an established in‑orbit compute hosting offering (D‑Orbit).
  • Unibap (SpaceCloud): Provides radiation‑tolerant edge computing hardware/software (SpaceCloud OS and platforms) used for in‑orbit processing, including with D‑Orbit missions (Unibap, Unibap+D‑Orbit).
  • Ubotica (CogniSAT): Delivers AI edge computing for satellites (CogniSAT) and powers ESA Φsat missions to execute AI apps onboard, reducing downlink needs (Ubotica, ESA Φsat‑2).
  • HPE Spaceborne Computer: ISS‑based high‑performance edge computing program proving COTS servers for AI/HPC in space—demonstrates space compute feasibility, though primarily in ISS context (HPE).
  • Sidus Space (FeatherEdge/Exo‑Base): Offers on‑orbit AI/ML processing as a service integrated on its LizzieSat platform (FeatherEdge/Exo‑Base), targeting near‑real‑time EO analytics (press).