
Lilac

Spot GPU instances sourced from idle enterprise GPUs

Summer 2025 · Active · 2025 · Website

Machine Learning · Cloud Computing · Infrastructure · AI

Report from 20 days ago

What do they actually do

Lilac makes self-hosted, open-source software for managing and scheduling GPU training jobs across your own machines and cloud VMs. Teams install a small server and agent, register nodes, and submit jobs via a CLI or web UI; the system queues, schedules, and tracks jobs in one place across mixed environments (GitHub, docs).
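The described workflow (run a server, join agents from GPU nodes, submit and track jobs) might look something like the session below. Every command and flag here is a hypothetical illustration of that flow, not Lilac's actual CLI; consult the project's docs for real usage.

```shell
# Hypothetical commands sketching the described workflow --
# not Lilac's real CLI.
lilac server start                           # run the control-plane server
lilac agent join --server http://head:8080   # register this GPU node
lilac submit --gpus 2 -- python train.py     # queue a training job
lilac jobs list                              # track queued/running jobs
```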

They are also building a spot GPU marketplace that would broker idle enterprise and cloud GPU capacity to buyers as interruptible instances. This marketplace is not live yet; the website invites waitlist signups and highlights partner onboarding activity, including a public letter of intent (LOI) with BluSky AI to resell idle capacity (getlilac.com, press).

Who are their target customer(s)

  • Data scientists / ML engineers running model training: They need a simple way to submit, monitor, and re-run jobs across cloud and on‑prem GPUs; today they spend time hunting for available GPUs and dealing with failed allocations (docs).
  • Infrastructure / platform engineers managing GPU fleets: They want one place to register mixed cloud and on‑prem nodes, control access, and keep utilization high; current tooling is fragmented across environments (docs, YC).
  • Startups and academic researchers seeking cheaper GPU time: They need access to enterprise‑grade GPUs at lower cost and can tolerate interruptions, but don’t trust consumer spot channels for reliability or support (getlilac.com).
  • Enterprise IT/procurement with unused GPU contracts: They sit on idle, expensive GPU capacity but lack an easy, compliant way to monetize or share it externally (getlilac.com, press).
  • Cloud/managed service providers with excess GPU capacity: They want to fill unused hours but integrating pricing, billing, and interruption handling into a third‑party marketplace is operationally heavy (getlilac.com).

How would they acquire their first 10, 50, and 100 customers

  • First 10: Convert active open‑source users (stargazers, issue openers, contributors) into paid pilots with free white‑glove onboarding, short pilot SLAs, and one‑to‑one support to remove integration friction (GitHub, docs, YC).
  • First 50: Run targeted outreach to startup and university ML teams with tutorials/webinars and reproducible examples; convert waitlist signups with time‑limited pilot credits and published playbooks and results to ease approvals (docs, getlilac.com).
  • First 100: Scale via partner listings of idle capacity and a small direct sales motion to mid‑market infra teams using a templated procurement pack and onboarding SRE; offer early‑access buying flow with standardized billing and a clear interruption/SLA spec (getlilac.com, press).

What is the rough total addressable market

Top-down context:

The global data‑center GPU market is estimated around $120B in 2025, with substantial growth expected through 2030; cloud infrastructure services overall exceed $400B annually, underscoring the scale of GPU‑backed workloads (MarketsandMarkets, Statista).

Bottom-up calculation:

Using the ~$120B 2025 data‑center GPU market as a baseline and assuming 10–30% of spend is realistically accessible as interruptible/spot capacity yields a TAM of ~$12B–$36B for a marketplace brokering idle enterprise and cloud GPU hours (MarketsandMarkets).

Assumptions:

  • Baseline uses MarketsandMarkets’ 2025 data‑center GPU market (~$120B).
  • 10–30% of GPU spend is interruptible/resellable (idle enterprise capacity and cloud excess).
  • Marketplace can access both enterprise and cloud supply with appropriate legal, billing, and interruption handling.
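The bottom-up range above is simple arithmetic on two inputs; both the ~$120B baseline and the 10–30% accessible share are this report's assumptions, not published Lilac figures. A minimal reproduction:

```python
# Bottom-up TAM sketch. Both inputs are assumptions from this report:
# the ~$120B 2025 data-center GPU market (MarketsandMarkets) and an
# assumed 10-30% interruptible/resellable share.
baseline_gpu_market = 120e9          # ~$120B, 2025
accessible_share = (0.10, 0.30)      # assumed spot-accessible fraction

tam_low = baseline_gpu_market * accessible_share[0]
tam_high = baseline_gpu_market * accessible_share[1]
print(f"TAM range: ${tam_low / 1e9:.0f}B - ${tam_high / 1e9:.0f}B")
```

The range scales linearly with the accessible-share assumption, which is the weakest input: a marketplace that can only broker cloud excess (not enterprise contracts) would sit at the low end or below it.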

Who are some of their notable competitors

  • Run:AI: Enterprise GPU scheduler and resource management layer for multi‑tenant clusters; overlaps with Lilac’s self‑hosted scheduling/control plane.
  • Vast.ai: Public marketplace matching GPU hosts with renters for cheaper, interruptible instances; overlaps with Lilac’s planned spot GPU marketplace.
  • Slurm (SchedMD): Widely used open‑source HPC scheduler for on‑prem clusters; overlaps on job scheduling but not on a resale marketplace.
  • CoreWeave: Specialist GPU cloud offering on‑demand and discounted/spot capacity; competes for buyers seeking lower‑cost, enterprise‑grade GPUs.
  • Lambda (Lambda Labs): GPU instances, on‑prem appliances, and managed clusters for ML teams; overlaps where teams want simple cluster setup and steady GPU access across environments.