
Stellon Labs

Building tiny frontier AI models that run on edge devices

Summer 2025 · Active · 2025 · Website
Artificial Intelligence · Edge Computing · Semiconductors

Report from 24 days ago

What do they actually do

Stellon Labs is a two-person AI research group building tiny, open-source neural models that run on CPUs: on phones, on Raspberry Pis, and in browsers. Today they ship KittenTTS, a ~15M-parameter text-to-speech model with an ONNX weight file under 25 MB, published as code, weights, and a pip/ONNX package designed to run without a GPU (GitHub, Hugging Face).

Developers are already running KittenTTS in browser demos and on small self-hosted servers on low-power devices, and the launch attracted community traction on Hacker News and Hugging Face (web demo, server example, HN thread, HF space).
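
For a concrete sense of what running without a GPU looks like, the sketch below loads a small ONNX TTS model with onnxruntime restricted to the CPU execution provider and prints its input/output signature. The file path is a placeholder rather than the official weight file name, and the text preprocessing the model expects (tokenization/phonemization) is omitted because those details are not covered here.

```python
# Minimal sketch: load a small ONNX TTS model CPU-only and inspect its I/O.
# "kitten_tts.onnx" is a hypothetical local path, not the official file name.
import onnxruntime as ort

MODEL_PATH = "kitten_tts.onnx"  # placeholder for the ~25 MB weight file

# Restrict execution to the CPU provider: no GPU or CUDA libraries required.
session = ort.InferenceSession(MODEL_PATH, providers=["CPUExecutionProvider"])

print("Execution providers:", session.get_providers())
for inp in session.get_inputs():
    print("input :", inp.name, inp.shape, inp.type)
for out in session.get_outputs():
    print("output:", out.name, out.shape, out.type)

# A real run would convert input text into the tensors the model expects and
# call session.run(None, feed); that preprocessing is model-specific and
# intentionally left out of this sketch.
```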

Who are their target customer(s)

  • Mobile app developers who need offline voice: They can’t rely on cloud TTS because of latency, connectivity, and privacy constraints; they need very small CPU-only models that won’t drain battery and are easy to bundle. Stellon’s ~25 MB, CPU-first ONNX TTS targets this need (GitHub, HF model).
  • Hobbyists and makers on Raspberry Pi or in the browser: Most modern neural TTS models need GPUs or large downloads; they want something that runs locally on tiny hardware or client-side. Community demos show KittenTTS running in browsers and on small devices (web demo, server example).
  • Small product teams for privacy‑sensitive devices (toys, healthcare, appliances): They can’t send user audio to third‑party clouds and lack compact, high‑quality on‑device models that meet legal and product constraints. Stellon focuses on tiny local models and packaging for constrained hardware.
  • Researchers and prototyping engineers needing reproducible, open tiny models: SOTA speech models are often large, closed, or GPU-only; they need small open weights and runnable code to iterate quickly. Stellon publishes code, weights, and a pip package for easy local runs (GitHub, HF model).
  • Hardware OEMs and integrators shipping constrained devices: Porting and optimizing ML for ARM phones, wearables, or embedded boards is time-consuming and often needs vendor support. Stellon’s roadmap includes mobile SDKs, deployment tooling, and custom help for on-device integration (GitHub checklist).

How would they acquire their first 10, 50, and 100 customers

  • First 10: Convert active community users by directly contacting starrers/forkers and HN/Reddit/HF commenters; offer hands‑on help (paired sessions, installable demo app) in exchange for feedback and a short case study.
  • First 50: Publish step‑by‑step tutorials and sample apps (Android/iOS/Raspberry Pi) plus a tiny “getting started” library; promote in developer forums and run free group onboarding webinars. Begin small fixed‑fee pilots for indie teams to convert learnings into product improvements and references.
  • First 100: Package short, paid OEM/integrator pilots (model tuning + integration guide + a week of support). List SDK and paid support on HF/PyPI/marketplaces, demo at embedded/IoT events, and use pilot references to sell repeat engagements.

What is the rough total addressable market

Top-down context:

Analysts estimate the global TTS market at roughly USD 3.5–4.0B with ~12–14% CAGR, while on-device/edge AI markets are in the multi-tens of billions and growing quickly (MarketsandMarkets; Mordor Intelligence; Grand View Research, On-Device AI; Grand View Research, Edge AI). Stellon targets the on-device TTS slice for apps and devices where cloud is impractical.

Bottom-up calculation:

Using a USD ~3.5–4.0B TTS base, if 5–10% of usage is on-device near-term, TAM ≈ USD 175–400M; if 15–25% shifts on-device in 2–4 years, TAM ≈ USD 525M–1.0B (MarketsandMarkets). Expansion into adjacent on-device speech/multimodal tooling taps parts of the much larger edge AI market, estimated in the tens of billions (Grand View Research).

Assumptions:

  • Share of TTS that runs on‑device grows from ~5–10% today to ~15–25% in 2–4 years (driven by privacy, latency, and offline needs).
  • Stellon monetizes via SDKs, support, and custom integrations rather than cloud usage fees, focusing on mobile and embedded deployments.
  • Analyst reports measure different scopes (software vs. total edge hardware+software), so figures are used to bracket opportunity, not summed.
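
Applying these assumptions to the top-down TTS base reproduces the bottom-up bracket above. A minimal sketch of the arithmetic, using only the report's own estimates:

```python
# Back-of-the-envelope TAM bracket from the assumptions above (USD billions).
tts_market = (3.5, 4.0)          # analyst range for the global TTS market
on_device_now = (0.05, 0.10)     # assumed on-device share of TTS usage today
on_device_future = (0.15, 0.25)  # assumed on-device share in 2-4 years

def bracket(market, share):
    """Low bound = low market x low share; high bound = high market x high share."""
    return market[0] * share[0], market[1] * share[1]

low_now, high_now = bracket(tts_market, on_device_now)
low_fut, high_fut = bracket(tts_market, on_device_future)

print(f"Near-term on-device TTS TAM: ${low_now * 1000:.0f}M-${high_now * 1000:.0f}M")
print(f"2-4 year on-device TTS TAM:  ${low_fut * 1000:.0f}M-${high_fut * 1000:.0f}M")
# -> roughly USD 175-400M near term and USD 525M-1.0B in 2-4 years,
#    matching the ranges quoted in the bottom-up calculation.
```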

Who are some of their notable competitors