What do they actually do
Storia AI builds Sage, an open‑source, chat‑style code assistant you can run locally or via their hosted app. Developers point Sage at a repo (GitHub URL or local), index it or use lighter retrieval, then ask questions about how the code works. Answers come with citations back to source files and docs. The project is Apache‑2.0 licensed and configurable to use local LLMs (e.g., Ollama) and vector stores (e.g., Marqo) or cloud providers like OpenAI and Anthropic (README, LICENSE, docs).
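The workflow described above (index a repo, retrieve relevant code, answer with citations back to source files) can be sketched as a toy retrieve-then-cite loop. This is not Sage's actual code or API; the line-window chunking and keyword-overlap scoring below are simplified stand-ins for a real embedding index (e.g., Marqo) and an LLM call (e.g., via Ollama or a cloud provider), and all names are hypothetical.

```python
# Toy sketch of the retrieve-then-cite pattern used by repo-aware assistants.
# NOT Sage's implementation: chunking and scoring are deliberately naive.
from dataclasses import dataclass

@dataclass
class Chunk:
    path: str   # source file the chunk came from
    line: int   # 1-based starting line, used for the citation
    text: str

def index_repo(files: dict[str, str], window: int = 3) -> list[Chunk]:
    """Split each file into fixed-size line windows (stand-in for embedding/indexing)."""
    chunks = []
    for path, body in files.items():
        lines = body.splitlines()
        for i in range(0, len(lines), window):
            chunks.append(Chunk(path, i + 1, "\n".join(lines[i:i + window])))
    return chunks

def retrieve(chunks: list[Chunk], query: str, k: int = 2) -> list[Chunk]:
    """Rank chunks by keyword overlap; a real system would use vector similarity."""
    q = set(query.lower().split())
    return sorted(chunks, key=lambda c: -len(q & set(c.text.lower().split())))[:k]

def answer(chunks: list[Chunk], query: str) -> tuple[str, list[str]]:
    hits = retrieve(chunks, query)
    citations = [f"{c.path}:{c.line}" for c in hits]
    context = "\n---\n".join(c.text for c in hits)
    # In a real pipeline, context + query go to an LLM and the model's answer
    # is returned alongside the citations; here we return the grounded context.
    return context, citations

repo = {
    "auth.py": "def login(user):\n    # check password hash\n    return verify(user)",
    "db.py": "def connect():\n    return Pool(dsn)",
}
ctx, cites = answer(index_repo(repo), "how does login verify the password")
print(cites)  # the auth.py chunk ranks first because it shares query keywords
```

The citation strings (`path:line`) are what makes answers auditable: a reader can jump from the assistant's claim straight to the code it was grounded in.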
Today, Storia has pre‑indexed a set of open‑source repos and is onboarding a closed group of partner companies (seed → Series C). The Sage repo shows community interest but currently carries an archived/read‑only notice; the hosted app and docs remain referenced (YC page, Sage repo).
Next, they aim to deepen contextual understanding (grounding answers in exact code and citations) and expand from public repos to private team code with deployment options that meet privacy needs, as they continue onboarding partners into real workflows (YC page).
Who are their target customer(s)
- Growing startup engineering teams (seed → Series C): Onboarding is slow and interrupts senior engineers; new hires struggle to understand unfamiliar code and past decisions. They want faster, code‑grounded answers with citations to speed ramp‑up (YC page).
- Open‑source maintainers: They field repetitive questions from users/contributors. They want authoritative, cite‑backed answers about the repo so they spend less time explaining design and usage choices (Sage repo).
- Security‑conscious teams with private codebases: They cannot send code to public LLMs. They need a local/on‑prem option that keeps code private while returning trustworthy, source‑linked answers (README).
- Engineering managers and senior engineers on legacy systems: They must review, refactor, and make architectural decisions and need evidence‑based explanations of why code is written a certain way—not just generated snippets (YC page).
- Internal platform/dev‑tooling teams: They need a code assistant that integrates with CI/search and can be tuned for precise retrieval and citations without hallucinations (benchmarks).
How would they acquire their first 10, 50, and 100 customers
- First 10: Convert existing beta partners and indexed OSS maintainers into hands‑on pilots, with concierge onboarding and two short case studies quantifying time saved and trust gains.
- First 50: Use referrals from early pilots; target startup engineering teams via YC/network intros and relevant Slack/Discord communities; run live demos and privacy‑focused office hours to convert to paid trials.
- First 100: Launch a self‑serve hosted tier plus a clear on‑prem/consulting package; partner with developer platforms and large OSS projects to list Sage as an add‑on for low‑friction discovery and adoption.
What is the rough total addressable market
Top-down context:
Analysts size AI code tools/software dev tools at roughly $6–7B today, with projections into the $20–35B range by ~2030 (Mordor, Grand View, Markets & Markets, Mordor dev tools).
Bottom-up calculation:
Using a mid‑case: 20.8M professional developers × ~$500 per developer per year ≈ $10.4B TAM, grounded in developer population estimates and common AI tool budgets (JetBrains, DX budgeting).
Assumptions:
- Targeting professional developers (≈20.8M) rather than the broader hobbyist population.
- Per‑seat pricing in the $200–$800/year range, mid‑case $500/seat/year.
- Meaningful adoption among teams that pay for developer tooling and need private‑repo context.
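The bottom-up arithmetic above can be reproduced directly from the stated assumptions; the low/mid/high seat prices are the $200–$800/year range from the assumptions list, and the developer count is the cited ~20.8M professional-developer estimate.

```python
# Bottom-up TAM calculation using the assumptions above (illustrative, not a forecast).
developers = 20_800_000  # professional developers (JetBrains-based estimate cited above)
seat_prices = {"low": 200, "mid": 500, "high": 800}  # $/developer/year assumption range

tam = {case: developers * price for case, price in seat_prices.items()}
for case, value in tam.items():
    print(f"{case}: ${value / 1e9:.1f}B")
# mid case: 20.8M × $500 = $10.4B, matching the figure above
```

Note this is an upper bound on spend for this tool category; realized revenue depends on the adoption rate among teams that actually pay for developer tooling, which the assumptions list flags but does not quantify.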
Who are some of their notable competitors
- GitHub Copilot (incl. Enterprise): Widely adopted AI coding assistant with chat and enterprise features; integrates deeply with GitHub and can incorporate organizational context at scale.
- Sourcegraph Cody: Enterprise code assistant focused on codebase‑aware answers and citations across large, multi‑repo estates; strong code search and context tooling.
- Codeium: AI coding assistant with chat and code comprehension; offers enterprise plans oriented to privacy and team deployment.
- Tabnine: AI code assistant emphasizing privacy, on‑prem/self‑hosted options, and models tuned for enterprise use.
- Continue.dev (open‑source): Open‑source IDE extension that connects to local or cloud LLMs and can retrieve code context—popular with teams favoring self‑hosted workflows.