
Continuous learning for AI agents
Lemma provides a hosted tool that connects to your production AI agents, watches real user traffic and outcomes, flags failures or drift, and pinpoints the exact step that broke. It then runs structured experiments (e.g., prompt/template variants), analyzes the results, and proposes prompt changes you can apply via API or have Lemma open as a pull request in your repo (https://www.uselemma.ai/; https://www.ycombinator.com/companies/uselemma).
The product is publicly available with demos and a free trial, and it is being marketed to engineering teams shipping customer‑facing AI features. Lemma is listed as a YC Fall 2025 company and appears to be in early commercial rollout, focused on pilots rather than broad self‑serve plans (https://www.uselemma.ai/; https://www.ycombinator.com/companies/uselemma).
Top-down context:
Lemma sits in AI/ML monitoring, observability, and continuous‑improvement tooling. Public MLOps market estimates range from the low billions of dollars today to the mid‑tens of billions by 2030, suggesting a narrow TAM of roughly $2–6B and an expanded TAM up to the low tens of billions as scopes broaden (Fortune Business Insights; Grand View Research). Broader observability and generative‑AI software markets are larger but overlap with MLOps and should not be double‑counted (MarketsandMarkets; Grand View Research).
Bottom-up calculation:
Example framing: 40k–120k teams globally running production LLM agents over the next 5–7 years, with an average annual spend of ~$50k on monitoring/experimentation and continuous learning, implies ~$2B–$6B in annual spend addressable by vendors like Lemma.
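As a rough cross‑check, the sketch below simply multiplies the assumed team count by the assumed per‑team spend to reproduce the ~$2B–$6B range; the 40k–120k team counts and the ~$50k annual spend are the stated assumptions above, not measured data.

```python
# Back-of-the-envelope bottom-up TAM check for the range cited above.
# All inputs are the stated assumptions, not measured figures.

def bottom_up_tam(teams: int, avg_annual_spend_usd: float) -> float:
    """Annual addressable spend = number of teams x average spend per team."""
    return teams * avg_annual_spend_usd

AVG_SPEND = 50_000  # assumed ~$50k/yr per team on monitoring/experimentation

low = bottom_up_tam(40_000, AVG_SPEND)    # conservative team count
high = bottom_up_tam(120_000, AVG_SPEND)  # optimistic team count

print(f"Low estimate:  ${low / 1e9:.1f}B per year")   # $2.0B
print(f"High estimate: ${high / 1e9:.1f}B per year")  # $6.0B
```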
Assumptions: