
Laminar

Open-source all-in-one platform for engineering AI products

Summer 2024 · Active · Website
AIOps · Artificial Intelligence · Developer Tools · SaaS · B2B
Report from 29 days ago

What do they actually do?

Laminar is an open‑source observability and evaluation platform for LLM and agent‑based applications. Its TypeScript and Python SDKs instrument LLM calls, tool use, and custom functions (with integrations for providers and frameworks such as OpenAI, Anthropic, Gemini, LangChain, Vercel AI SDK, Browser Use, and Playwright) and stream traces to a UI for real‑time inspection (docs, GitHub, installation docs).
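The instrumentation model described above, where each LLM call or custom function becomes a span in a trace, can be sketched with a minimal decorator. This is an illustrative pattern only, not Laminar's actual SDK API; the `observe` decorator, `TRACE` store, and field names are all assumptions:

```python
import functools
import time
import uuid

# Illustrative in-memory trace store; a real SDK would stream spans
# to a collector/UI instead of appending to a list.
TRACE: list[dict] = []


def observe(func):
    """Hypothetical decorator: records each call as a span with timing,
    inputs, and output -- the core idea behind LLM instrumentation SDKs."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        span = {
            "span_id": uuid.uuid4().hex,
            "name": func.__name__,
            "input": {"args": args, "kwargs": kwargs},
            "start": time.time(),
        }
        try:
            result = func(*args, **kwargs)
            span["output"] = result
            return result
        finally:
            span["end"] = time.time()
            TRACE.append(span)
    return wrapper


@observe
def call_llm(prompt: str) -> str:
    # Stand-in for a provider call (OpenAI, Anthropic, etc.).
    return f"echo: {prompt}"


call_llm("hello")
```

A decorator keeps instrumentation out of application logic, which is why SDKs in this space typically expose exactly this shape of API.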

For browser‑based agents, Laminar records the browser window/video and synchronizes it with the agent trace so engineers can watch exactly what the agent “saw” at each step. It also provides evaluations and labeling (including building datasets directly from production traces), full‑text search, a SQL editor/API for querying traces and evals, custom dashboards, and a playground to rerun spans and compare prompts/models without changing code (browser‑agent observability, evaluations, datasets, SQL editor, custom dashboards, overview docs). The project is Apache‑2.0 licensed and self‑hostable, with a managed cloud option at laminar.sh for teams that don’t want to run it themselves (GitHub, hosting options, site).
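The trace‑to‑dataset workflow mentioned above (building eval datasets directly from production traces) reduces to filtering spans by some quality signal. A minimal sketch, with the function name and the `score`/`input`/`output` field names all assumed for illustration:

```python
# Hypothetical sketch: select low-scoring production spans as eval examples.
# Field names ("score", "input", "output") are illustrative assumptions,
# not Laminar's actual trace schema.

def build_eval_dataset(traces: list[dict], max_score: float = 0.5) -> list[dict]:
    """Keep spans whose quality score is at or below the threshold,
    reshaping each into an eval example for later labeling/regression tests."""
    return [
        {"input": t["input"], "expected": t["output"], "label": "needs_review"}
        for t in traces
        if t.get("score", 1.0) <= max_score
    ]


traces = [
    {"input": "q1", "output": "a1", "score": 0.2},  # bad output -> kept
    {"input": "q2", "output": "a2", "score": 0.9},  # good output -> dropped
]
dataset = build_eval_dataset(traces)
```

The point of the pattern is that failures observed in production become repeatable test cases, rather than one-off log entries.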

Who are their target customer(s)?

  • Teams building browser‑based agents (e.g., using Browser Use or similar): They need to reliably reproduce and debug web interactions by seeing what the agent saw and did, but current tooling makes reproducing multi‑step browser behavior slow and guess‑prone (browser‑agent observability docs, YC post).
  • ML engineers and model owners running LLM features in production: They struggle to detect regressions and measure prompt/model changes because failures are scattered across logs and are hard to convert into evaluated datasets and repeatable tests (evaluations, datasets docs).
  • Platform/SRE engineers operating LLM/agent infrastructure: They need high‑throughput, reliable instrumentation and a choice of self‑hosting or managed service, but setting up and scaling tracing pipelines is time‑consuming (tracing intro, hosting options).
  • Product managers and QA teams responsible for content quality and safety: They want a fast path from a bad output to labeled examples and evaluations so they can prioritize fixes, but dataset creation and human labeling workflows are manual and fragmented today (datasets, manual evals docs).
  • Engineering teams stitching together multiple LLM providers and agent frameworks: They spend time writing adapters and custom logging to get consistent traces across tools instead of focusing on product work (GitHub integrations, installation docs).

How would they acquire their first 10, 50, and 100 customers?

  • First 10: Go direct to existing OSS users and browser‑agent teams (active GitHub contributors and issue filers, Index users, and Browser Use adopters) and run short, hands‑on pilots that instrument agents and capture session replays, producing immediate debugging value and case studies (lmnr repo, Index repo, ClickHouse Browser Use write‑up).
  • First 50: Scale developer‑first onboarding with one‑line SDK setup, quickstarts, prebuilt SQL dashboards, and templates for turning traces into eval datasets; run webinars and RFC‑style walkthroughs to turn users into references and reproducible case studies (getting started, SQL/datasets docs).
  • First 100: Formalize integrations and co‑marketing with agent/framework maintainers (e.g., Browser Use, LangChain, Playwright), list hosted trials in partner channels and marketplaces, and use a small customer‑success motion with SLAs and CI/eval integrations to convert platform/SRE and ML teams to paid hosted plans (installation docs, hosting options).

What is the rough total addressable market?

Top-down context:

Laminar sits at the intersection of observability/APM and MLOps. Public estimates place APM at around $8.4B and MLOps at around $3B near‑term, implying a combined top line of roughly $10–12B today (APM report, MLOps report).

Bottom-up calculation:

Treat APM as the umbrella and add MLOps (to avoid double‑counting), then assume AI/agent‑specific observability is 10–30% of that combined spend, giving a $1.1–3.4B SAM; an early SOM of 0.5–3% of SAM implies low tens of millions in ARR if converted successfully (observability/tools, APM, MLOps, AI‑in‑observability reports).

Assumptions:

  • APM is the broader umbrella; add MLOps as an adjacent market without double‑counting.
  • AI/agent observability grows to 10–30% of combined APM+MLOps spend over time.
  • Early specialist platform captures ~0.5–3% SOM in the first 3–5 years.
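Plugging the assumptions above into the arithmetic reproduces the stated ranges; a quick sanity‑check sketch (all figures are the report's own estimates, not new data):

```python
# Sanity check of the bottom-up TAM/SAM/SOM ranges stated above.
apm = 8.4    # $B, APM market estimate
mlops = 3.0  # $B, MLOps market estimate

combined = apm + mlops                                  # ~$11.4B top line
sam_low, sam_high = combined * 0.10, combined * 0.30    # 10-30% of combined spend
som_low, som_high = sam_low * 0.005, sam_high * 0.03    # 0.5-3% of SAM

print(f"SAM: ${sam_low:.1f}B-${sam_high:.1f}B")         # SAM: $1.1B-$3.4B
print(f"SOM: ${som_low * 1000:.0f}M-${som_high * 1000:.0f}M")
```

The SOM range works out to roughly $6M–$103M ARR, which is consistent with the "low tens of millions" framing in the report.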

Who are some of their notable competitors?

  • Langfuse: Open‑source LLM observability platform with tracing, prompt management, evaluations, and analytics, commonly used by teams building LLM apps (site).
  • LangSmith (by LangChain): Managed tracing, evaluation, and dataset tooling for LangChain applications; tightly integrated with the LangChain ecosystem (site).
  • Helicone: LLM observability proxy that logs requests/responses, costs, and performance, with hosted dashboards and analytics (site).
  • HoneyHive: LLM evaluation and experimentation platform with monitoring and prompt/version management for AI features in production (site).
  • Arize Phoenix: Open‑source library and UI from Arize for evaluating and monitoring LLM/RAG systems, including traces and diagnostics (site).