
HumanLayer

Getting AI Coding Agents to solve hard problems in complex codebases

Fall 2024 · Active · Website
Artificial Intelligence · Developer Tools

Report from 2 months ago

What do they actually do

HumanLayer ships two pieces of infrastructure for teams building AI agents and automation. First, an open‑source desktop IDE called CodeLayer that developers run locally to orchestrate AI coding agents (built around Claude Code), manage context, and run parallel sessions. It's installable via Homebrew and actively maintained on GitHub, with release builds and docs guiding setup and usage (docs, GitHub).

Second, a hosted SDK/API (with Python and JS libraries) that lets agentic programs pause at risky steps to request human input or approval, route those requests over Slack/email/Discord, and only continue once a person signs off. Developers add primitives like require_approval() or “human as a tool,” and reviewer responses are fed back into the agent’s context. The SDK integrates with common agent frameworks (LangChain, CrewAI, ControlFlow, LlamaIndex) and multiple model providers so teams can bring their own stack (product site, PyPI, GitHub).
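The gating pattern the SDK exposes can be sketched as a plain Python decorator. This is an illustrative mock, not the real library: the name `require_approval` follows the primitive described above, but the `ask` channel here is a local stub standing in for the Slack/email/Discord routing the hosted service provides.

```python
from functools import wraps

class ApprovalDenied(Exception):
    """Raised when a reviewer rejects a gated call."""

def require_approval(ask):
    """Decorator: block the wrapped call until the human-facing
    channel `ask` returns an approval verdict."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            verdict = ask(f"Approve {fn.__name__}{args}?")
            if not verdict.get("approved"):
                # Reviewer feedback is surfaced so the agent can adjust.
                raise ApprovalDenied(verdict.get("comment", "rejected"))
            return fn(*args, **kwargs)
        return wrapper
    return decorator

# Stub channel standing in for Slack/email routing.
def auto_approve(prompt):
    return {"approved": True}

@require_approval(auto_approve)
def run_migration(sql: str):
    # High-risk action that only runs once a person signs off.
    return f"executed: {sql}"

print(run_migration("ALTER TABLE users ADD COLUMN plan text"))
# → executed: ALTER TABLE users ADD COLUMN plan text
```

In the hosted product, the verdict (and any reviewer comment) is fed back into the agent's context rather than raised locally, but the control flow is the same: the risky call is blocked until a human responds.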

In practice, teams install CodeLayer and/or the SDK, mark operations that need human oversight, and route approvals to the right person. Reported early use cases include agents that draft/review outbound emails, a DevOps agent proposing DB migrations with human sign‑off on execution, and an automated newsletter pipeline that escalates to humans where needed (HN post, YC profile).

Who are their target customer(s)

  • Platform/DevOps engineers building agentic automation (deploys, DB migrations): They need to let agents propose and run high‑risk actions but require human sign‑off so migrations/rollbacks don’t break prod. Today there isn’t a simple, auditable way to route approvals to the right person and block risky calls until authorized.
  • Product teams building customer‑facing agents (support chat, outbound emails): They want agents to act autonomously but need a quick human check when the agent might mishandle a customer or send an embarrassing message. They lack a way to collect contextual approvals or corrections without breaking the user flow.
  • Security, compliance, and legal teams: They need records of who approved what and controls to prevent forbidden actions. They lack consistent approval workflows, role‑based controls, and audit trails that meet policy or regulatory needs.
  • Small engineering teams/startups shipping internal automation: They want lightweight human checks in scripts and bots without building custom routing, notifications, and gating logic. Re‑implementing approvals across Slack/email/Discord slows delivery and increases risk.
  • AI/ML teams evaluating and improving agents: They need to capture human judgments and corrections during approvals and feed them back into training/evaluation. There’s no easy, contextual pipeline to turn reviewer feedback into data.

How would they acquire their first 10, 50, and 100 customers

  • First 10: Start with hands‑on pilots for dev teams already engaging via the open‑source repo and launch threads; recruit GitHub contributors and HN/press commenters to run free short pilots and iterate on integration (GitHub, HN).
  • First 50: Publish ready‑made workflow templates (e.g., DB migrations, outbound email, support handoffs), run targeted workshops/office hours for DevOps/product teams, and convert active OSS users to paid pilots with prioritized support (SDK docs, CodeLayer docs).
  • First 100: Productize a “For Teams” offering (seats, audit logs, RBAC/escalation, contract pilots) and run a mixed motion: inbound case studies, targeted outbound to security/compliance leaders, and partnerships with agent frameworks to land multi‑team pilots (12‑factor agents/vision, YC).

What is the rough total addressable market

Top-down context:

Near‑term, the overlapping AI agents/orchestration and RPA/business automation markets are each estimated in the single‑digit billions, implying a combined immediate opportunity in the low billions, with rapid growth expected (MarketsandMarkets AI agents, Mordor RPA). Long‑term, if approvals/human‑in‑the‑loop become a horizontal layer across enterprise apps, the broader enterprise‑software spend (≈$900B in 2024) sets the upper bound context (Gartner).

Bottom-up calculation:

Focus on small/mid engineering orgs adding agentic automation and needing human approvals. If 10,000 teams adopt a paid approval/safety layer at $10k–$30k ACV, that implies a $100M–$300M near‑term serviceable TAM. This excludes broader enterprise platform embedding and assumes only teams actively deploying higher‑risk automation.

Assumptions:

  • ~10k orgs actively piloting or deploying agentic/RPA automation that touches risky actions in the next few years.
  • Average contract value of $10k–$30k for SDK/control plane + team features (seats, audit logs, RBAC).
  • Excludes vendors’ built‑in approval features; counts only buyers who prefer a separate, cross‑stack approval layer.
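The bottom‑up figure above is a straightforward multiplication of the two assumptions, sketched here as a sanity check:

```python
# Bottom-up TAM: adopting teams x average contract value.
teams = 10_000
acv_low, acv_high = 10_000, 30_000  # $10k-$30k ACV range from assumptions

tam_low = teams * acv_low    # $100M
tam_high = teams * acv_high  # $300M

print(f"${tam_low / 1e6:.0f}M-${tam_high / 1e6:.0f}M")
# → $100M-$300M
```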

Who are some of their notable competitors