
Omnara

The Command Center for AI Agents

YC Summer 2025 · Active · AIOps, Artificial Intelligence, AI Assistant
Report from 19 days ago

What do they actually do

Omnara lets developers run a small CLI wrapper around terminal-based AI agents (starting with Claude Code) so the same session mirrors to a cloud-backed web dashboard and mobile apps. From there, you can watch agent messages and terminal output in real time and keep working away from your desk (GitHub repo, YC page, iOS app).

When an agent pauses for input or proposes changes, Omnara sends a push notification; you can inspect logs or diffs and approve, modify, or reject changes from your phone or browser, with edits syncing back to the terminal session. They’ve focused first on Claude Code but show example integrations and flags for supporting other agent CLIs and human-in-the-loop workflow nodes (e.g., n8n) (YC page, GitHub repo).
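The approve/modify/reject loop described above can be sketched generically. All names below (`notify`, `wait_for_decision`, `run_agent_step`) are hypothetical stand-ins for illustration, not Omnara's actual API:

```python
# Generic human-in-the-loop approval gate, as described above.
# Every name here is a hypothetical stand-in, NOT Omnara's actual API.
import queue

decisions = queue.Queue()  # decisions arriving from phone/browser

def notify(message):
    """Stand-in for a push notification to the user's devices."""
    print(f"[push] {message}")

def wait_for_decision():
    """Block the agent until the human responds."""
    return decisions.get()  # ("approve" | "modify" | "reject", payload)

def run_agent_step(proposed_diff):
    notify(f"Agent proposes change: {proposed_diff}")
    action, payload = wait_for_decision()
    if action == "approve":
        return proposed_diff   # apply the change as proposed
    if action == "modify":
        return payload         # apply the human's edited version
    raise RuntimeError("change rejected by reviewer")

# Simulate a human tapping "approve" on mobile:
decisions.put(("approve", None))
applied = run_agent_step("+ print('hello')")
```

The same shape maps onto a workflow node (e.g., an n8n wait step): the flow blocks on a queue or webhook until a cross-device approval arrives, then resumes or aborts.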

Pricing is public: a free tier (up to 10 sessions/month), a Pro plan ($20/month), and an Enterprise tier that adds team features, notification escalation, custom integrations, and a 99.9% uptime SLA (pricing).

Who are their target customer(s)

  • Terminal-first developer using Claude Code or similar agents: They have to babysit a terminal and wait for agent prompts, losing time when away from their desk. They want to respond and continue sessions from web or mobile.
  • Automation engineer wiring agents into workflows (e.g., n8n): They need a simple way to pause flows for human reviews and approvals across devices. Existing tooling doesn’t provide a clean, cross-device human-in-the-loop step.
  • Small dev team doing code changes via agents: They worry about incorrect edits and lack easy, remote access to logs/diffs and approvals. They need a lightweight review and approval gate for agent-generated changes.
  • Platform/ops engineer running production agent jobs: They need uptime guarantees, escalation paths, access controls, and audit trails to allow agents to touch real systems. Current ad-hoc setups don’t meet reliability and compliance needs.
  • Engineering manager coordinating agent-driven work: They lack a single place to see what agents did, who approved changes, and whether outcomes met expectations. They need centralized visibility and approval trails.

How would they acquire their first 10, 50, and 100 customers

  • First 10: Personally invite active Claude Code users and contributors from Omnara’s GitHub and YC launch list; offer free Pro access and direct onboarding to validate mobile handoff and approvals (GitHub, YC page).
  • First 50: Publish short demos/how‑tos (including the n8n human‑in‑the‑loop example) and share across Hacker News, GitHub Discussions, and Claude/Anthropic communities; run live demos to convert watchers to users (GitHub).
  • First 100: Target small teams and platform owners with case studies and simple SDKs/integrations for CI/CD and workflow tools; offer time‑boxed Enterprise pilots with SLAs/audit features to ops and engineering managers (pricing, YC page).

What is the rough total addressable market

Top-down context:

There are about 27 million software developers worldwide in 2024 (Evans Data). 62% say they currently use AI tools in their development process, indicating broad adoption of AI-assisted workflows (Stack Overflow 2024 survey). If 5–10% of developers need to monitor or approve long‑running agents, that’s roughly 1.35–2.7 million potential seats; at $20/user/month ($240/year), TAM is about $324M–$648M ARR (pricing).
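The top-down arithmetic can be checked directly; the 27M developer count and the 5–10% oversight share are the report's stated assumptions, not measured data:

```python
# Top-down TAM check using the figures stated above (assumptions, not data).
developers = 27_000_000            # worldwide developers, 2024 (Evans Data)
oversight_share = (0.05, 0.10)     # share needing agent oversight (assumed)
price_per_year = 20 * 12           # Pro plan: $20/user/month

seats = [developers * s for s in oversight_share]
tam = [s * price_per_year for s in seats]
print(f"Seats: {seats[0]:,.0f}-{seats[1]:,.0f}")            # 1,350,000-2,700,000
print(f"TAM:   ${tam[0]/1e6:.0f}M-${tam[1]/1e6:.0f}M ARR")  # $324M-$648M ARR
```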

Bottom-up calculation:

Assume 150k developers actively run terminal-based agent CLIs (e.g., Claude Code and similar) and 20–30% convert to paid at $20/user/month, yielding 30k–45k seats and ~$7.2M–$10.8M ARR initially; broader adoption across automation teams and small orgs would expand this (pricing).
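The bottom-up version follows the same pattern, again with the 150k active-user base and 20–30% conversion taken from the assumptions above:

```python
# Bottom-up ARR check with the assumed numbers from the paragraph above.
active_cli_users = 150_000         # assumed terminal-agent CLI users today
paid_conversion = (0.20, 0.30)     # assumed share converting to paid
price_per_year = 20 * 12           # Pro plan: $20/user/month

seats = [active_cli_users * c for c in paid_conversion]  # 30k-45k seats
arr = [s * price_per_year for s in seats]                # $7.2M-$10.8M ARR
print(f"ARR: ${arr[0]/1e6:.1f}M-${arr[1]/1e6:.1f}M")
```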

Assumptions:

  • Share of developers who need agent oversight (human-in-the-loop, long-running sessions) is 5–10% of global devs.
  • Price point uses Omnara’s Pro plan at $20/user/month ($240/year) (pricing).
  • Early terminal-based agent users (Claude Code and similar) total ~100k–200k globally today, with 20–30% willing to pay for cross-device control and approvals.

Who are some of their notable competitors

  • Langfuse: Open-source LLM observability and evaluation (tracing, prompts, feedback). Competes on monitoring and quality workflows rather than multi‑device session mirroring.
  • LangSmith (LangChain): Tracing, evaluation, and monitoring for LangChain apps. Strong for developers building with LangChain; less focused on terminal session handoff across devices.
  • Humanloop: Observability, monitoring/alerts, and human‑in‑the‑loop review for LLM apps and agents. Targets production oversight and debugging workflows.
  • AgentOps: Testing and monitoring for AI agents with a focus on reliability and behavioral analysis. Overlaps on agent operations and observability.
  • CrewAI (Studio): Agent framework and UI for orchestrating multi‑agent “crews.” Provides a control surface for agents, though with a different focus than cross‑device terminal session control.