What do they actually do
s2.dev provides a hosted API for append-only streams that are durable and replayable in real time. Apps write events to a stream, connected clients receive updates instantly, and after a reconnect or crash, clients can resume from their exact position and replay history. s2 runs the storage, fanout, and connection management so teams don’t have to build and operate custom WebSocket/SSE infrastructure plus a history store themselves (homepage).
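The core pattern s2 manages — append returns a position, and clients replay from their last position after a crash — can be sketched with a toy in-process model. This is illustrative only: the class and method names below are invented for the sketch, not s2's actual hosted API.

```python
# Toy model of a durable, replayable append-only stream.
# Illustrative only -- s2's real product is a hosted service;
# these names are invented for the sketch.
class AppendLog:
    def __init__(self):
        self._records = []  # ordered, durable history

    def append(self, event) -> int:
        """Append an event and return its sequence number (a resume position)."""
        self._records.append(event)
        return len(self._records) - 1

    def read_from(self, position: int):
        """Replay history from a position onward -- how a client
        catches up after a reconnect or crash."""
        return self._records[position:]


log = AppendLog()
log.append({"op": "insert", "text": "hello"})
last_seen = log.append({"op": "insert", "text": " world"})

# After a reconnect, the client replays from its last acknowledged position
# instead of losing history:
missed = log.read_from(last_seen)
```

The key design point is that every append yields a stable position, so "resume after crash" reduces to a single read from that position rather than bespoke session-recovery code.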
The service is aimed at app-level use cases like collaborative editors (e.g., Yjs), chat/activity feeds, agent sessions/logs, and real-time observability, with demos showing per-session streams and recovery/replay patterns (Yjs integration demo, multiplayer terminal write-up, agent sessions).
Who are their target customer(s)
- Builders of collaborative apps (editors, whiteboards, multiplayer UIs): They need each user’s changes to be saved and replayable and struggle to keep everyone in sync after crashes/reconnects, often wiring fragile custom session-history code (Yjs demo, multiplayer terminal).
- Small product teams adding live features (chat, presence, activity feeds): Without a streaming platform, they patch Postgres/Redis to act like streams and hit scale, reliability, and complexity limits (YC launch note).
- AI/agent platform engineers: They want a durable stream per agent/session for logs, transcripts, or state, but spend time on reconnection logic and risk losing context on crashes (agent sessions blog).
- Observability/infra teams building low-latency dashboards and alerts: They need durable, real-time event feeds and incur high ops overhead to maintain custom ingestion and socket infrastructure (observability demo mention).
- Platform/ops teams responsible for WebSocket/SSE endpoints: They want an easier, serverless alternative to running connection management and many independent streams themselves (s2 homepage).
How would they acquire their first 10, 50, and 100 customers
- First 10: Run targeted 4–6 week pilots with developers from s2’s demos and blog posts; have s2 engineers do the integration, validate reconnection/durability, and deliver a replayable-history demo they can show internally (Yjs demo, s2-term write-up).
- First 50: Publish copy‑paste SDK examples and one‑click demos for the top three use cases (collab editors, agent sessions, observability), do focused outreach in relevant OSS communities, and sponsor/run hackathons where teams must build with s2; offer credits to winners (agent sessions, YC note).
- First 100: Productize onboarding (self‑serve signup, sample apps, and Postgres/Redis migration guides), turn pilots into short case studies, list on marketplaces/newsletters, and add a DevRel hire plus a small SDR motion targeting platform/ops teams; pair a modest paid trial with fast paid support (homepage, observability demo mention).
What is the rough total addressable market
Top-down context:
Using market estimates for streaming analytics (~$27.8B in 2024, ~$35.05B in 2025) and developer tools (~$6.41B in 2025) yields a combined 2025 reference of ~$41.5B; taking a 5–15% slice for app‑level real‑time streams implies a TAM of ~$2.1B–$6.2B (Fortune Business Insights, Mordor Intelligence).
Bottom-up calculation:
With ~47M global developers, assume 5% build apps needing durable per‑session streams (~2.35M devs). At ~5 engineers per team (~470k orgs) and 5% adoption (~23.5k customers) at ~$499/mo (~$6k/yr) SMB ARPU, the near‑term serviceable opportunity is ~$141M ARR, with enterprise deals expanding upside (SlashData, Stream pricing, Ably, Pusher).
Assumptions:
- Only 5–15% of the combined streaming analytics + dev tools markets map to app‑level real‑time streams.
- 5% of developers work on relevant use cases; 5% of those orgs adopt a hosted API; SMB ARPU ~$6k/yr.
- Enterprise adoption and ARPU could materially increase revenue beyond the SMB-oriented bottom‑up estimate.
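The top-down and bottom-up estimates above can be reproduced from the stated assumptions with a short script (all inputs are the memo's own figures, not independent data):

```python
# Top-down: 2025 streaming analytics + 2025 developer tools, then a 5-15% slice.
combined_b = 35.05 + 6.41                       # ~$41.5B combined reference
tam_low_b = combined_b * 0.05                   # ~$2.1B
tam_high_b = combined_b * 0.15                  # ~$6.2B
print(f"Top-down TAM: ${tam_low_b:.1f}B - ${tam_high_b:.1f}B")

# Bottom-up: developers -> relevant teams -> paying customers -> ARR.
devs = 47_000_000
relevant_devs = devs * 0.05                     # 5% build relevant apps
orgs = relevant_devs / 5                        # ~5 engineers per team
customers = orgs * 0.05                         # 5% adopt a hosted API
arr = customers * 6_000                         # ~$6k/yr SMB ARPU
print(f"Bottom-up SAM: ~${arr / 1e6:.0f}M ARR across ~{customers:,.0f} customers")
```

Running it recovers the figures cited above (~$2.1B–$6.2B top-down, ~$141M ARR bottom-up), so the two estimates differ mainly in the adoption and slice percentages, which dominate either calculation.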
Who are some of their notable competitors
- Ably: Hosted realtime pub/sub for global WebSocket delivery with presence and short‑term history/rewind; strong delivery layer but teams often add separate durable storage and replay for long‑term session history (pub/sub, pricing/history).
- Pusher Channels: Managed WebSocket/pub‑sub service with presence and simple channel tooling; easy for chat/cursors, but lacks built‑in durable, replayable stream storage for long‑term history (overview, message history guidance).
- Confluent Cloud / Apache Kafka: Durable streaming platform with persistence and replay; powerful but operationally heavier and optimized for high‑throughput pipelines, not many small per‑user/session streams (Confluent Cloud, Kafka tradeoffs).
- Liveblocks: Collaboration service (CRDTs/Yjs) for presence, multiplayer cursors, and document history; great for editors but opinionated toward document collaboration rather than a generic per‑session append‑only stream (product/docs, Yjs/history).
- Firebase (Realtime Database / Firestore): Client-accessible realtime databases with offline caching and sync; convenient for realtime views, but not designed as an infinite, durable, replayable per‑session stream store (Firestore offline, Realtime Database overview).