What do they actually do?

Weave connects to a team’s code repositories and automatically reads pull requests and code reviews. It uses LLMs plus custom models to estimate the effort behind each change (a “Weave Hour”), attribute how much code was written by AI vs. humans, and score review quality. These signals are rolled up into dashboards and team reports for leaders to track real engineering work, AI usage, and review practices (workweave.dev, YC profile, guide).
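
Weave’s actual models and scoring aren’t public, but the pipeline the report describes (per‑PR signals rolled up into team dashboards) has a simple shape. A minimal Python sketch of that shape, with hypothetical names and deliberately toy heuristics standing in for the LLM and custom models:

    from dataclasses import dataclass

    @dataclass
    class PRSignals:
        weave_hours: float     # estimated effort behind the change ("Weave Hour")
        ai_share: float        # fraction of the diff attributed to AI (0..1)
        review_quality: float  # review-thoroughness score (0..1)

    def estimate_effort(diff: str) -> float:
        # Toy heuristic standing in for Weave's LLM + custom-model estimator.
        return len(diff.splitlines()) / 50.0

    def attribute_ai_share(diff: str) -> float:
        # Toy placeholder; real attribution would classify the code itself.
        lines = diff.splitlines() or [""]
        flagged = sum("co-authored-by:" in line.lower() for line in lines)
        return flagged / len(lines)

    def score_review(thread: list[str]) -> float:
        # Toy placeholder: more substantive review threads score higher.
        return min(1.0, len(thread) / 10.0)

    def analyze_pr(diff: str, thread: list[str]) -> PRSignals:
        return PRSignals(estimate_effort(diff), attribute_ai_share(diff),
                         score_review(thread))

    def team_rollup(prs: list[PRSignals]) -> dict:
        # Roll per-PR signals up into the dashboard-level team view.
        n = max(len(prs), 1)
        return {
            "total_weave_hours": sum(p.weave_hours for p in prs),
            "avg_ai_share": sum(p.ai_share for p in prs) / n,
            "avg_review_quality": sum(p.review_quality for p in prs) / n,
        }

A real implementation would replace the three heuristics with model calls; the per‑PR-signals-to-team-rollup structure is the part the report actually describes.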

Teams typically connect their repos, let Weave analyze PRs/diffs/review threads, and then use the dashboards to spot who’s blocked, where reviews slow delivery, and where AI is helping or underused. Customers report using these outputs in standups and to adjust review standards and AI workflows (workweave.dev, YC profile).

For buyers who need enterprise readiness, Weave advertises SOC 2 Type I, TLS in transit, AES‑256 at rest, and hosting on Google Cloud (security). The company says hundreds of fast‑growing engineering teams use the product, including Reducto, Superpower, and PostHog, with a noticeable share of new YC startups adopting it (YC profile).

Who are their target customers?

  • Head/VP of Engineering at a fast‑growing product company: Needs an evidence‑based view of how much real engineering work is getting done, and of where AI is changing output, to guide priorities and staffing. Weave provides PR‑level effort estimates and AI vs. human attribution to inform those decisions (workweave.dev, YC profile).
  • Engineering manager or team lead: Needs to identify blocked contributors, slow review practices, and coaching opportunities. Weave surfaces review‑quality scores and team rollups that managers use in standups and process changes (guide, workweave.dev).
  • CTO/Director at a mid‑market or enterprise: Needs vendor‑grade security, cross‑team benchmarking, and metrics that justify AI tooling to execs. Weave advertises SOC 2 and is building finance‑facing ROI outputs (security, seed announcement).
  • Security, compliance, or platform lead: Needs traceability and auditability for AI‑produced code to enforce safe tooling and provenance rules. Weave attributes AI vs. human changes and stores PR/review data for analysis (workweave.dev, security).
  • Head of Finance / FP&A supporting engineering: Needs to translate engineering activity and AI impact into dollarized ROI for budgeting and procurement. Weave is building outputs that turn engineering metrics into ROI signals (guide).

How would they acquire their first 10, 50, and 100 customers?

  • First 10: Founder‑led outreach via YC/investor networks to early adopters, offering a concierge onboarding to connect repos, tune attribution, and run short pilots that validate Weave Hours and AI vs. human attribution (YC profile, seed announcement).
  • First 50: Productize the pilot into a simple self‑serve “connect your data” flow and pair it with targeted content (benchmarks, how‑to guides) and community channels; leverage early case studies and referrals to reach similar teams (workweave.dev, guide).
  • First 100: Run a hybrid PLG + enterprise motion: add sales/CS for procurement‑friendly POCs, finalize security/compliance, and productize finance‑facing ROI reports; launch integrations/marketplace listings to drive inbound and support mid‑market/enterprise deals (security, seed announcement).

What is the rough total addressable market?

Top-down context:

The global professional developer population is estimated at ~28.7M as of 2024 (Statista). Comparable tools price in the low‑ to high‑hundreds of dollars per seat per year (e.g., Pluralsight Flow ≈ $600/user/year; LinearB contracts imply mid‑hundreds per contributor) (Flow pricing, Vendr on LinearB, Waydev pricing, G2 Waydev).

Bottom-up calculation:

Seat‑based TAM: 28.7M developers × $200/yr ≈ $5.7B (low); × $425/yr ≈ $12.2B (mid); × $800/yr ≈ $23.0B (high) (Statista, Flow pricing, Vendr on LinearB).
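
Spelled out, the arithmetic behind those bands (the developer count and the three price points are the report’s own assumptions):

    # Bottom-up, seat-based TAM: developer count x assumed price per seat-year.
    DEVELOPERS = 28_700_000  # ~28.7M professional developers (Statista, 2024)

    price_bands = {"low": 200, "mid": 425, "high": 800}  # $/seat/year assumptions

    for band, price in price_bands.items():
        tam_usd = DEVELOPERS * price
        print(f"{band}: ${tam_usd / 1e9:.1f}B")
    # -> low: $5.7B, mid: $12.2B, high: $23.0B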

Assumptions:

  • Per‑engineer pricing model applies broadly to this category.
  • All addressable seats use PR‑based workflows and value AI attribution/effort analytics.
  • Competitor price bands are representative of sustainable pricing for Weave.

Who are some of their notable competitors?

  • LinearB: Repo‑based delivery and developer‑experience dashboards with workflow automation for PRs/approvals; emphasizes operational metrics and automation rather than LLM‑driven AI attribution or per‑PR effort estimates (LinearB, Weave).
  • Waydev: Manager‑facing dashboards built from commits/PRs/tickets to highlight bottlenecks, velocity, and cost views; focuses on historical delivery diagnostics, not LLM attribution of AI vs. human contributions per PR (Waydev, Weave).
  • Pluralsight Flow (GitPrime): Aggregates commit/PR/ticket data for flow and cycle‑time insights aimed at leaders; measures delivery and handoffs, not per‑PR AI provenance or a “Weave Hour” effort metric (Pluralsight Flow, Weave).
  • Code Climate Velocity: Engineering insights and reports surfacing review hotspots and collaboration trends; centers on code‑quality and collaboration metrics rather than LLM‑enhanced attribution/effort estimation (Code Climate Velocity, Weave).
  • GitClear: Developer‑focused repo analytics and PR review tooling to reduce review time and surface tech debt; overlaps on PR/review analysis but is not marketed around LLM‑based AI provenance and per‑PR effort estimates as core outputs (GitClear, Weave).