What do they actually do
Delty sells an "AI staff engineer" for enterprise engineering teams. It ingests a company’s codebase, docs, and architecture notes, then answers system‑level questions, proposes designs, and can guide or supervise code‑writing agents. The company positions Delty as complementary to IDE code generators (e.g., Copilot, Cursor) by adding organization‑specific system knowledge and architectural context to the code those tools produce (website, YC listing, company LinkedIn).
Today Delty is sold via a demo/white‑glove onboarding motion with enterprise controls. They emphasize zero‑retention LLM usage, encryption in transit and at rest, SAML/OIDC SSO, audit/compliance logging, and flexible deployment (cloud, VPC, or on‑prem), and they document data‑handling commitments on their privacy page (enterprise page, privacy). Teams typically request a demo, deploy Delty under their own security controls, ingest engineering artifacts, and then use it for system‑level answers, design guidance, onboarding, and supervising/augmenting coding agents and IDE tools (website, enterprise page, company LinkedIn).
Founder posts discuss lessons from running agents in production with real engineering orgs, indicating they’ve operated beyond prototype demos into production pilots or deployments (founder post).
Who are their target customer(s)
- Staff/principal engineer working across multiple teams: Constant ad‑hoc design questions and inconsistent implementations eat into their time, making deep design work and longer‑term initiatives hard; needs a way to scale their judgment and reduce repeated architecture mistakes.
- Engineering manager of a large product team: New hires and juniors ramp slowly because system decisions and docs are scattered or stale; needs faster, consistent design answers so teams aren’t blocked waiting on a senior person.
- CTO or head of engineering at a mid‑to‑large company: Struggles to enforce consistent technical direction across teams and worries about hidden technical debt; needs reproducible standards and an audit trail for why decisions were made.
- Platform / developer‑experience lead: Overwhelmed by bespoke tool requests and code‑standard enforcement; wants a single, reliable source to guide automated code tools and cut one‑off integrations.
- Security/compliance or SRE lead: Concerned about data leakage and auditability from AI helpers; needs private deployment options, clear access logs, and guarantees that sensitive code isn’t retained or exposed.
How would they acquire their first 10, 50, and 100 customers
- First 10: Founder‑led pilots via YC and personal networks targeting staff/principal engineers and CTOs; deploy under the customer’s security controls for one team, prove value, then convert to paid with a documented onboarding playbook and a reference account.
- First 50: Hire 1–2 enterprise reps and standardize a pilot package (security artifacts, ingestion checklist, onboarding engineer) to speed security reviews and time‑to‑value; run targeted outbound to platform leads, SREs, and EMs, and use early customer case studies and talks for credibility.
- First 100: Productize the pilot into a self‑serve “pilot kit” (deploy scripts, SSO/VPC templates, audit‑log pack), add customer success to drive multi‑team expansion, and open channel partnerships with dev‑tool vendors/consultancies; publish security artifacts to shorten procurement and use a reference‑driven playbook for org‑wide rollouts.
What is the rough total addressable market
Top-down context:
There are roughly 27 million professional software developers worldwide in 2024 (Evans Data). Enterprise AI coding assistants are priced in the ~$19–$39 per user per month range (e.g., GitHub Copilot Business and Enterprise), providing a benchmark for per‑seat pricing in this category (GitHub Copilot plans).
Bottom-up calculation:
If Delty targets mid‑to‑large enterprise developers (assume ~6–8 million globally) at $40–$80 per user per month, the TAM is approximately $2.9–$7.7B annually (6M × $40 × 12 to 8M × $80 × 12), anchored by current enterprise assistant price points (Evans Data, GitHub Copilot plans).
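A minimal back‑of‑envelope sketch of that range; the 6–8M seat counts and $40–$80 price points are the assumptions listed below, not reported figures:

```python
# Back-of-envelope TAM bounds; seat counts (in millions) and monthly per-seat
# prices are assumptions, not reported figures.
def annual_tam_billions(seats_millions: float, price_per_user_month: float) -> float:
    """Annual spend in $B for a given seat count and monthly per-seat price."""
    return seats_millions * 1e6 * price_per_user_month * 12 / 1e9

low = annual_tam_billions(6, 40)   # 6M seats at $40/user/month
high = annual_tam_billions(8, 80)  # 8M seats at $80/user/month
print(f"TAM range: ~${low:.1f}B-${high:.1f}B per year")  # ~$2.9B-$7.7B per year
```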
Assumptions:
- Roughly 20%–30% of global developers work in mid‑to‑large enterprises likely to buy org‑wide AI engineering tools.
- Willingness to pay falls around $40–$80 per user per month, anchored to Copilot Business/Enterprise at $19–$39 with a premium for system‑level, architecture‑aware capabilities.
- Pricing is primarily per‑seat; usage is organization‑scoped with enterprise security/deployment requirements.
Who are some of their notable competitors
- Sourcegraph (Cody): Enterprise AI assistant that indexes multi‑repo codebases and answers developer questions with org context; offers self‑hosted options and competes on secure, code‑aware answers and enterprise deployment models.
- GitHub Copilot (Copilot for Business/Enterprise): IDE‑first assistant and agent platform focused on code completion, PR workflows, and in‑editor help; widely adopted and priced per seat but not positioned as an org‑wide architecture decision authority.
- Cursor: AI code editor with agent workflows, repo indexing, and autonomous coding agents; overlaps on agent orchestration but remains editor‑centric rather than a cross‑org architecture advisor.
- Glean: Enterprise search + AI assistant across docs, code, and internal knowledge; competes on reliable org‑wide answers and governance, but centers on knowledge retrieval more than coordinating code‑writing agents.
- Stack Overflow for Teams / Stack Internal: Knowledge capture and Q&A platform that centralizes verified engineering knowledge for people and AI tools; competes on trusted, consistent answers but isn’t focused on code generation or agent orchestration.