
Blast

Helping enterprises build safe and compliant LLM apps/agents

Batch: Summer 2024 · Status: Active · Founded: 2024 · Website
B2B · Healthcare · Enterprise Software · AI

Report from about 2 months ago

What do they actually do

Blast builds an AI quality assurance and safety platform for enterprise conversational/LLM applications. Teams use it to generate large sets of realistic test scenarios, automatically evaluate model outputs against business rules and policies, and catch issues such as hallucinations, policy violations, and compliance risks before release. The workflow includes human review for uncertain cases, regression protections to lock in quality standards, and executive‑level reporting for audits and sign‑off (withblast.com).

The product is built for enterprise environments (VPC/on‑prem options) and focuses on getting prototypes production‑ready through continuous testing and monitoring, plus professional services that help fix issues and implement guardrails (withblast.com). The company has highlighted work with large enterprises and a Fortune 50 pilot (YC LinkedIn post, YC company page).

Who are their target customer(s)

  • Enterprise AI/ML teams at large companies: They need to harden LLM prototypes for production but worry about hallucinations and policy breaches causing brand or legal risk; they want a way to find and fix high‑risk failures before launch (withblast.com, YC).
  • Customer‑support and contact‑center leaders: Conversational systems must give correct, policy‑compliant answers in high‑stakes interactions; one wrong answer can erode trust or trigger escalations, so they need testing that reflects real customer question patterns (withblast.com).
  • Compliance, legal, and risk teams in regulated industries: They fear regulatory fines or lawsuits from incorrect or disallowed AI advice; they need auditable checks and business‑rule enforcement to prove safety and compliance (withblast.com, YC).
  • Product managers and engineers running LLM agents: The input space is large and unpredictable; they need automated scenario generation, continuous regression checks, and release gates to lock in quality standards (withblast.com).
  • Subject‑matter experts (SMEs) and reviewers: They’re overloaded with edge cases; they need the system to surface uncertain or risky cases and let one correction propagate to many similar scenarios to reduce repetitive work (withblast.com).

How would they acquire their first 10, 50, and 100 customers

  • First 10: Founder‑led, high‑touch pilots with one team per enterprise (AI/ML or contact‑center), embedding a Blast engineer to help fix issues and deliver an auditable report; use Fortune‑50 pilot and VPC/on‑prem as proof points to clear security objections (withblast.com, YC LinkedIn).
  • First 50: Package the pilot into a 4–8 week risk‑reduction offer (scope, test corpus, SME reviews, remediation plan) and hire 1–2 sales engineers per vertical to run two packaged pilots per quarter; collect public case studies and standard security artifacts to speed procurement (withblast.com).
  • First 100: Scale via SI/contact‑center partnerships, publish connectors/playbooks for common stacks, and offer a locked‑scope self‑serve PoC; drive demand with analyst briefings, vertical webinars, and a referral program tied to implementation credits.

What is the rough total addressable market

Top-down context:

Enterprise AI spend is massive and growing, with Gartner projecting roughly $1.5T in worldwide AI spending in 2025, which suggests a large ceiling for governance/safety tooling as a slice of overall AI budgets (Gartner).

Bottom-up calculation:

Combining adjacent segments—generative‑AI cybersecurity/safety (~$8.7B in 2025, to ~$35.5B by 2031), specialized GenAI safety (~$1.7B in 2024), and MLOps (~$3.4B in 2024)—supports a realistic near‑term TAM for Blast of roughly $10B–$40B over the next few years, adjusting for overlap (MarketsandMarkets, Dataintelo, PS Market Research, GMI Insights).

Assumptions:

  • Published market categories overlap; we adjust downward to avoid double counting.
  • Near‑term buyers are large enterprises with live LLM apps (especially regulated and customer‑facing), concentrating spend.
  • Regulation and production LLM adoption materially expand governance/safety budgets over 3–7 years.
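The bottom-up range above can be sketched as a quick calculation. The 25% overlap discount and the way the segments are combined into a low/high range are illustrative assumptions for this sketch, not sourced figures:

```python
# Rough bottom-up TAM sketch for Blast, using the segment sizes cited above
# (MarketsandMarkets, Dataintelo, PS Market Research, GMI Insights).
# The 25% overlap discount is an illustrative assumption, not a sourced number.

SEGMENTS_NEAR_TERM_B = {
    "genai_security_2025": 8.7,   # GenAI cybersecurity/safety, 2025
    "genai_safety_2024": 1.7,     # specialized GenAI safety, 2024
    "mlops_2024": 3.4,            # MLOps, 2024
}
GENAI_SECURITY_2031_B = 35.5      # cited 2031 projection for the largest segment

def tam_estimate(segments, future_anchor, overlap_discount=0.25):
    """Return a (low, high) TAM range in USD billions.

    low  = overlap-discounted sum of today's adjacent segments
    high = the largest segment's multi-year projection, plus the
           discounted remainder of the other segments today
    """
    total_now = sum(segments.values())
    low = total_now * (1 - overlap_discount)
    others_now = total_now - max(segments.values())
    high = future_anchor + others_now * (1 - overlap_discount)
    return low, high

low, high = tam_estimate(SEGMENTS_NEAR_TERM_B, GENAI_SECURITY_2031_B)
print(f"Rough TAM range: ${low:.0f}B-${high:.0f}B")
```

Under these assumptions the sketch lands at roughly $10B on the low end and just under $40B on the high end, consistent with the $10B–$40B range stated above.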

Who are some of their notable competitors

  • Robust Intelligence: AI risk and validation platform offering stress testing/red teaming and continuous testing for ML/LLM systems used by enterprises (site).
  • CalypsoAI: Enterprise LLM security and governance (e.g., Moderator) providing guardrails, evaluations, and controls for compliant deployments (site).
  • Credo AI: AI governance platform focused on risk, policy management, and compliance reporting across AI/LLM initiatives (site).
  • Lakera: Guardrails and threat detection for LLM apps (prompt injection, PII, policy enforcement) aimed at enterprise safety and trust (site).
  • Gantry: LLM evaluation and observability platform to define tests, score outputs, and monitor quality/regressions in production (site).