
CTGT

CTGT safeguards against reputational risk from GenAI across your org

Fall 2024 · Active · Website
Artificial Intelligence · Deep Learning · SaaS · B2B · Enterprise


What do they actually do?

CTGT sells an enterprise platform that plugs into a company’s existing LLMs or custom models to monitor outputs, enforce policy, and automatically fix or block unwanted behavior like hallucinations, biased answers, or disclosure violations before the response reaches users. It supports API-based and direct model integrations, offers on‑prem deployment, and creates an auditable trail for compliance teams. They publish SOC 2 materials and position security and data residency as core features of the product CTGT site.

Today, CTGT focuses on regulated enterprises (finance, insurance, healthcare, defense) and reports early production pilots, including work with large financial institutions, an insurer, and Fortune 10 brands. Media coverage and company press note paid pilots and a recent seed round to scale commercialization TechCrunch, GlobeNewswire, CTGT site. They also publish example results, like cutting certain retraining workflows from hours to minutes and shrinking on‑device model footprints with lower latency, though these are company-reported benchmarks rather than third‑party certifications CTGT site.
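The runtime flow described above — intercept a model response, check it against policy, then pass, remediate, or block it before it reaches the user, logging everything for auditors — can be sketched as a simple gate. This is a hypothetical illustration of the pattern, not CTGT's actual API; the names (`check_policy`, `enforce`, `AuditRecord`) and the banned-phrase detector are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class AuditRecord:
    """One auditable entry per model call: what came in, what went out, and why."""
    prompt: str
    raw_response: str
    violations: list
    action: str  # "pass" or "remediated"

def check_policy(text: str) -> list:
    """Toy detector: flag responses containing banned phrases.
    A real system would use classifiers, not string matching."""
    banned = ["guaranteed returns", "medical diagnosis"]
    return [p for p in banned if p in text]

def enforce(prompt: str, model_call, audit_log: list) -> str:
    """Gate a model response: pass it through or rewrite it pre-delivery."""
    response = model_call(prompt)
    violations = check_policy(response)
    if not violations:
        audit_log.append(AuditRecord(prompt, response, [], "pass"))
        return response
    # Toy remediation: redact flagged phrases; a production system
    # might re-generate the answer or block it outright.
    remediated = response
    for phrase in violations:
        remediated = remediated.replace(phrase, "[removed per policy]")
    audit_log.append(AuditRecord(prompt, response, violations, "remediated"))
    return remediated
```

The key design point the product description implies: enforcement sits between the model and the user (not as an offline evaluation pass), and every decision lands in an audit trail that compliance teams can replay.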

Who are their target customers?

  • Compliance officer at a bank or insurer: Must prevent LLM outputs that violate regulations or internal policy and produce auditor‑ready records. Manual review is slow and expensive; they need real‑time enforcement with logs that map to existing review processes CTGT — finance.
  • Head of ML / ML engineer in an enterprise AI team: Models can hallucinate or drift, and fixing behavior often means brittle prompts or costly retraining. They want policy‑level controls and remediation that reduce retrains and allow faster iteration CTGT site.
  • Product manager for a customer‑facing assistant (telecom, retail, banking): A single wrong answer can trigger complaints and force extra legal checks, slowing launches. They need runtime governance to keep answers on‑brand and compliant so features ship on time TechCrunch, GlobeNewswire.
  • Security / risk officer in healthcare or defense: Cannot send sensitive data to public clouds; require on‑premise deployments, strict access controls, and comprehensive auditability before approving any LLM workflow CTGT site.
  • CTO or infrastructure lead focused on cost and latency: Needs to reduce inference cost and latency and avoid frequent retraining while maintaining governance. Looks for ways to shrink model footprints and speed updates under compliance controls CTGT site.

How would they acquire their first 10, 50, and 100 customers?

  • First 10: High‑touch, founder‑led sales into a bank, insurer, telecom mobile division, and a healthcare provider with 4–8 week paid pilots on one risky workflow; deliver on‑prem or private deployment, real‑time remediation, and auditor‑ready logs to convert pilots to referenceable deals CTGT site, press.
  • First 50: Replicate vertical playbooks (banking, insurance, healthcare, telecom) with prebuilt policy templates and standardized onboarding; convert early wins into case studies to shorten procurement and legal review cycles CTGT — finance.
  • First 100: Layer channels and partnerships (consultancies, ISVs, cloud/model providers) and list integrations; offer productized packages with enterprise SLAs and on‑prem options to reduce sales engineering per deal TechCrunch, CTGT site.

What is the rough total addressable market?

Top-down context:

Estimates for AI governance software today range from roughly $200–750M in 2024 with fast growth; analysts expect ~30%+ CAGR through 2030 as enterprises operationalize AI oversight IMARC, GMI, Forrester. Adjacent AI model risk management budgets are already in the low billions (≈$5–6B in the mid‑2020s), indicating substantial spend for runtime governance in regulated sectors Grand View Research, Yahoo Finance.

Bottom-up calculation:

If CTGT targets ~2,000 large BFSI/insurance/healthcare/telecom/defense enterprises actively deploying LLMs in the next few years at an average $150K ACV (between the published $50K Standard tier and typical enterprise add‑ons/on‑prem), the near‑term serviceable market is ~$300M. Expanding to ~10,000 global enterprises yields ~$1.5B potential over time CTGT pricing.

Assumptions:

  • Average enterprise ACV ≈ $150K based on public tiers and likely enterprise features (on‑prem, SLAs, audit tooling).
  • ~2,000 near‑term target enterprises with regulated LLM use; ~10,000 globally over time as adoption matures.
  • Budgets come from AI governance/model risk lines rather than net‑new spend; growth aligns with 30%+ governance CAGR.
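The bottom-up sizing above is straightforward multiplication, made explicit here as a back-of-the-envelope check. The enterprise counts and the $150K blended ACV are the report's stated assumptions, not independently verified figures.

```python
# Bottom-up TAM check using the assumptions stated above.
avg_acv = 150_000        # blended ACV: between $50K Standard tier and enterprise add-ons

near_term_accounts = 2_000    # regulated enterprises deploying LLMs near-term
long_term_accounts = 10_000   # global enterprises as adoption matures

near_term_sam = near_term_accounts * avg_acv   # serviceable market today
long_term_tam = long_term_accounts * avg_acv   # longer-term potential

print(f"Near-term: ${near_term_sam / 1e6:.0f}M")   # ≈ $300M
print(f"Long-term: ${long_term_tam / 1e9:.1f}B")   # ≈ $1.5B
```

Note that both outputs are linear in the ACV assumption: halving the blended ACV to $75K halves the market estimates accordingly.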

Who are some of their notable competitors?

  • Fiddler AI: AI observability vendor with Guardrails that moderates prompts/responses and supports VPC/air‑gapped runs. Overlaps on detecting hallucinations/policy violations; Fiddler’s core is observability/explainability vs. CTGT’s emphasis on in‑runtime policy enforcement and remediation Fiddler.
  • Datadog (LLM Observability): Monitoring/observability suite with LLM tracing and metrics tied to infra. Strong for monitoring and alerting; less focused on automated, policy‑level rewriting of outputs before delivery.
  • Arize AI: Evaluation and observability for LLMs and ML models used to debug and measure behavior in dev/prod. Emphasizes traces/evals over acting as a runtime enforcement layer that remediates responses pre‑user.
  • Guardrails AI: Runtime guardrails and PII/leak prevention with deploy‑in‑VPC options. Similar protection layer; positions more as managed guardrail orchestration for agent reliability and sensitive‑data prevention.
  • Dynamo AI (DynamoGuard): Enterprise guardrails and observability with human‑in‑the‑loop and synthetic‑data testing. Overlaps on guardrails/auditability; leans on HITL and testing vs. CTGT’s in‑model policy enforcement and auto‑remediation.