What do they actually do
TruthSystems makes Charter, a governance platform with a browser extension that monitors how lawyers use generative‑AI tools, blocks prompts that violate firm policy in real time, and writes immutable audit logs for compliance teams to review. Early go‑to‑market is focused on law firms and legal teams, with positioning that also speaks to enterprise and academic users who need to control AI use across vendors (site; Law.com; GeekLaw interview).
How it works: admins define firm rules (what can/can’t be pasted into AI tools, which tools are allowed, client‑specific restrictions). The extension runs at the point of use across tools like ChatGPT, Copilot‑style assistants, and third‑party legal AI. If a prompt violates policy—such as including privileged facts or using an unapproved tool—the extension blocks submission, shows a warning, and records an audit entry; compliance teams get dashboards to spot issues and tune policies (site; Law.com; Artificial Lawyer).
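Charter's implementation is proprietary, but a minimal sketch of the point-of-use check described above could look like the following. All type names, pattern rules, and tool identifiers are illustrative assumptions, not the product's actual API.

```typescript
// Hypothetical shapes; Charter's real policy model and APIs are not public.
type Verdict = "allow" | "block";

interface FirmPolicy {
  allowedTools: string[];    // approved AI vendors
  blockedPatterns: RegExp[]; // e.g. client codenames, privilege markers
}

interface AuditEntry {
  timestamp: string;
  tool: string;
  verdict: Verdict;
  reason?: string;
}

// Evaluate a prompt at the point of use, before it is submitted to an AI tool.
function checkPrompt(policy: FirmPolicy, tool: string, prompt: string): AuditEntry {
  const base = { timestamp: new Date().toISOString(), tool };

  if (!policy.allowedTools.includes(tool)) {
    return { ...base, verdict: "block", reason: "unapproved tool" };
  }
  const hit = policy.blockedPatterns.find((p) => p.test(prompt));
  if (hit) {
    return { ...base, verdict: "block", reason: `matched policy pattern: ${hit.source}` };
  }
  return { ...base, verdict: "allow" };
}

// Example: a prompt sent to an unapproved tool is blocked and an audit entry is produced.
const policy: FirmPolicy = {
  allowedTools: ["approved-legal-ai"],
  blockedPatterns: [/project\s+nightingale/i], // fictional client codename
};
console.log(checkPrompt(policy, "chatgpt", "Summarize Project Nightingale depositions"));
// -> { timestamp: "...", tool: "chatgpt", verdict: "block", reason: "unapproved tool" }
```

In practice the extension would run a check like this on form submission inside each AI tool's page, then ship the resulting audit entries to the backend console for the compliance dashboards described above.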
Customers deploy a browser extension plus a backend console for policy configuration and logs. Enterprise options include SSO, role‑based access, and single‑tenant or on‑prem deployments, along with SOC 2–style controls that align with law‑firm security requirements (site; security portal).
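The deployment details are likewise not public; as a rough sketch under assumed field names, the console-side configuration for the enterprise options listed above (SSO, role-based access, tenancy, audit retention) might be modeled along these lines.

```typescript
// Hypothetical console-side configuration; every field name here is illustrative only.
interface DeploymentConfig {
  tenancy: "multi-tenant" | "single-tenant" | "on-prem";
  sso: { provider: "saml" | "oidc"; entityId: string };
  roles: Record<string, Array<"edit-policies" | "view-audit-logs" | "export-logs">>;
  auditLog: { retentionDays: number; immutable: true }; // append-only, per the compliance positioning
}

const exampleConfig: DeploymentConfig = {
  tenancy: "single-tenant",
  sso: { provider: "saml", entityId: "https://firm.example.com/sso" },
  roles: {
    "compliance-admin": ["edit-policies", "view-audit-logs", "export-logs"],
    "practice-lead": ["view-audit-logs"],
  },
  auditLog: { retentionDays: 2555, immutable: true }, // ~7 years, a common legal retention horizon
};
```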
Who are their target customer(s)
- Partners and practice‑group leaders at large law firms: They must protect client confidentiality and worry lawyers will paste privileged facts into public AI tools; they need immediate blocking and evidence they can show to clients or regulators (site; Law.com).
- Corporate legal and compliance teams: They need to prevent employees from sending trade secrets or drafts to unapproved AI vendors and to produce auditable records showing company AI use is controlled.
- InfoSec/risk teams in regulated industries (finance, healthcare): They lack consistent endpoint controls for AI interactions and need immutable logs and enterprise deployment options to satisfy audits and regulatory reviews (security portal; Artificial Lawyer).
- Small and mid‑sized law firms with limited IT/compliance staff: They can’t manually police every lawyer’s AI use and need a lightweight tool that automatically prevents violations in the browser (site).
- University research groups and academic labs handling sensitive data: They must stop accidental leakage of unpublished research into public models and keep auditable records for sponsors and ethics reviews (GeekLaw interview).
How would they acquire their first 10, 50, and 100 customers
- First 10: Run tight, high‑trust pilots with named sponsors at large law firms, using founder/investor intros and press leads; deploy the extension and policy templates quickly, and deliver audit logs plus signed compliance acceptance as the pilot outcome.
- First 50: Standardize a paid one‑week pilot with an onboarding checklist (SSO, roles, templates) and a dedicated customer‑success lead per account; add a few legal‑IT consultancy partners and use pilot case studies as references to close sales.
- First 100: Scale via channels (legal IT shops, ALSPs, compliance vendors) and a self‑serve/light managed tier for small firms and labs with automated provisioning and standard security docs; pursue 1–2 security certifications and vertical policy templates to ease procurement in regulated industries.
What is the rough total addressable market
Top-down context:
Top‑down, summing the global legal‑technology market (~$29.8B, 2025) and the data‑loss‑prevention market (~$35.4B, 2025) gives an upper‑bound TAM of roughly $65B for governance tooling that spans legal operations and endpoint security (Precedence Research; Mordor Intelligence).
Bottom-up calculation:
As a practical bottom‑up lens, if 25% of the 1.32M U.S. lawyers (~330,000 users) used governance tooling at ~$20/user/month, that alone would be ~$79M/year, before expanding to enterprises and other regions (ABA); the arithmetic is worked out after the assumptions below.
Assumptions:
- Seat‑based pricing of ~$20 per user per month for governance/controls.
- 25% adoption among U.S. lawyers as an early legal‑sector focus, excluding non‑legal enterprise users.
- Does not include international legal markets or broader enterprise knowledge‑worker coverage, which would increase the figure.
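For transparency, the bottom‑up figure follows directly from the ABA headcount and the adoption and pricing assumptions above:

```latex
% Bottom-up estimate under the stated assumptions (25% adoption, ~$20/user/month, ABA headcount)
\[
1.32\text{M lawyers} \times 0.25 \times \$20/\text{user/month} \times 12\ \text{months} \approx \$79.2\text{M/year}
\]
```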
Who are some of their notable competitors
- Microsoft Purview (DLP/Insider Risk): Enterprise data protection and DLP across endpoints, browsers, and Microsoft apps; can detect and block sensitive data exfiltration to external services, including gen‑AI sites, via policy.
- Netskope: SSE/CASB platform with inline DLP and controls for SaaS and web, including policies to govern interactions with generative‑AI tools at the network and browser layers.
- Cyberhaven: Data Detection and Response that traces data movement and can block uploads/pastes of sensitive content to destinations like ChatGPT, providing user‑level visibility and enforcement.
- LayerX: Browser‑extension security platform that monitors and controls user actions in the browser, with policies for SaaS usage and gen‑AI data protection at the endpoint.
- CalypsoAI (Moderator): Gen‑AI usage governance focused on policy enforcement and model access controls across tools and endpoints, aimed at regulated enterprises adopting AI.