What do they actually do
LogosGuard provides risk management software for enterprises adopting AI. It turns AI policies into executable controls, runs security and compliance tests on AI systems and third‑party vendors, and produces audit‑ready evidence so teams can approve and monitor AI with less manual work (YC company page).
The product supports procurement, security, compliance, and ML teams with targeted stress‑tests (e.g., prompt injection, data leakage), continuous checks for changes and drift, and reporting mapped to frameworks like NIST AI RMF and ISO 42001 so organizations can operationalize AI governance at scale (YC company page).
Who are their target customer(s)
- Enterprise security leaders (CISOs / security teams): They must sign off on AI vendors and internal AI projects but lack standardized controls and tests to prove systems are safe, making approvals slow and risky.
- Procurement and vendor‑risk teams: They can’t reliably verify AI vendors against policy or regulatory frameworks, turning vendor reviews into months‑long manual efforts with inconsistent evidence.
- Compliance and internal‑audit teams: They need evidence mapped to evolving AI frameworks but struggle to convert high‑level AI policies into concrete, auditable controls and reports.
- Product and ML engineering teams shipping AI features: They worry about prompt injection, PII leakage, access‑control gaps, and model drift without simple, repeatable tests or continuous checks to catch problems pre‑release.
- CIOs and executives scaling AI: They lack a single, trustworthy view showing AI systems remain compliant as vendors and models change, which slows organization‑wide AI rollout.
How would they acquire their first 10, 50, and 100 customers
- First 10: Founder‑led, paid pilots via warm intros: map the customer’s AI policy to executable controls, run targeted stress‑tests, and deliver auditor‑ready evidence; secure a testimonial and named case study for a discounted pilot (YC company page).
- First 50: Leverage pilot references to land similar accounts; hire an enterprise AE focused on procurement/vendor‑risk and compliance; package a vendor‑assessment pack that fits existing checklists; run small roundtables/webinars co‑presented with early customers to shorten evaluations.
- First 100: Scale through integrations and templates (e.g., NIST AI RMF / ISO 42001), partner with GRC consultancies, MSSPs, and procurement platforms to co‑sell, and run targeted ABM/LinkedIn campaigns with auditor‑ready collateral; add customer success to convert assessments into annual monitoring contracts.
What is the rough total addressable market
Top-down context:
AI governance market estimates for 2024 range from about $0.23B to $0.89B, with high projected growth (Grand View Research; MarketsandMarkets). Adjacent budgets include vendor‑risk management at ~$10.7B (Grand View Research) and eGRC at ~$60–63B (Grand View Research).
Bottom-up calculation:
If ~8,000 large enterprises run AI vendor assessments and ongoing monitoring at an average $200k ACV, the bottom‑up TAM is roughly $1.6B; this aligns with a base case of AI governance plus a modest share of vendor‑risk budgets being redirected to AI‑specific work.
Assumptions:
- Focus on large enterprises actively adopting AI (~8,000 globally).
- Average contract value for assessment + monitoring of $150k–$250k, midpoint $200k.
- High adoption among this segment as AI deployments expand and regulations mature.
Who are some of their notable competitors
- OneTrust: A major GRC platform offering AI Governance and Third‑Party Risk Management; competes on inventories, policy mapping to frameworks, vendor assessments, and compliance reporting (AI Governance / TPRM).
- Arize AI: ML observability and monitoring for drift, embeddings, and production behavior; overlaps with engineering‑facing monitoring but not procurement workflows (Arize ML Observability).
- Fiddler AI: Explainability, monitoring, and reporting to make models auditable and fair; competes on evidence and reports for audits and validations (Fiddler – Explainable AI).
- TruEra: Model intelligence and ML monitoring with explainability and testing, often used in regulated industries for validation and documentation (TruEra).
- Robust Intelligence: Automated AI security and safety testing (red‑teaming, adversarial tests, input/output validation); overlaps where stress‑testing for prompt injection and leakage is required (Robust Intelligence).