
Frekil

Scale AI for Medical Scans

Spring 2025 · Active · 2025 · Website
Artificial Intelligence · Health Tech · B2B · Healthcare · Data Labeling

Report from 14 days ago

What do they actually do

Frekil runs a browser-based platform that turns raw medical scans (X‑rays, CT, MRI, ultrasound, pathology slides) into analysis‑ready, versioned datasets. Teams connect their own cloud storage, and Frekil streams images to annotators’ browsers via pre‑signed URLs so data doesn’t sit on Frekil’s servers. Radiologists label scans using 2D/3D tools with AI assistance (e.g., zero‑shot segmentation, slice interpolation), and the system manages multi‑reader QA, consensus, audit trails, and exports with provenance/version history for research and regulatory use (Frekil site).
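
The pre‑signed URL pattern the site describes is a standard way to keep imaging data in the customer's own bucket. A minimal sketch, assuming an S3‑compatible store (the bucket and key names here are hypothetical, and this is an illustration of the pattern, not Frekil's actual implementation):

    import boto3

    # Short-lived, read-only link generated against the customer's own
    # bucket; the annotator's browser fetches the slice directly, so the
    # image never needs to be copied onto the platform's servers.
    s3 = boto3.client("s3")
    url = s3.generate_presigned_url(
        "get_object",
        Params={
            "Bucket": "customer-imaging-bucket",      # hypothetical bucket
            "Key": "studies/ct/001/slice_0042.dcm",   # hypothetical key
        },
        ExpiresIn=900,  # link expires after 15 minutes
    )

Under this design the platform handles only metadata and annotations, while pixel data flows straight from the customer's storage to the annotator's browser.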

Alongside the software, Frekil offers access to a benchmarked marketplace of radiologists and advertises partnerships with radiology chains to help customers who need external annotators and/or data procurement. The company says the platform is live and being used in pilots, and it publicly markets “10x faster” annotation with its AI tooling; at this early YC stage, adoption is best read as focused pilots with a small founding team rather than broad enterprise rollout (Frekil site, YC company page).

Who are their target customer(s)

  • Medical‑AI product teams (startups and established imaging companies): They need large, consistently labeled imaging datasets and multi‑reader QA but lack enough in‑house radiologists; they also need versioned datasets with traceability for model development and regulatory review (Frekil, YC).
  • Academic and hospital research groups running retrospective imaging studies: Their imaging data is messy and siloed across clouds; they spend months cleaning and labeling data manually, which slows publications and hurts reproducibility (Frekil).
  • Hospitals and imaging centers looking to operationalize/monetize imaging data: Radiologists are overloaded, creating backlogs for internal annotation projects; they need to outsource labeling without moving or exposing PHI outside their environment (Frekil).
  • Pharma and diagnostics teams conducting trials or biomarker development: They must coordinate consistent, multi‑reader annotations across sites with audit trails to meet endpoints and satisfy regulators (Frekil); a minimal consensus‑check sketch follows this list.
  • Device makers and CROs preparing regulatory submissions: They need reproducible, versioned datasets and documented consensus/QA workflows to support performance claims with agencies like the FDA (Frekil).
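
Several of these buyers hinge on multi‑reader consensus, which in practice reduces to an overlap metric plus an adjudication rule. A minimal Python sketch with toy binary masks (an illustration of the general technique, not Frekil's pipeline):

    import numpy as np

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        """Dice overlap between two binary masks (1.0 = identical)."""
        inter = np.logical_and(a, b).sum()
        total = a.sum() + b.sum()
        return 2.0 * inter / total if total else 1.0

    # Three readers' masks for the same slice (toy 4x4 example).
    readers = np.array([
        [[0,1,1,0],[0,1,1,0],[0,0,1,0],[0,0,0,0]],
        [[0,1,1,0],[0,1,1,1],[0,0,1,0],[0,0,0,0]],
        [[0,0,1,0],[0,1,1,0],[0,0,1,0],[0,0,0,0]],
    ], dtype=bool)

    # Low pairwise agreement flags a case for adjudication.
    for i in range(len(readers)):
        for j in range(i + 1, len(readers)):
            print(f"reader {i} vs {j}: Dice = {dice(readers[i], readers[j]):.2f}")

    # Simple majority-vote consensus mask (>= 2 of 3 readers agree).
    consensus = readers.sum(axis=0) >= 2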

How would they acquire their first 10, 50, and 100 customers

  • First 10: Run high‑touch paid pilots sourced via YC/founder networks and existing radiology partners; deliver one short project per customer that results in a fully annotated, versioned dataset with an audit trail, and capture a detailed case study.
  • First 50: Productize pilots into repeatable templates and outbound to VC‑backed med‑AI teams, CROs, and research cores; add referral agreements with radiology chains/imaging centers and a priced “dataset‑as‑a‑service” offering to drive steady partner‑sourced work.
  • First 100: Launch a lighter self‑serve tier with protocol templates so small teams can start quickly; sell turnkey audited dataset packages and reseller agreements to CROs/device vendors to land larger accounts without proportional service overhead.

What is the rough total addressable market

Top-down context:

Industry estimates put the medical image annotation market (software + services) at roughly $1–1.5B in 2023, with software‑only subsets much smaller; the broader AI in medical imaging market is >$1B today and projected to reach ~$10–20B over the next decade (DataIntelo, Verified Market Research, Grand View Research).

Bottom-up calculation:

Assume ~3,000 active buyers globally across med‑AI startups, hospitals, and research labs doing imaging AI, with average annual spend of ~$250k on labeling/QA/dataset preparation (~$750M), plus ~1,000 pharma/device/CRO imaging programs spending ~$400k per year on multi‑reader QA and trial datasets (~$400M), implying roughly $1.15B today, consistent with the reported top‑down ranges; the arithmetic is sketched after the assumptions below.

Assumptions:

  • Roughly 3,000 active organizations worldwide are undertaking imaging‑AI dataset work (startups, labs, hospitals).
  • Average annual spend on labeling/QA/versioning for these orgs is ~$250k; pharma/device/CRO programs average ~$400k.
  • Spend varies widely by modality and trial complexity; figures are blended global averages.
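
For transparency, the bottom‑up arithmetic under these assumptions, as a few lines of Python:

    # Bottom-up TAM sketch using the blended averages stated above.
    med_ai_buyers = 3_000      # startups, hospitals, research labs
    med_ai_spend = 250_000     # avg annual labeling/QA/dataset spend ($)
    pharma_programs = 1_000    # pharma/device/CRO imaging programs
    pharma_spend = 400_000     # avg annual multi-reader QA/trial spend ($)

    tam = med_ai_buyers * med_ai_spend + pharma_programs * pharma_spend
    print(f"Estimated TAM: ${tam / 1e9:.2f}B")  # -> $1.15B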

Who are some of their notable competitors

  • RedBrick AI: Web‑based DICOM viewer and annotation platform purpose‑built for radiology with 2D/3D segmentation, QC tools, and services via partner labelers—positioned squarely at healthcare AI teams (site, docs, services).
  • MD.ai: DICOM‑native annotation platform with an FDA 510(k)‑cleared viewer, AI‑assisted labeling, de‑ID, and deployment/validation features; widely used by clinicians and researchers (product, site).
  • V7 (Darwin): General CV annotation platform with a healthcare offering for DICOM imaging, auto‑labeling, and managed labeling services, plus project management and QA workflows (V7 medical imaging).
  • Encord: Data management and annotation platform with DICOM/NIfTI support, 3D tooling, active learning/model evaluation, and governance features oriented to regulated medical imaging use cases (Encord DICOM).
  • Segmed: De‑identified medical imaging data platform sourcing multimodal, regulatory‑grade datasets for AI R&D and validation—competes on data procurement and curated datasets rather than tooling alone (Segmed).