Sciloop

Fully automated scientific discovery

Fall 2025 · Active · Website
AI-powered Drug Discovery · Machine Learning · SaaS · Cloud Computing · AI Assistant

What do they actually do

Sciloop runs a cloud-hosted research agent (Sciloop Lab v_0) that takes a researcher’s codebase and goal, provisions managed cloud compute, runs parallel ML experiments, tracks metrics, handles failures, analyzes results, proposes next steps, and can draft a write-up of methods and findings (Sciloop site). The aim is to cut down the time researchers spend on infrastructure and manual synthesis by automating most of the experiment loop (YC company page).
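
The loop is easier to picture as code. The sketch below is only a guess at the shape of such a system from the public description; every name, the retry logic, and the stopping rule are hypothetical, not Sciloop’s actual API:

    # Hypothetical sketch of an automated experiment loop. Illustrative only:
    # none of these names correspond to Sciloop's real interfaces.
    import random
    from concurrent.futures import ThreadPoolExecutor

    def run_experiment(config):
        # Stub: a real system would launch a managed cloud job and collect metrics.
        if random.random() < 0.1:
            raise RuntimeError("job failed")
        return {"config": config, "score": random.random()}

    def run_with_retries(config, attempts=3):
        # Failure handling: retry transient failures, then record the run as failed.
        for _ in range(attempts):
            try:
                return run_experiment(config)
            except RuntimeError:
                continue
        return {"config": config, "score": None}

    def propose_next(history):
        # Stand-in for the analysis step: explore around the best run so far,
        # stopping after three rounds (the real stopping logic is not public).
        finished = [r for r in history if r["score"] is not None]
        if not finished or len(history) >= 12:
            return []
        best = max(finished, key=lambda r: r["score"])
        return [f"variant-of-{best['config']}"] * 4

    def research_loop(seed_configs):
        history, plan = [], seed_configs
        while plan:
            with ThreadPoolExecutor() as pool:   # run the batch in parallel
                history += list(pool.map(run_with_retries, plan))
            plan = propose_next(history)         # agent picks the next batch
        return history                           # raw material for a drafted write-up

    runs = research_loop([f"config-{i}" for i in range(4)])
    print(f"{len(runs)} runs tracked")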

Today it’s an invite-only product in early access; the team says they’re onboarding MIT research groups and a few early partners. The current system is built on top of SakanaAI’s open-source “AI Scientist” framework rather than being entirely bespoke (Sciloop site, YC page, SakanaAI).

Who are their target customer(s)

  • Academic ML research groups (professors, PhD students): They lose time setting up cloud jobs, recovering from failures, aggregating metrics, and drafting papers. They want experiments run, results analyzed, and write-ups produced with less manual work (Sciloop, YC).
  • Industry ML/R&D teams (company research labs): Coordinating reliable, parallel experiments on managed cloud and reproducing results across runs slows iteration and delivery; they need dependable orchestration and analysis to move faster (Sciloop, YC).
  • Small ML startups and solo researchers: Without dedicated infra or ops time, they can’t easily run large sweeps or automate analysis, so exploration stalls while they manage infrastructure and manual result synthesis (Sciloop).
  • Computational science labs beyond ML (bio/chem/physics): They want hypothesis-to-experiment-to-write-up pipelines that are reliable and reproducible, but cross-domain autonomy and tooling are hard to set up and maintain (Sciloop, YC).
  • Enterprise research ops, IT/security, and lab managers: They require strict data isolation, access controls, and auditability before adopting autonomous research systems, and are cautious about giving such systems access to critical datasets or compute (Sciloop, YC).

How would they acquire their first 10, 50, and 100 customers

  • First 10: Run invite-only, white‑glove pilots via the founders’ academic networks (starting with MIT groups), offering discounted compute and hands-on integration to deliver one reproducible result and a draft paper per pilot (Sciloop, YC).
  • First 50: Publish case studies and reproducible notebooks from early pilots, add a referral program for labs, and do targeted outreach to PIs/R&D leaders plus tutorials at key ML conferences to convert interest into onboarding slots (Sciloop, YC).
  • First 100: Productize onboarding (self‑serve templates, managed compute tiers, hardened access controls), add pricing for labs/startups/enterprise, secure cloud credit/channel deals, and run 30–90 day paid pilots with a small customer-success team (Sciloop, YC).

What is the rough total addressable market

Top-down context:

Sciloop sits between ML experiment tracking/orchestration and research automation. Initial buyers are academic ML labs and industry R&D teams that already budget for compute and tooling; adjacent products include experiment trackers and MLOps platforms used across research and production.

Bottom-up calculation:

If the initial wedge targets ~2,500 academic ML groups (avg $20k/yr) and ~5,000 industry ML/R&D teams (avg $50k/yr) for managed research automation, the near-term TAM is roughly $300M annually (2,500 × $20k + 5,000 × $50k; the arithmetic is spelled out in the sketch after the assumptions below). Expansion to other computational sciences would increase this materially.

Assumptions:

  • Initial focus is on ML-first customers: ~2,500 academic groups and ~5,000 industry teams (directional estimates).
  • Average annual contract: ~$20k for academic labs; ~$50k for industry R&D (software + orchestration, excluding raw compute pass-through).
  • Does not include non-ML computational science labs; those are an expansion segment.
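
Spelled out, the bottom-up arithmetic under these assumptions (the counts and contract values are the directional estimates above, not measured figures):

    # Bottom-up TAM under the stated (directional) assumptions.
    segments = {
        "academic ML groups":    (2_500, 20_000),   # group count, avg contract ($/yr)
        "industry ML/R&D teams": (5_000, 50_000),
    }
    for name, (count, acv) in segments.items():
        print(f"{name}: {count:,} x ${acv:,} = ${count * acv / 1e6:.0f}M")
    total = sum(count * acv for count, acv in segments.values())
    print(f"near-term TAM: ${total / 1e6:.0f}M per year")   # -> $300M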

Who are some of their notable competitors

  • SakanaAI: Open-source “AI Scientist” that automates idea generation, code edits, experiment runs, and paper drafting; Sciloop builds on this stack, so DIY teams can adopt SakanaAI directly as an alternative (SakanaAI GitHub).
  • Weights & Biases (W&B): Popular experiment tracking, sweeps, and reporting for ML teams; overlaps on run management and metrics but focuses on tracking/automation rather than a full autonomous research agent (W&B docs).
  • Comet.ml: Experiment tracking and model registry with enterprise deployments; helps with reproducibility and artifact management but not end‑to‑end autonomous research and paper drafting (Comet docs).
  • Determined AI: Open-source training and orchestration (distributed training, HPO, experiment scheduling); strong infra layer but not an autonomous research agent (Determined docs).
  • Valohai: Pipeline-first MLOps that enforces reproducible runs across cloud infra; competes on orchestration and reproducibility, not on automated idea-to-paper loops (Valohai docs).