Jarmin

24/7 Machine Learning Engineer employees

Fall 2025 · Active · 2025 · Website
Developer Tools · B2B · Enterprise · AI · AI Assistant

Report from 27 days ago

What do they actually do

Jarmin provides a chat-based “autonomous ML engineer” that teams can brief on an ML task. After being granted access to relevant data sources, Jarmin runs experiments, builds models and pipelines, and deploys them to production, with optional human approval gates along the way (Jarmin site, YC page).

Today it’s sold via scoped, founder-led engagements rather than a public self-serve product. The company highlights example use cases across industries but does not publish pricing or public case studies (Jarmin site, YC page).

Who are their target customer(s)

  • SMB to mid-market product teams without in-house ML engineers: They want ML features but can’t hire senior ML talent quickly or affordably, and proofs of concept stall before reliable production rollout (Jarmin site, YC page).
  • Mid-size data science teams overloaded with requests: They struggle to productionize models and spend time on pipelines and tuning instead of shipping monitored, repeatable systems (Jarmin site, YC page).
  • Specialized industry engineering/product teams (e.g., semiconductors, biotech): They have unique data and domain needs but lack domain-ML engineers, leading to long ramp times, costly mistakes, and slow validation cycles (Jarmin site, YC page).
  • Growth/marketing teams needing predictive models (churn, targeting, recommendations): Models that work in experiments break in production or need constant manual tuning; maintaining pipelines and monitoring is burdensome (Jarmin site).
  • Engineering managers responsible for ML reliability/compliance: They need predictable, auditable delivery with testing and monitoring, plus control over approval gates, without adding headcount (Jarmin site).

How would they acquire their first 10, 50, and 100 customers

  • First 10: Founder-led, paid pilots via YC and personal networks. Scope pilots tightly, run the work end-to-end against clear success metrics, and deliver a production handoff that converts early customers into references (Jarmin site, YC page).
  • First 50: Standardize the common use cases into vertical playbooks (onboarding, integrations, fixed-scope pricing) and hire a small delivery team to run sales-led pilots while founders open new accounts (Jarmin site, YC page).
  • First 100: Package top use cases with clear pricing, add a low-touch onboarding path, and sign channel partners (SIs/cloud) to feed pipeline; support with deployment templates, short public case studies, and a lightweight SDR/AE motion (Jarmin site, YC page).

What is the rough total addressable market

Top-down context:

Jarmin competes for budgets that currently go to MLOps, AutoML, and related ML engineering services. Recent estimates put the MLOps market at about $1.58B in 2024 and growing rapidly, while the AutoML market is estimated at around $3.5B in 2024, indicating a multi-billion-dollar combined opportunity (Fortune Business Insights, Grand View Research).

Bottom-up calculation:

If Jarmin sells packaged engagements averaging $100k–$150k in ACV to 10,000–20,000 mid-market teams with ML backlogs, that implies a $1.0B–$3.0B TAM for its initial service model; the arithmetic is worked through in the sketch after the assumptions below. Expanding to more verticals and a self-serve/low-touch tier could widen the reachable base and push TAM higher over time.

Assumptions:

  • Average ACV for an autonomous-ML-engineer engagement is $100k–$150k per year (mix of pilots and ongoing maintenance).
  • There are 10,000–20,000 global mid-market teams with clear ML backlogs and willingness to pay for build‑and‑operate delivery instead of hiring.
  • A portion of current MLOps/AutoML and contractor budgets can shift to autonomous-agent delivery as reliability improves.

Who are some of their notable competitors

  • DataRobot: Enterprise platform for automating model development, deployment, and governance at scale; typically bought as software plus services, not a hireable chat engineer (DataRobot).
  • H2O.ai: AutoML and MLOps software (including on‑prem) and vertical tools for teams to build and run models themselves; a platform approach rather than a conversational engineering service (H2O.ai).
  • AWS SageMaker (Autopilot/Canvas): Managed AWS stack for AutoML, notebooks, and deployment; customers operate their own experiments and endpoints, competing for the same “ship to production” budget (AWS SageMaker).
  • Scale AI: Strength in data labeling, curation, evaluation, and agent infrastructure for enterprises; focuses on data and evaluation pipelines rather than acting as a single conversational ML engineer (Scale, Nucleus).
  • Weights & Biases: Experiment tracking, model registry, and MLOps tooling for in‑house teams to run reliable ML; a toolset to install and operate internally, not an autonomous engineer (W&B).