What do they actually do
Relling is building large, high‑quality multimodal datasets—synchronized video plus depth, LiDAR, IMU/motion, audio, and other sensors—and the storage/processing tools that AI and robotics teams use to train and evaluate “world models” from real‑world data (relling.co, YC listing).
Today they appear to be in build/hiring mode without a broadly available public dataset or hosted service; the site notes “Public Company Updates — Coming soon,” and they’re hiring across engineering and operations, which suggests the core data and infrastructure are under active development (relling.co, YC listing, jobs listing).
Who are their target customer(s)
- Perception and control engineers at robotics startups and hardware companies: Collecting and time‑aligning multimodal sensor streams (cameras, depth, IMU, force) is slow, expensive, and error‑prone, which slows iteration and model quality (relling.co, YC listing).
- Autonomous‑vehicle and mobile‑robot teams using LiDAR, depth, and motion sensors: They face high costs and operational burden to gather large, diverse, labeled real‑world runs, and public datasets often don’t match their deployment environments (relling.co, YC listing).
- Academic and industry ML researchers working on world models or embodied AI: They need large, synchronized multimodal datasets and standard benchmarks, but many available datasets are small, poorly aligned, or missing modalities (relling.co, YC listing).
- ML/platform engineers responsible for data pipelines and experimentation: High‑bandwidth video+sensor data stresses storage, formats, preprocessing, and serving, turning dataset management into a bottleneck (relling.co, YC listing).
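To illustrate why time alignment is a recurring pain point, here is a minimal sketch (hypothetical code, not Relling's actual tooling) of one of the simplest sub-tasks: matching each camera frame to its nearest IMU sample by timestamp.

```python
import bisect

def align_nearest(frame_ts, imu_ts):
    """Map each camera-frame timestamp to the index of the nearest
    IMU sample (imu_ts must be sorted). Timestamps in integer ms."""
    matches = []
    for t in frame_ts:
        i = bisect.bisect_left(imu_ts, t)
        # Consider the neighbors on either side of the insertion point.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(imu_ts)]
        matches.append(min(candidates, key=lambda j: abs(imu_ts[j] - t)))
    return matches

# Made-up timestamps: camera at ~10 Hz with jitter, IMU at 100 Hz.
frames = [0, 100, 205]             # ms
imu = list(range(0, 250, 10))      # ms: 0, 10, ..., 240
print(align_nearest(frames, imu))  # → [0, 10, 20]
```

Even this toy version ignores clock offsets between devices, clock drift, and dropped samples, all of which production pipelines must handle; that operational burden is part of why teams buy synchronized datasets rather than collect their own.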
How would they acquire their first 10, 50, and 100 customers
- First 10: Run high‑touch pilots with ~10 targeted teams (YC robotics startups, local robot companies, select labs), delivering a small synchronized dataset tailored to one task and embedding an engineer for 4–6 weeks; turn successful pilots into case studies and referrals.
- First 50: Standardize the pilot into a repeatable onboarding package (curated starter dataset, integration scripts, short support engagement) and staff 1–2 engineers to run multiple pilots in parallel; use warm referrals and pilot results to reach perception/control teams at startups and mid‑size hardware companies.
- First 100: Host dataset challenges and workshops at robotics/ML conferences, co‑publish benchmarks with prominent labs, and launch a simple self‑serve access tier while keeping a premium high‑touch offering for fleets/AV; support with light outbound and a partner program for integrators/resellers.
What is the rough total addressable market
Top-down context:
Direct TAM for packaged multimodal datasets plus labeling/collection and specialized mapping is roughly $7–10B today, with broader potential in the tens of billions if hosted infra, APIs, and benchmarking platforms are included (Grand View Research, IMARC, HD map market summary, TBRC robotic software, BCG, McKinsey AV, ABI Research).
Bottom-up calculation:
Conservative bottom‑up: data collection/annotation at roughly $2–4B today plus ~$4.4B for HD maps sums to ~$6.4–8.4B; adding adjacent data services brings the direct TAM for synchronized multimodal datasets to roughly $7–10B (Grand View Research, IMARC, HD map market summary).
Assumptions:
- Focus is on packaged datasets + adjacent data services; excludes double‑counting with broader robotics/AI software.
- HD maps are a proxy for specialized real‑world sensor/mapping data bought by AV/robotics teams.
- Many enterprises build in‑house; TAM reflects buyers willing to purchase/license external datasets and tools.
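The bottom-up arithmetic above can be sketched as a back-of-envelope calculation (the ranges are the rough estimates cited in this section, not authoritative market data):

```python
# Figures from the cited estimates, in $B.
collection_annotation = (2.0, 4.0)  # data collection/annotation today
hd_maps = 4.4                       # HD map market, as a proxy for
                                    # specialized sensor/mapping data

low = collection_annotation[0] + hd_maps
high = collection_annotation[1] + hd_maps
print(f"core bottom-up range: ${low:.1f}B-${high:.1f}B")  # $6.4B-$8.4B

# Adjacent data services (tooling, hosting, benchmarking) close the gap
# between this core range and the ~$7-10B direct-TAM figure used in the
# top-down context.
```

Keeping the core sum separate from the adjacent-services uplift makes it easy to see which assumption dominates if any input estimate changes.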
Who are some of their notable competitors
- Waymo Open Dataset: Large public camera+LiDAR driving dataset with benchmarks widely used in AV research; a go‑to alternative to custom collection for perception/planning tasks (site, GitHub).
- nuScenes (Motional): Urban driving dataset with multi‑camera, LiDAR, IMU/GPS and leaderboards; a de facto standard for AV perception and planning evaluations (nuScenes, Motional).
- KITTI: Classic academic driving/robotics datasets with synchronized images, LiDAR, and IMU/GPS; common baselines for benchmarking and model development.
- Scale AI: Commercial data collection, curation, and labeling platform for video/3D sensor data used by robotics and AV teams; competes on infra + human‑in‑the‑loop services rather than open benchmarks (Scale).
- Roboflow: Dataset management and annotation platform for images/video used to store, label, version, and serve visual datasets; overlaps on tooling but not on publishing synchronized LiDAR/IMU/force benchmarks (platform, docs).