
Anara

AI for researchers

Summer 2024 · Active

Consumer · B2B · AI
Report from 29 days ago

What do they actually do

Anara is a web-based research workspace where users collect PDFs and other files, ask questions with an AI chat, and get answers with clickable citations that jump to the exact passages in the original documents. This lets users verify claims quickly instead of relying on opaque summaries (homepage, user guide).

In practice, users import materials (including via connectors like Zotero, Mendeley, Google Drive, OneDrive, and Notion, depending on plan) and chat over a single file, a folder, or their whole library. They can inspect cited passages in the built‑in PDF viewer with annotations, draft with automatic citations, and create study materials like flashcards. The product also includes agents for tasks such as deep multi‑document search and structured extraction, and supports collaboration through shared folders and team workspaces (pricing/connectors, chat & agents, PDF annotations).

Anara offers a free tier plus paid plans with tiered limits and features; higher tiers add enterprise connectors, admin controls/SSO options, and access to multiple underlying language models by plan (pricing).

Who are their target customer(s)

  • Students (undergraduate and Master’s): They juggle many lecture files and readings, spend time hunting for exact quotes or figures, and struggle with properly formatted citations. They don’t trust AI summaries unless they can see where answers came from.
  • PhD candidates and academics: They need to synthesize large literatures for papers and grants, but manual reading and cross-checking takes weeks. They require verifiable, clickable citations to defend every claim and avoid hallucinations.
  • Bench scientists and R&D researchers (industry labs): They must extract methods, protocols, or data across many papers and internal reports, often missing details buried in PDFs or tables. They need fast, accurate synthesis that traces back to original methods/results.
  • Clinical researchers and clinicians doing research: They need trustworthy clinical evidence fast for protocol design or case reports, and unverifiable AI output is unacceptable. They must combine external literature with internal study data in an auditable, reproducible way.
  • Lab leads, PIs, and institutional teams (admins/IT): They need secure, team-wide access with live sync to existing tools and enforceable permissions/compliance. Onboarding stalls when tools lack SSO/admin controls or produce outputs that can’t be audited for institutional responsibility.

How would they acquire their first 10, 50, and 100 customers

  • First 10: Hand-sell to close networks (YC mentors, university contacts, friendly PhDs/labs) with 1:1 demos and guided onboarding; offer temporary Pro access and priority support, then publish short case notes on how clickable citations and PDF annotations solved specific pain points (docs, pricing).
  • First 50: Run a campus-ambassador program and partner with grad societies, instructors, and academic Slack/Discord groups to distribute promo codes and assignment-focused tutorials. Support with webinars and Zotero/Mendeley import guides to reduce setup friction (students, Zotero integration).
  • First 100: Run 6–8 week team pilots with PIs, small R&D groups, and university libraries using shared folders, admin controls, and live-sync workspaces, converting to paid contracts when pilots demonstrate value. Drive interest via conference workshops on rapid literature synthesis with agents/Deep Search, and publish pilot metrics and testimonials (enterprise features, agents & Deep Search).

What is the rough total addressable market

Top-down context:

There are about 264M higher‑education students worldwide and roughly 8.8M researchers/R&D professionals; there are also ~21k higher‑education institutions that enable institutional licensing (UNESCO, Science|Business/UNESCO, WHED).

Bottom-up calculation:

A practical mid-case is researchers + research‑active students: 8.8M researchers at ~$300/year ($2.64B) plus 10% of 264M students (26.4M) at ~$60/year ($1.584B) ≈ $4.224B/year in TAM (UNESCO, Science|Business/UNESCO).

Assumptions:

  • 10% of global tertiary students are research‑active and willing to pay at student pricing.
  • Representative ARPU: ~$60/year for students and ~$300/year for researchers; actual ARPU depends on tier mix and geography.
  • Institutional licensing adds upside (e.g., 50–200 seats/institution at $500–$1,000/seat/year).
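The bottom-up arithmetic above can be checked with a short script. The population figures, ARPU values, and the 10% research-active share are the report's stated assumptions, not verified data:

```python
# Rough bottom-up TAM check using the report's assumptions (not verified data).
researchers = 8_800_000           # global researchers/R&D professionals (UNESCO)
students_total = 264_000_000      # global higher-education students (UNESCO)
research_active_share = 0.10      # assumption: 10% of students are research-active payers

researcher_arpu = 300             # assumed ~$300/year per researcher
student_arpu = 60                 # assumed ~$60/year per student

researcher_tam = researchers * researcher_arpu                        # $2.64B
student_tam = students_total * research_active_share * student_arpu   # $1.584B
total_tam = researcher_tam + student_tam                              # ~$4.224B

print(f"Researcher TAM: ${researcher_tam / 1e9:.2f}B/year")
print(f"Student TAM:    ${student_tam / 1e9:.3f}B/year")
print(f"Total TAM:      ${total_tam / 1e9:.3f}B/year")
```

Sensitivity matters here: the total scales linearly with each assumption, so halving the research-active share or student ARPU drops the student segment from $1.584B to $0.792B.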

Who are some of their notable competitors

  • Elicit: AI assistant built for literature reviews: semantic search over large paper sets, automated extraction/tables, and sentence‑level citations. Strong for systematic reviews; less of a shared PDF workspace with annotations and multi‑agent automation compared to Anara (site, literature review features).
  • Humata: PDF‑focused Q&A and summarization with in‑document citations/highlights. Simple multi‑document interrogation; fewer workspace, collaboration, and admin controls than tools oriented to enterprise research workflows (site, press).
  • Consensus: AI search engine that synthesizes answers from hundreds of millions of academic papers with cited evidence. Optimized for web‑scale evidence synthesis rather than a private, editable library workspace for user files (site, how it works).
  • ReadCube / Papers: Mature literature management and PDF reading with annotations, citation tools, and enterprise/library deployments. Strong on institutional workflows and subscriptions; less focused on AI chat with sentence‑level, verifiable answers across user uploads (site, enterprise plans).
  • Scholarcy: Tool that ingests PDFs to produce structured summaries, key points, references, and flashcards. Good for rapid single‑paper extraction; not a full collaborative workspace with chat and cross‑library, click‑to‑verify citations (site, features).