What do they actually do?
AnswerThis is a web app that helps researchers find, read, summarize, cite, and draft academic content faster. It combines search over a large corpus (the site advertises 250M+ research papers) and web sources with an automated literature-review generator that produces summaries with line-by-line, clickable citations. Users can build a personal or team library (upload PDFs, import from Zotero or Mendeley) and use an AI writer to draft sections with embedded citations and export bibliographies (sources: homepage, guide, pricing).
For teams and enterprises, AnswerThis offers in-app custom tool building (e.g., paraphrasers, peer-review helpers) and a biotech/biopharma offering that includes NDA-bound, human-assisted reports and living evidence tables. The product materials emphasize verifiable citations and workflows that let users inspect sources to mitigate the risk of LLM hallucinations (sources: custom tools, biotech features, founder interview).
Who are their target customers?
- Academic researchers (PhD students, postdocs, early‑career faculty): They need to quickly find and synthesize relevant papers from large result sets and spend significant time formatting citations and bibliographies for theses and manuscripts.
- Principal investigators and lab leads: They want a shared place for the team’s papers and summaries to avoid rework during onboarding and to ensure consistent literature searches and reference lists.
- Biotech and biopharma R&D teams: They require auditable, up‑to‑date literature summaries and evidence tables under NDA. Automated outputs must be verifiable and defensible for internal and regulatory decision‑making.
- Grant writers and research administrators: They face tight deadlines to produce well‑referenced background sections and proposals, and lose time turning scattered sources into cohesive, citation‑backed drafts.
- Systematic reviewers and evidence‑synthesis teams: They need reproducible searches, traceable source lists, and living reviews that stay current, but the manual screening and provenance tracking are time‑consuming.
How would they acquire their first 10, 50, and 100 customers?
- First 10: Founder‑led pilots with 6–8 PhD/postdocs and 2–4 PIs in known networks; run live demos and a short free managed pilot that imports their libraries, in exchange for feedback and a case study or intro.
- First 50: Campus workshops and office hours at 8–12 target universities with 6–10 campus ambassadors to onboard labs; convert attendees using 30‑day team trials tied to a real literature review.
- First 100: Use the top 10–20 case studies to run outbound to biotech R&D leads and PIs, offering short NDA'd pilots that deliver a living evidence table or drafted section; hire one technical account manager to run pilots and convert them to self-serve paid plans or enterprise trials (sources: pricing, biotech offering).
What is the rough total addressable market?
Top-down context:
UNESCO’s 2021 Science Report estimates 8.854 million full-time-equivalent researchers globally in 2018, indicating a large base of potential academic users. In parallel, industry databases list tens of thousands of biotech companies worldwide (Biotechgate reports ~23,700 biotech companies), representing the enterprise segment for audited literature synthesis (sources: UNESCO Science Report stats, Biotechgate company counts).
Bottom-up calculation:
Initial SAM focused on English-speaking academia and biotech: assume 1.0M individual academic seats at ~$120/year (~$120M), 5,000 lab/department team subscriptions at ~$5,000/year (~$25M), and 1,000 biotech enterprise/managed accounts at ~$20,000/year (~$20M). Combined near-term SAM ≈ $165M.
Assumptions:
- Pricing reflects typical individual/team SaaS and light enterprise managed research averages; actual AnswerThis pricing may differ.
- Academic adoption estimate targets a subset of global researchers and graduate researchers in English‑language markets.
- Biotech enterprise count assumes ~4% adoption of global biotech firms for the managed/enterprise tier in the near term.
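The bottom-up figures above can be reproduced with a short sketch. The segment sizes and price points below are this memo's assumptions, not published AnswerThis pricing:

```python
# Bottom-up SAM estimate. Unit counts and annual prices are the memo's
# assumptions (hypothetical), not data from AnswerThis.
segments = {
    "individual_academic": {"units": 1_000_000, "price_per_year": 120},
    "lab_team":            {"units": 5_000,     "price_per_year": 5_000},
    "biotech_enterprise":  {"units": 1_000,     "price_per_year": 20_000},
}

def sam_usd(segments):
    """Sum units x annual price across all segments."""
    return sum(s["units"] * s["price_per_year"] for s in segments.values())

total = sam_usd(segments)
print(f"Near-term SAM: ${total / 1e6:.0f}M")  # → Near-term SAM: $165M
```

Keeping the calculation in this form makes sensitivity checks trivial: halving the individual-seat assumption to 500K, for example, drops the total by $60M.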
Who are some of their notable competitors?
- Elicit: AI-first literature-review assistant focused on question-driven extraction and customizable reports, built heavily on Semantic Scholar; overlaps on automated lit-review and evidence extraction (source: Elicit library guide).
- Semantic Scholar: Free academic search and citation graph for discovery and influence tracking; strong at discovery and metrics, but not an end-to-end drafting or team-library platform (source: Semantic Scholar).
- ResearchRabbit: Visual discovery and collections tool for mapping citation networks, building collections, and setting alerts; competes on discovery and team collections, not automated review drafting (source: ResearchRabbit).
- Scite.ai: Provides "smart citations" showing supporting and contradicting contexts; used for verification and citation-context analysis, and complementary when provenance matters (source: Scite).
- Consensus: AI search that returns evidence-backed answers and cited summaries, including a Deep Search literature-review mode; overlaps on fast, citation-grounded answers (source: Consensus).