What do they actually do
GradeWiz is a web-based assistant that helps instructors grade faster. Teachers upload PDFs or scanned packets of student work; the system uses computer vision to split pages into questions, match pages to students, draft rubric scores, and write a short feedback summary for each student. Instructors review and edit results before clicking “Publish Grades.” This is a review-in-the-loop workflow, not fully automated grading (GradeWiz site/FAQ).
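The review-in-the-loop workflow described above can be sketched as a simple pipeline: an automated stage drafts scores and feedback, and nothing is published until an instructor approves the drafts. Everything here — the function names, data shapes, and `DraftGrade` type — is a hypothetical illustration, not GradeWiz's actual API.

```python
# Hypothetical sketch of a review-in-the-loop grading pipeline like the one
# GradeWiz describes. All names and data shapes are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DraftGrade:
    student: str
    question: str
    score: float            # AI-drafted rubric score
    feedback: str           # AI-drafted feedback summary
    approved: bool = False  # flips to True only after human review

def draft_grades(pages):
    """Stand-in for the automated stage: pages have already been split into
    questions and matched to students; here we just wrap them as drafts."""
    return [
        DraftGrade(
            student=p["student"],
            question=p["question"],
            score=p["auto_score"],
            feedback=f"Draft feedback for {p['question']}",
        )
        for p in pages
    ]

def publish(drafts):
    """Refuse to publish until every draft has been instructor-approved."""
    unreviewed = [d for d in drafts if not d.approved]
    if unreviewed:
        raise ValueError(f"{len(unreviewed)} drafts still need instructor review")
    return {(d.student, d.question): d.score for d in drafts}

# Usage: the instructor reviews (and may edit) each draft before publishing.
pages = [{"student": "A. Lovelace", "question": "Q1", "auto_score": 8.5}]
drafts = draft_grades(pages)
drafts[0].approved = True   # instructor signs off
grades = publish(drafts)
```

The key design point mirrored here is that publication is gated on human approval, which is what distinguishes the product from fully automated grading.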
The product is used by college instructors and TAs and is being piloted in K–12. The company highlights deployments and pilots at universities such as Cornell, Penn State, Cal Poly Humboldt, Syracuse, and Hunter College (YC listing; Cornell Chronicle/eLab coverage). GradeWiz reports 30,000+ submissions graded and claims “3× faster” grading with a “99%” accuracy benchmark versus TA grading; Cornell pilot coverage cites roughly a 60% time reduction and about 97% rubric agreement in their trials (GradeWiz site; Cornell Chronicle). They surface institution-readiness items (FERPA, accessibility, VPAT/HECVAT) for school procurement (GradeWiz – Institution-ready).
Who are their target customer(s)
- College instructors teaching medium-to-large courses: They spend hours turning around stacks of scanned exams and PDF submissions and struggle to provide consistent, actionable feedback under time pressure.
- Graduate TAs and course graders: They do repetitive work matching pages to students, applying rubrics, and re-explaining common mistakes; time savings and consistency are key needs.
- K–12 classroom teachers piloting the product: They receive handwritten, nonstandard work with little prep time and need fast draft grading and simple feedback publishing to students.
- Department chairs and academic program leads: They want to reduce faculty workload and standardize grading quality across courses, but need evidence of time/accuracy gains before wider rollout.
- IT/procurement and compliance teams at schools: They must vet vendors for FERPA, accessibility, and security and need clear documentation and onboarding to fit established procurement processes.
How would they acquire their first 10, 50, and 100 customers
- First 10: Founder-led outreach to instructors/TAs in existing networks to run free or discounted pilots, with hands-on onboarding and before/after metrics collection (Cornell coverage; waitlist).
- First 50: Standardize the pilot playbook (sample PDFs, 1–2 week turnaround, time-saved report), then convert via TA/instructor referrals and case studies from early pilots (YC; GradeWiz site).
- First 100: Leverage pilot outcomes to sell department-level and early institutional deals; provide HECVAT/VPAT/FERPA docs, offer annual licenses and implementation packages, and work higher‑ed/K‑12 buying channels (GradeWiz – Institution-ready).
What is the rough total addressable market
Top-down context:
In the U.S., there are about 4.2 million K–12 teachers and 1.5 million postsecondary faculty, indicating a very large base of educators who regularly grade student work (NCES K–12 Fast Facts; NCES COE Postsecondary Faculty).
Bottom-up calculation:
As an initial serviceable segment, assume ~300,000 U.S. higher‑ed instructors teaching problem‑set/exam‑heavy courses and ~1,000,000 grades 6–12 math/ELA/science teachers adopt a grading assistant; at $200–$400 per user per year, that is ~1.3M users × $200–$400, or roughly $260M–$520M in annual spend potential for these two segments combined.
Assumptions:
- Share of instructors with frequent handwritten/open‑response grading in higher ed is ~300k; grades 6–12 relevant K–12 teachers ~1M (illustrative).
- Pricing modeled as $200–$400 per educator per year for grading assistance (SaaS seat or course license).
- Analysis focuses on U.S. only and excludes administrators’ budgets and enterprise add‑ons.
Who are some of their notable competitors
- Gradescope: Popular grading platform with grouping of similar answers and AI-assisted workflows, strongest on structured, fixed-layout assignments rather than arbitrary scanned pages (Gradescope; AI-assisted grading).
- Crowdmark: Online workflow for scanning, distributing, and team‑grading paper exams with comment libraries and analytics; emphasizes collaborative grading rather than auto‑drafting generative feedback (Crowdmark).
- Akindi: Scantron‑style multiple‑choice assessment platform with LMS rostering and reporting; optimized for MCQ, not free‑form handwritten splitting or generative feedback (Akindi).
- ZipGrade: Mobile app that scans bubble sheets for quick MCQ grading; inexpensive and portable but limited to answer‑sheet workflows (ZipGrade).
- Turnitin (Feedback Studio, ecosystem): Incumbent suite for similarity checking and writing feedback; owns Gradescope and competes in essay evaluation and institutional procurement (Turnitin – Feedback Studio; Turnitin acquires Gradescope).