Fewer than 1 in 100 makes it through.
We contact 2,000+ senior AI engineers each month. We accept 12. Every accepted engineer has survived a 5-stage process — this page shows its high-level shape. The full rubric is ours, shared under NDA.
- Stage 1 — Contacted: 2,000+ / month
- Stage 2 — Screened: ~250
- Stage 3 — Advanced: ~30
- Stage 4 — Accepted: 12 / month
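The funnel numbers above imply per-stage pass rates. A quick sketch (treating the "~" and "+" figures as exact, purely for illustration):

```python
# Monthly funnel figures from the page, approximated as exact numbers.
funnel = [("Contacted", 2000), ("Screened", 250), ("Advanced", 30), ("Accepted", 12)]

# Pass rate from each stage to the next.
for (stage, n), (_, prev) in zip(funnel[1:], funnel):
    print(f"{stage}: {n}/{prev} = {n / prev:.1%}")

# Overall acceptance rate: accepted / contacted.
overall = funnel[-1][1] / funnel[0][1]
print(f"Overall: {overall:.2%}")  # 0.60% — fewer than 1 in 100
```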
Five stages.
Headlines only. Each stage has its own rubric, scoring, and pass/fail thresholds — we don't publish those. Three things matter most: production taste, AI fluency, and behavior under ambiguity.
- 01
Initial screen
Sourced from active AI builder communities, GitHub contributors, and direct outreach. Filtered on years of senior production experience and shipped AI surface area.
- 02
Technical assessment
Production code review. Real systems they shipped — not LeetCode. We're looking for taste, defensive engineering, and tradeoff judgment, not algorithm puzzles.
- 03
EQ + behavioral
How they communicate, push back on PRs, ask questions under ambiguity, and behave when they don't know the answer. Embedded engineers without strong EQ break client relationships fast.
- 04
Paired AI challenge
A live, scoped problem we pair on with the candidate in a real Cursor + Claude session. We watch how they think with AI tools, not just whether they can use them.
- 05
Final filter — Jess
Jess Mah (Data Scientist · UC Berkeley CS at 19) runs the final technical conversation herself. Every accepted engineer clears her bar — no exceptions. This stage also covers references, compensation alignment, and cultural fit with the teams we embed with.
We don't publish our full rubric for the same reason a fund doesn't publish its scoring matrix: it's the IP. What we will share, on the first call, under NDA: per-stage benchmarks, sample assessments, and the exact bar a senior AI engineer has to clear to get embedded with one of our clients.
Want the full rubric?
It's not on the public site by design. We share the full assessment criteria, sample evaluations, and per-stage benchmarks on the first call — under NDA.