What the background means
The network is the core data structure of our talent graph — researchers, labs, and affiliations as nodes. Graph-theoretic algorithms — centrality, shortest path, embeddings — traverse it to surface the right candidates.
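The talent-graph idea above can be sketched in plain Python. The node names, edges, and algorithm choices here are illustrative assumptions, not the production system: a small undirected graph of researchers and labs, degree centrality as the "who is most connected" signal, and BFS shortest path as the "how close is this candidate to lab X" query.

```python
from collections import deque

# Hypothetical mini talent graph: researchers and labs as nodes,
# affiliations as edges. All names are illustrative, not real data.
edges = [
    ("alice", "lab_a"), ("bob", "lab_a"),
    ("bob", "lab_b"), ("carol", "lab_b"),
    ("carol", "lab_c"),
]

# Build an undirected adjacency list.
graph = {}
for u, v in edges:
    graph.setdefault(u, set()).add(v)
    graph.setdefault(v, set()).add(u)

def degree_centrality(g):
    """Fraction of other nodes each node touches directly."""
    n = len(g) - 1
    return {node: len(nbrs) / n for node, nbrs in g.items()}

def shortest_path(g, src, dst):
    """BFS shortest path, e.g. 'how close is this candidate to lab X?'."""
    prev, frontier = {src: None}, deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nbr in g[node]:
            if nbr not in prev:
                prev[nbr] = node
                frontier.append(nbr)
    return None  # no connection in the graph

print(shortest_path(graph, "alice", "lab_c"))
# ['alice', 'lab_a', 'bob', 'lab_b', 'carol', 'lab_c']
```

On a real 50K-node graph you would reach for a library (e.g. NetworkX or a graph database) and learned embeddings rather than hand-rolled BFS, but the traversal logic is the same idea at a larger scale.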
I've spent eight years building organizations and programs for high-impact work, with the last four focused on AI safety — fieldbuilding, talent infrastructure, grantmaking advice, and the ventures that fill the gaps. I am Co-CEO at SteadRise (formerly Impact Academy) and a co-founder of Secure AI Futures Lab.
I'm based in New Delhi and Singapore, and I've collaborated with UK AISI, FAR.AI, Apollo Research, GovAI, and Schmidt Sciences.
What I work on
Grant investigation, advisory, and the theory-of-change work behind AISCF, GAISF, and SAFL.
Indian AI safety fieldbuilding — AISCF, GAISF, AI governance convenings, advisory across LMICs.
Talent infrastructure at SteadRise: 50K-profile ML talent map, 4,200-candidate funnel, 13 partner labs.
SAFL's India AI Tracker — 50+ stakeholders, used by international orgs as a reference.
Secure AI Futures Lab — co-founded org for safe and trusted AI in India + APAC.
Measuremint, Singapore AI Safety Hub, Budhimaan Baccha → RLHF, 80K Hours advising, and more.
Mostly a curious person who wandered into AI talent work and stayed because the problems kept getting more interesting. I source researchers and engineers for teams building reliable AI systems: start with the talent map, build the pipeline, calibrate the bar, close the offer. I also run the ops and the distributed team behind it end-to-end.
When I'm not buried in candidate spreadsheets or tuning an LLM's scoring rubrics, I'm probably lurking on Hacker News, poking at a side project that won't ship, or out on a walk with my dogs Wolfie and co — I document their adventures over at @thebarkitects_.
Founded and scaled India's first cohort-based AI safety fellowships — AISCF at IIT-Delhi (900+ EOIs, 22 completers, 5 PhD applicants, 1 SERI MATS facilitator) and the earlier Alignment Research Fellowship (600+ applicants across 40 STEM universities, 24 selected). Currently taking AISCF to IIT-Bombay and onward to the rest of the top 5 IITs and IISc.
Investigations, advisory memos, and theory-of-change work that have shaped funder decisions on India AI safety. The four-pathway EAIF proposal became the strategic foundation for AISCF and SAFL. Ongoing advisory work with international funders evaluating LMIC AI safety opportunities.
Convening government, academia, and industry leaders through flagship summits, panels, and capacity-building programs in India and abroad. Most recently: SAFL × India AI Impact Summit 2026 with a Hardware-Rooted Sovereignty Workshop alongside Prof. Stuart Russell and India's IT Secretary S Krishnan — see events.
Co-founded SAFL with $250K+ via Schmidt Sciences and the AI Security Tactical Opportunities Fund. India + APAC focus on safe and trusted AI. Operating the India AI Tracker (50+ stakeholders), partnering with IITs and IISc on AI for Science, Social Good, and Trustworthy AI, and convening cross-sector stakeholders around technical AI safety.
At SteadRise: a 50K-profile ML research talent map, a 4,200-candidate funnel across 13 partner labs (FAR.AI, UK AISI, Apollo Research and others), and the evaluation infrastructure (LLM-graded binary-signal scoring, JD fingerprinting across 10 skill dimensions) that scales candidate triage. Talent work as the operational arm of the fieldbuilding mission.
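A minimal sketch of what binary-signal scoring against a JD fingerprint might look like. The dimension names, the flat weighting, and the scoring rule are all assumptions for illustration, not SteadRise's actual rubric: a job description is fingerprinted as a set of required skill dimensions, an LLM grades each candidate with a yes/no signal per dimension, and the score is the fraction of required signals present.

```python
# Illustrative skill dimensions for a JD fingerprint.
# The real system uses 10 dimensions; these names are assumptions.
JD_DIMENSIONS = [
    "ml_research", "interpretability", "rl", "evals", "python",
    "distributed_training", "publications", "open_source",
    "mentoring", "safety_motivation",
]

def score(jd_fingerprint, candidate_signals):
    """Score a candidate against a JD fingerprint.

    Each dimension carries a binary signal (present/absent), e.g. as
    graded by an LLM from the candidate's materials. The score is the
    fraction of JD-required dimensions the candidate exhibits.
    """
    required = [d for d in JD_DIMENSIONS if jd_fingerprint.get(d)]
    if not required:
        return 0.0
    hits = sum(1 for d in required if candidate_signals.get(d))
    return hits / len(required)

# Hypothetical JD requiring 4 dimensions; candidate shows 3 of them.
jd = {"ml_research": True, "evals": True, "python": True,
      "safety_motivation": True}
candidate = {"ml_research": True, "python": True,
             "safety_motivation": True, "rl": True}
print(score(jd, candidate))  # 0.75
```

Binary signals keep LLM grading honest: a yes/no question per dimension is easier to calibrate and audit than a free-form 1–10 rating, and the per-dimension verdicts make triage decisions explainable.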
Measuremint — a voice-first AI career agent (ElevenLabs + Claude) testing whether AI-native recruiting can clear high-volume Indian markets of 2K–20K applicants per role. Earlier: Budhimaan Baccha digital literacy → RLHF pivot, Singapore AI Safety Hub exploration, and 80,000 Hours advising.
Upcoming
Jul 6–11, 2026
International Conference on Machine Learning
Seoul, South Korea
Aug 15–21, 2026
International Joint Conference on Artificial Intelligence
Bremen, Germany
Dec 6–12, 2026
Conference on Neural Information Processing Systems
Sydney, Australia

Some of my Collaborators