AI PhD Student Researcher - Fall 2026
About Handshake
Handshake was founded on a simple belief: everyone deserves a path to a great career, regardless of where they went to school or who they know. Today, we power 25 million job seekers, 1 million+ employers, and 1,600 educational institutions.
In 2025, we started Handshake AI and built the fastest-growing AI data business in history. We work directly with frontier AI lab researchers to create evaluations, publish benchmarks, and push the boundary of data. We’ve grown from $0 to ~$1B run rate and pay ~$60M to over 30K individuals every month.
What We Offer
Human data is core infrastructure for AI advancement. Frontier AI labs currently improve model capabilities with data-intensive post-training techniques. We believe spending on AI training data will grow 3-5x over the next few years and continue growing for much longer as models take on new domains. Handshake AI supports all of the frontier AI labs, working on their most complex data at the largest scale.
About the Role
Handshake AI builds the data engines that power the next generation of large language models. Our research team works at the intersection of cutting-edge model post-training, rigorous evaluation, and data efficiency. Join us for a Student Researcher engagement this fall, where your work can ship directly into our production stack and become a publishable research contribution. The role can be full time in person in San Francisco, or potentially part time remote. The target window is September through December 2026. Research areas include:
- LLM Post-Training: Novel RLHF / GRPO pipelines, instruction-following refinements, reasoning-trace supervision.
- LLM Evaluation: New multilingual, long-horizon, or domain-specific benchmarks; automatic vs. human preference studies; robustness diagnostics.
- Data Efficiency: Active-learning loops, data value estimation, synthetic data generation, and low-resource fine-tuning strategies.
Each intern owns a scoped research project, mentored by a senior scientist, with the explicit goal of an arXiv-ready manuscript or top-tier conference submission.
- Current PhD student in CS, ML, NLP, or a related field.
- Publication track record at top venues (NeurIPS, ICML, ACL, EMNLP, ICLR, etc.).
- Hands-on experience training and experimenting with LLMs (e.g., PyTorch, JAX, DeepSpeed, distributed training stacks).
- Strong empirical rigor and a passion for open-ended AI questions.
- Prior work on RLHF, evaluation tooling, or data selection methods.
- Contributions to open-source LLM frameworks.
- Public speaking or teaching experience (we often host internal reading groups).
Location & Eligibility
Listing Details
- Posted: April 8, 2026
- First seen: May 6, 2026
- Last seen: May 8, 2026