Technical Project Manager (Red Team)
About FAR.AI
FAR.AI is a non-profit AI research institute working to ensure advanced AI is safe and beneficial for everyone. Our mission is to facilitate breakthrough AI safety research, advance global understanding of AI risks and solutions, and foster a coordinated global response.
Since our founding in July 2022, we've grown to 45+ staff, published 40+ academic papers, and convened leading AI safety events. Our work is recognized globally, with publications at premier venues such as NeurIPS, ICML, and ICLR, and coverage in the Financial Times, Nature News, and MIT Technology Review. We conduct pre-deployment testing on behalf of frontier developers such as OpenAI and independent evaluations for governments including the EU AI Office. We help steer and grow the AI safety field by developing research roadmaps with renowned researchers such as Yoshua Bengio; running FAR.Labs, an AI safety-focused co-working space in Berkeley housing 40 members; and supporting the community through targeted grants to technical researchers.
FAR.AI's red team is building toward a simple outcome: materially raising the bar for the safety and security of the most widely deployed and capable AI systems in the world. We intend to be the tip of the spear in AI safety: the team that consistently finds the failures others miss, drives real mitigations, and sets the standard that labs and governments converge on. We also leverage our in-depth understanding of weaknesses in frontier models to advise frontier developers on mitigations, to guide our own research and grantmaking on model security, and to inform the public of key AI risks.
We are already one of the leading independent red-teaming organizations. Our work has helped most Western frontier model developers improve safeguards through pre- and post-deployment testing; for example, we have directly influenced safeguards at major frontier developers like OpenAI and Anthropic. Our work also supports high-leverage government efforts, such as leading a consortium building CBRN evaluations for the European Commission/EU AI Office and collaborating with the UK AI Security Institute.
In 2026, we are scaling from a strong team with standout wins to a level of impact unmatched by any AI red team globally:
Red-teaming all major frontier model releases (closed and open-weight) within days/weeks of release;
Expanding strategic engagements with governments and conducting pre-deployment testing with most frontier labs;
Deepening our testing of key risk areas like CBRN, cyber, and agents, and exploring new ones like AI control and alignment;
Building tools, agents, and insights that raise the global standard for red-teaming.
About the Role
FAR.AI is hiring a Technical Project Manager to be the delivery backbone of one of the world's most impactful frontier AI red-teaming programmes. You will own the delivery of some of our highest-stakes engagements with governments and frontier AI companies, support technical engagements and outcomes, assist our red-team recruiting, and be the operational glue that lets our red team succeed.
This is a force-multiplier role. You will report to Edward Yee (Head of Growth & Strategy, who also leads the red team), with a dotted line to Kellin Pelrine (co-lead and technical lead of the red team). You will work alongside our red-teamers, researchers, and the rest of the organization to turn a fast-growing team and portfolio of engagements into a reliable, high-velocity delivery operation.
The red team has ambitious goals and is evolving rapidly, and we expect this role to evolve with the team's priorities. The team is scaling from 2 to 15+ this year, with candidate profiles that require active sourcing and careful shepherding to hire the very best talent. Our engagement portfolio across frontier labs and governments is already complex and will grow further. You will be moderately technical: comfortable enough with red-teaming substance to read a report, engage credibly with technical colleagues, do technical writing and reviews, and understand what we are hiring for. You do not need to jailbreak models yourself, but you should be able to meet our red-teamers on their terms. Our best guess for the 2026 shape of the role is below; you are likely a strong fit if you:
Have shipped complex, multi-stakeholder technical projects on real deadlines, ideally involving government counterparts or frontier technology.
Enjoy recruiting and see it as core strategic work, not a box to tick. You get energy from finding the right person, pulling them through a process, and getting them to say yes.
Are motivated by impact over recognition, publishing papers, or building a personal policy brand.
Are excited to be a force multiplier for one of the most impactful teams in the world.
Have low ego and the drive to do whatever work most advances the team's goals, even when it's behind the scenes, involves demanding tasks and schedules, or falls outside a narrow role definition.
Are comfortable being moderately technical.
Enjoy moving fast in ambiguous environments where priorities shift and resourcefulness matters more than process.
Take ownership of outcomes, not tasks — you ask whether the thing we care about is actually going to happen, and if not, you act.
This role is probably not for you if you:
Prefer to write specs and hand them to engineers — this role requires you to be close to the work.
See recruiting as administrative overhead rather than strategic work.
Need a highly structured environment with stable problem definitions and clear playbooks.
Are motivated mainly by compensation, title, or visible authority over senior relationships.
Are uncomfortable working closely with a technical team and engaging with technical material.
Are not willing to be relentless.
Strong candidates typically have many (but not all) of the following:
Substantial programme or engagement management experience (5+ years) in a high-velocity technical environment (frontier labs, AI safety organisations, AISIs, technical consultancies, government programme offices, or scaling technical startups).
A track record of delivering complex multi-party programmes to hard deadlines, with clear evidence of the judgment calls you made and the trade-offs you owned.
Comfort with the technical substance of our work. You do not need to be a researcher or engineer, but you should be able to read technical reports (red-teaming, security/vulnerability assessments, etc.), form a view on what matters, engage credibly with our technical team, and be curious enough to dig in when it's relevant.
Experience running or materially contributing to technical recruiting — sourcing, pipeline management, work-trial design, or end-to-end hiring. You don't need to have been a full-time recruiter, but you should have shown you can own hiring outcomes.
Experience drafting proposals, responses to RFPs, or similar written artefacts for government or frontier technology counterparts.
Strong written communication — the ability to write status updates, risk memos, outreach to candidates, and external briefs with minimal editing.
Demonstrated ability to operate with significant autonomy and good judgment in high-stakes settings.
It is a plus (but not required) if you have:
Familiarity with AI safety as a field — the risk models, the landscape of labs and institutes, and the case for independent testing.
Experience working with governments, frontier AI companies, or AI Safety organisations.
A technical background in ML, cybersecurity, software engineering, or a related field.
Previous exposure to AI evaluations, red-teaming, or AI governance work.
Experience running grantmaking or RFP processes.
Location & Eligibility
If based in the USA or Singapore, you will be an employee of FAR.AI (a 501(c)(3) research non-profit in the USA; a non-profit CLG in Singapore). Outside the USA or Singapore, you will be employed on behalf of FAR.AI via an employer-of-record (EOR) organisation under an employment contract, or engaged as a contractor.
Location: Remote globally. We can sponsor US or Singapore visas.
Hours: Full-time. Expect up to one trip per month for convenings, government meetings, or team gatherings.
Compensation: USD 125,000–190,000, depending on experience. Exceptional candidates may be offered more.
We know these roles are rare and the skill combination is unusual. If you're uncertain whether your background fits but are excited by the mission and challenges, we encourage you to apply – we're looking for excellence and potential, not a perfect resume match.
Posted: May 14, 2026