$151,800 – $189,000/yr

Grantmakers & Senior Generalists, Global Catastrophic Risks

Remote - Global · Full-time · Senior

Overview

About Coefficient Giving

Technical Tools
cybersecurity, machine-learning

Coefficient Giving (formerly Open Philanthropy) is a philanthropic funder and advisor. Since 2014, we’ve directed over $5 billion in grants as part of our mission to help others as much as we can with the resources available to us. We work with a range of donors who share our commitment to cost-effective, high-impact giving. Our current funds include Science and Global Health R&D, Navigating Transformative AI, Biosecurity & Pandemic Preparedness, Abundance & Growth, Farm Animal Welfare, and more. In 2025, we recommended more than $1 billion to high-impact causes.

We’re proud of our track record:

  • We jump-started the field of AI safety and security and have played a vital role in addressing other existential threats, such as mirror bacteria.

  • Our grants to evidence-backed global health programs have saved over 100,000 lives, and our farm animal welfare grants have improved the lives of over 3 billion animals.

  • We supported late-stage clinical trials for the R21 malaria vaccine, now being scaled to protect millions of kids globally.

  • We were the earliest major funder of the YIMBY movement to build more housing. Our grantees have led the charge on major wins like City of Yes in New York and SB 79 in California, which will enable hundreds of thousands of new housing units.

Coefficient Giving’s Global Catastrophic Risks (GCR) division houses our teams working on Navigating Transformative AI (spanning technical AI safety research, AI governance and policy, capacity building, and short-timelines special projects) and Biosecurity and Pandemic Preparedness. The GCR team expects to move around $1 billion in grants across these funds in 2026, and we expect this figure to grow significantly in the coming years. A core premise of our work is that if a global catastrophe caused by transformative AI or biotechnology could be prevented by funding but isn't, we consider that our responsibility.

Responsibilities

  • Put our mission first, and act with urgency to help us realize our ambitious goals.

  • Work to model our operating values of ownership, openness, calibration, and inclusiveness.

The ideal candidates for these positions will possess many of the skills and experiences described above and in the role-specific sections below. We don’t require candidates to meet all these criteria, and firmly believe there is no such thing as a “perfect” candidate. If you are on the fence about applying because you are unsure whether you are qualified, we strongly encourage you to apply.

The AI Governance & Policy (AIGP) team, led by Luke Muehlhauser, funds work to shape the norms, policies, laws, and institutions that govern how the most capable AI systems are developed and deployed. In 2025, the AIGP team moved over $140 million to impactful organizations and projects, making it one of the largest philanthropic funders of frontier AI policy. Its work includes AI governance research to improve our collective understanding of how to achieve effective AI governance, and AI governance policy and practice to improve the likelihood that good ideas are actually implemented by companies, governments, and other actors. Our portfolio spans U.S. and international policy research and advocacy, field-building, fundamental strategic research, technical governance (e.g. evaluations, verification mechanisms), and more.

Compared to the other teams hiring in this round, AIGP grantmaking tends to require a strong understanding of key institutions like frontier AI companies, governments, and international bodies, and of the political dynamics within them. Familiarity with relevant technical domains like machine learning or hardware can be valuable, as can knowledge of policy specifics.

While we’re open to hiring talented generalists, the AIGP team is particularly interested in people with:

  • Prior experience in policy, for example in a government role or at a think tank. Experience in U.S. policy would be particularly valuable.

  • A technical background relevant to AI safety (e.g. in ML engineering or frontier AI hardware).

  • Experience working on AI governance or policy at a frontier AI company.

We are also hiring for one or more grantmakers focused specifically on grantmaking for U.S. policy. We're open to people from a wide variety of backgrounds, though we have a strong preference for candidates based in Washington, D.C.

Specific profiles we're interested in include:

  • A technical AI governance specialist who can bridge deep knowledge of AI capabilities and risks with the D.C. policymaking world.

  • A coalition-building expert with knowledge of the political economy of AI who can map how political coalitions form and shift, and invest in building durable cross-partisan support for AI governance.

  • A political and campaign strategist who has experience running or funding issue-area advocacy.

We are also hiring for a China specialist to help us find ways to support Chinese contributions to AI safety and useful cooperation between China and the West, including through forums like the International Dialogues on AI Safety. Applicants for this role should be fluent in Mandarin and have a strong educational and/or professional background in China studies, particularly Chinese politics, policy, business, economics, and/or recent history.

We are also hiring for an AI information security grantmaker. Our work on "AI infosec" includes safeguarding model weights and algorithmic breakthroughs, preventing system poisoning or sabotage, securing training data and compute resources, addressing vulnerabilities across the full machine learning supply chain (from compute resources to MLOps), enabling secure third-party access for audits and evaluations, and ensuring high security standards for other governance-enabling techniques (e.g. international agreement verification mechanisms).

This portfolio will likely span technical research, policy development, field-building initiatives, and ecosystem support. Our previous grants have supported RAND's Meselson Center (which authored Securing Model Weights), security fieldbuilding projects such as Heron, and benchmarks like Cybench, CVEbench, and BountyBench.

We are also hiring for a Chief of Staff, who will serve as a close thought partner to Luke Muehlhauser, produce nuanced recommendations for senior leadership, and design organizational infrastructure for the rapidly growing AIGP team. This is a senior generalist role, not a grantmaking role.

The desired traits for this role will be similar to those for the grantmaker positions, but the responsibilities will differ, and will likely include:

  • Acting as a force multiplier for Luke by helping him focus on AIGP’s top priorities, advising him on critical decisions, and owning initial drafts or full segments of his portfolio.

  • Project managing large initiatives across AI teams, such as strategy refreshes, public communications, and process updates that require significant stakeholder coordination.

  • Working closely with Luke to shape the strategy and structure of the AIGP team, including by identifying new priority areas and deciding who should own them.

  • Reviewing internal processes to accelerate the team’s grantmaking and raise the ambition and impact of our grantees.

  • Designing and leading hiring rounds end-to-end, from crafting the job description to recommending the final decision.

  • Managing grantmakers and other staff at various levels of seniority.

We are open to a wide range of experience levels, from early-career to very senior, and the shape of the role will be tailored to the successful candidate. Strong candidates will combine sharp strategic judgment, a clear track record of ownership and execution, and the ability to effectively support teams as they scale. There is no location requirement for this role.

The Biosecurity and Pandemic Preparedness (BPP) team, led by Andrew Snyder-Beattie, funds work to reduce the risk of catastrophic biological events, particularly those arising from the deliberate misuse of biotechnology. BPP’s work focuses on prevention (including biosecurity policy and governance, and safeguards to mitigate AI-enabled bio risks), response in the event of a catastrophe (especially personal protective equipment, biohardening, and detection), as well as field-building to support the broader ecosystem working on these problems.

Compared to the other teams hiring in this round, BPP focuses specifically on catastrophic biological threats, while still supporting a wide range of approaches within that space. The team’s work is often entrepreneurial, with a strong emphasis on identifying gaps and helping to launch new organizations or initiatives to address them. While familiarity with biosecurity, public health, or biotechnology can be helpful, it is not required, and many of BPP’s most effective grantmakers come from non-bio backgrounds. That said, for some areas of work, particularly at the intersection of AI and biology, relevant technical or domain expertise can be a significant asset.

For this round, BPP is particularly interested in hiring individuals to work at the intersection of AI and biology or biosecurity field-building.

We are also hiring a Chief of Staff, who will serve as a close thought partner to Andrew Snyder-Beattie, produce recommendations for senior leadership, and design organizational infrastructure for the rapidly growing BPP team. This is a senior generalist role, not a grantmaking role.

The desired traits for this role will be similar to those for the grantmaker positions, but the responsibilities will differ, and will likely include:

  • Acting as a force multiplier for Andrew by helping him focus on BPP's top priorities, advising him on critical decisions, and owning key workstreams on his behalf.

  • Project managing large initiatives across the team, such as process updates, public communications, and cross-team coordination.

  • Working closely with Andrew to shape the strategy and structure of the BPP team, including by identifying new priority areas and deciding who should own them.

  • Reviewing internal processes to accelerate the team's grantmaking and raise the ambition and impact of our grantees.

  • Designing and leading hiring rounds end-to-end, from crafting the job description to recommending the final decision.

  • Managing grantmakers and other staff at various levels of seniority.

  • Ensuring Andrew can focus on the areas of highest impact by ruthlessly managing priorities, delegating and owning tasks, and serving as a sounding board for the toughest problems.

Over time, exceptional hires could grow in various directions, including taking on a more senior operational leadership role while managing a number of staff, transitioning into senior grantmaking, leading ambitious special projects with a high degree of autonomy, and/or co-founding a spinout organization.

We are open to a wide range of experience levels, from early-career to very senior, and the shape of the role will be tailored to the successful candidate. A biology or biosecurity background is not required. Strong candidates will combine sharp strategic judgment, a clear track record of ownership and execution, high emotional intelligence, and the ability to effectively support teams as they scale.

Additional Information

U.S.-based staff are typically employed by Coefficient Giving LLC, which is not a 501(c)(3) tax-exempt organization. As such, this role is unlikely to be eligible for public service loan forgiveness programs.

We may use AI to assist in the initial screening of applications, including to detect whether candidates have used AI models in drafting their application. Decisions are always made by a human on our team.

If you have any questions about our use of AI tools, you can email jobs@coefficientgiving.org.

Location & Eligibility

Location: Worldwide (fully remote, anywhere in the world)
Eligibility: Open to applicants worldwide

Listing Details

Posted: April 29, 2026
First seen: May 6, 2026
Last seen: May 7, 2026

