Anthropic · 15 days ago
USD 210,000–290,000/yr

Technical Program Management, Alignment

Other · Program Management

Technical Tools
Other · Program Management

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

The Alignment Special Ops team identifies and executes some of the most neglected, high-leverage projects across Anthropic’s Alignment org and beyond. We’re a small team with a broad mandate, and our work takes us across the entire company (and often, the broader safety research ecosystem). You will accelerate technical research, incubate new research efforts, and drive high-priority initiatives that don’t have a natural home elsewhere (e.g., the Anthropic Fellows Program).

About the Role


You’ll own 3–4 special projects at a time. These are generally ambiguous, cross-functional problems that need someone to define the goal and approach, build the plan, coordinate the team, and drive to a result.

This role is in-person in San Francisco, CA.

Responsibilities


The team’s project list changes often, and some work is confidential. The list below is representative of the kind of work you’d do:

  • Scope, plan, and drive model evaluation projects end-to-end: understand the goal with researchers, coordinate contributors across teams & externally, staff efforts, and produce deliverables on a tight deadline

  • Manage external research collaborators—onboarding, expectation-setting, contracts, and handling edge cases as they arise

  • Synthesize complex information into decision-relevant inputs for leadership so they can move quickly

  • Identify new projects the company should take on, make the case in writing, get buy-in, and execute

  • Run the Alignment team offsite and similar events, including managing logistics, agendas, and delegation

You may be a good fit if you:

  • Have 5+ years in chief-of-staff, program management, operations, or similar roles in a research, technical, or fast-moving environment (e.g., consulting, startups)

  • Can take a loosely-scoped problem, define a goal, break it into concrete steps, and execute without waiting for direction

  • Have built and managed teams, programs, or functions from scratch

  • Write clearly and concisely—you default to a one-page doc over a five-page one

  • Are comfortable making decisions with incomplete information

  • Can hold the details of multiple workstreams simultaneously while context-switching between them

  • Are deeply motivated by Anthropic’s mission of ensuring the world safely manages the transition through transformative AI

Strong candidates may also have:

  • Experience working directly with researchers, especially in AI safety or machine learning

  • Familiarity with the AI safety research landscape, key organizations, and ongoing debates

The annual compensation range for this role is listed below. 

For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.

Annual Salary:
$210,000–$290,000 USD

Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience

Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience

Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed.  Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.

Location & Eligibility

Where is the job: San Francisco, United States (on-site at the office)
Who can apply: US
Listed under: United States

Listing Details

Posted: April 14, 2026
First seen: April 14, 2026
Last seen: April 29, 2026

Posting Health

Days active: 15
Repost count: 0
Trust level: 47%
Scored at: April 29, 2026

Signal breakdown: freshness, source trust, content trust, employer trust

Anthropic
Source: greenhouse

Anthropic is an AI safety and research company dedicated to building reliable, interpretable, and steerable artificial intelligence systems. Founded by former OpenAI members, the company develops the Claude family of large language models with a primary focus on ensuring AI's long-term benefit to humanity.

Employees: 3k+
Founded: 2021
