
Anthropic AI Security Fellow
Quick Summary
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
We believe we are at an inflection point for AI’s impact on cybersecurity. Models are now useful for cybersecurity tasks in practice: for example, Claude can outperform human teams in some cybersecurity competitions and help us discover vulnerabilities in our own code.
We are looking for researchers and engineers to help us accelerate defensive use of AI to secure code and infrastructure.
The Anthropic Fellows Program is designed to accelerate AI security and safety research and to foster research talent. We provide funding and mentorship to promising technical talent, regardless of previous experience, to research the frontier of AI security and safety for four months.
Fellows will primarily use external infrastructure (e.g. open-source models, public APIs) to work on an empirical project aligned with our research priorities, with the goal of producing a public output (e.g. a paper submission). In our previous cohorts, over 80% of fellows produced papers (more below).
We run multiple cohorts of Fellows each year. This application is for cohorts starting in July 2026 and beyond.
Fellows receive:
- Direct mentorship from Anthropic researchers
- Access to a shared workspace (in either Berkeley, California or London, UK)
- Connection to the broader AI safety research community
- Weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD & access to benefits (benefits vary by country)
- Funding for compute (~$15k/month) and other research expenses
Fellows will undergo a project selection & mentor matching process. Potential mentors include:
- Nicholas Carlini
- Keri Warr
- Evyatar Ben Asher
- Keane Lucas
- Newton Cheng
On our Alignment Science and Frontier Red Team blogs, you can read about some past Fellows’ projects, including:
- AI agents find $4.6M in blockchain smart contract exploits: Winnie Xiao and Cole Killian, mentored by Nicholas Carlini and Alwin Peng
- Strengthening Red Teams: A Modular Scaffold for Control Evaluations: Chloe Loughridge et al., mentored by Jon Kutasov and Joe Benton
You may be a good fit if you:
- Are motivated by reducing catastrophic risks from advanced AI systems
- Are excited to transition into full-time empirical AI safety research and would be interested in a full-time role at Anthropic
Requirements
Important: Below are Anthropic's policies for full-time roles. Please note that these expectations (regarding visas and location) do not apply to the Fellows program.
- Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
- Required field of study: A field relevant to the role, as demonstrated through coursework, training, or professional experience
- Minimum years of experience: Varies with the internal job level requirements for the position
- Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
- Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
Anthropic is an AI safety and research company dedicated to building reliable, interpretable, and steerable artificial intelligence systems. Founded by former OpenAI members, the company develops the Claude family of large language models with a primary focus on ensuring AI's long-term benefit to humanity.