Center for AI Safety · posted 43 months ago
USD 140,000–200,000 / yr

Research Engineer

San Francisco, United States · Full-Time · Mid-level
Research Engineer · AI Research Engineer · Data & AI

Overview

The Center for AI Safety (CAIS) is a leading research and advocacy organization focused on mitigating societal-scale risks from AI. Together with our sister organization, the Center for AI Safety Action Fund, we address AI's toughest challenges through technical research, field-building initiatives, and policy engagement.
 
As a Research Engineer here, you'll work at the intersection of cutting-edge ML research and reliable engineering. You'll design and run experiments on large language models, build the tooling needed to train and evaluate models at scale, and turn results into publishable research. You'll collaborate closely with CAIS researchers and external academic and commercial partners, using our compute cluster to run large-scale training and evaluation. The work spans areas like AI honesty, robustness, transparency, and trojan/backdoor behaviors, all aimed at reducing real-world risks from advanced AI systems.

In this role, you will:
  • Own end-to-end research experiments.
  • Train and fine-tune large transformer models across domains.
  • Build and maintain datasets and benchmarks.
  • Run distributed training and evaluation at scale.
  • Write and ship research, collaborate with co-authors, and support paper submissions to top conferences.
  • Collaborate with researchers and external partners, contributing to the shared research direction and iterating quickly within research cycles.
  • Support research infrastructure as needed, such as internal tooling, documentation, and reproducibility practices for the team.

We're looking for people who:
  • Are current PhD students or researchers in machine learning or a related field. Exceptional candidates with a strong publication record may be considered regardless of degree level.
  • Have co-authored at least one paper published at a top ML conference venue (e.g., NeurIPS, ICML, ICLR, ACL, CVPR). Workshop papers are considered, though peer-reviewed conference publications are strongly preferred. Publications in journals from publishers such as IEEE or Springer Nature are typically given less weight.
  • Have a track record of empirical research in AI or ML, particularly in AI safety-relevant areas (e.g. adversarial robustness, calibration, benchmarking). We weight empirical research heavily; candidates with primarily theoretical backgrounds are generally not a strong fit.
  • Alternatively, have made meaningful research contributions at a leading AI lab.
  • Are able to read an ML paper, understand its key result, and see how it fits into the broader literature.
  • Are comfortable setting up, launching, and debugging ML experiments.
  • Are familiar with relevant frameworks and libraries (e.g., PyTorch).
  • Communicate clearly and promptly with teammates.
  • Take ownership of your individual part in a project.
Listing Details

Posted: October 7, 2022
First seen: March 26, 2026
Last seen: April 21, 2026

Posting Health

Days active: 26
Repost count: 0
Trust level: 34%
Scored at: April 21, 2026

Signal breakdown: freshness, source trust, content trust, employer trust
