Foundation Model DevOps Engineer
USD 150,000–350,000/yr
About the Institute of Foundation Models
We are a dedicated research lab for building, understanding, using, and risk-managing foundation models. Our mandate is to advance research, nurture the next generation of AI builders, and drive transformative contributions to a knowledge-driven economy.
As part of our team, you’ll have the opportunity to work on the core of cutting-edge foundation model training, alongside world-class researchers, data scientists, and engineers, tackling the most fundamental and impactful challenges in AI development. You will participate in the development of groundbreaking AI solutions that have the potential to reshape entire industries. Strategic and innovative problem-solving skills will be instrumental in establishing MBZUAI as a global hub for high-performance computing in deep learning, driving impactful discoveries that inspire the next generation of AI pioneers.
The Role
We are seeking a Foundation Model DevOps Engineer focused on Operational Stability to serve as the backbone of our AI research infrastructure.
You will design the friction-free environment that allows our models to be built. Your mandate is to build the tooling, release pipelines, and storage policies that remove drag from our research team. You will own the "foundational layer," ensuring that our researchers have immediate, secure, and reliable access to the tools, data, and compute they need.
Key Responsibilities
Model Release Engineering
· High-Fidelity Release Management: You own the standard of our public presence. You ensure that every release (weights, code, training logs, data) is reproducible, meticulously documented, and packaged with the polish of a top-tier open-source product.
· CI/CD for Research: Design and implement pipelines that automate the testing and packaging of complex model releases, moving us away from manual handovers to automated verification.
· Repo Administration: Administer the organization’s GitHub Enterprise account, ensuring branch protection and clean versioning practices are enforced across the lab.
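The automated-verification step above can be sketched as a small checksum pass over release artifacts. This is a minimal illustration, not an existing lab tool: the manifest format (artifact name → expected SHA-256) and the `verify_release` helper are assumptions for the example.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large weight shards never sit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_release(manifest: dict[str, str], root: Path) -> list[str]:
    """Return the names of artifacts that are missing or whose hash does not
    match the manifest; an empty list means the release packaging is intact."""
    failures = []
    for name, expected in manifest.items():
        artifact = root / name
        if not artifact.is_file() or sha256_of(artifact) != expected:
            failures.append(name)
    return failures
```

In a real pipeline, a check like this would run as a CI gate before anything is pushed to a public hub, so a broken or half-uploaded artifact fails the build rather than the release.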
Resource Management & Infrastructure Efficiency
· Compute Governance: Manage the efficiency of our large-scale GPU resources. You track utilization to identify idle nodes, "zombie jobs," or inefficient scheduling, ensuring we extract maximum value from our compute clusters.
· Storage Strategy & Hygiene: Manage the lifecycle of petabyte-scale datasets and checkpoint storage. You implement intelligent aging policies to solve the "disk full" bottleneck without risking critical data loss.
· Quota & Access Logic: Proactively manage storage and compute quotas across research teams to prevent resource contention before it blocks a training run.
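The "intelligent aging policy" mentioned above can be sketched as a pure retention rule: keep the most recent checkpoints (for resuming training) plus periodic long-term anchors (for analysis), and mark everything else deletable. The `keep_last`/`keep_every` knobs are hypothetical parameters for illustration, not a stated lab policy.

```python
def checkpoints_to_delete(steps: list[int], keep_last: int, keep_every: int) -> list[int]:
    """Given checkpoint step numbers, keep the `keep_last` most recent
    checkpoints plus every `keep_every`-th step as a long-term anchor;
    return the steps that are safe to delete."""
    steps = sorted(steps)
    recent = set(steps[-keep_last:]) if keep_last else set()
    anchors = {s for s in steps if s % keep_every == 0}
    return [s for s in steps if s not in recent and s not in anchors]
```

Separating the policy (a pure function) from the deletion itself makes the rule easy to dry-run against a live checkpoint directory before any data is actually removed.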
Research Tooling & Orchestration
· Experiment Management Systems: Build and maintain the internal CLI tools and dashboards that allow researchers to launch, track, and organize jobs across thousands of GPUs.
· Resource Telemetry: Set up real-time monitoring for interconnect throughput, GPU memory, and file system latency to catch performance degradation instantly.
· Job Orchestration: Work closely with infrastructure teams to optimize how we run synthetic data pipelines and large-scale evaluations, ensuring our tooling scales with our compute.
Research Environment Provisioning
· Automated Workspace Setup: Build the scripts and tooling that instantly provision compute environments, permissions, and storage namespaces for researchers (automating away the manual work).
· Cluster Access Architecture: Streamline SSH and node access protocols to ensure friction-free entry to our massive-scale compute clusters while maintaining security boundaries.
Academic Qualifications
A bachelor’s degree in Computer Science, Information Technology, or a related field, or equivalent practical experience.
Professional Experience - Minimum (The Bar)
· 3+ years of experience in DevOps, Release Engineering, or ML Engineering, specifically within AI/ML or HPC environments.
· Foundation Model Fluency: You understand the lifecycle of training large models (LLMs or diffusion models). You know what a checkpoint is, you understand the difference between pre-training and inference, and you are familiar with the artifacts required for a model release.
· Linux/Unix Fluency: You live in the command line. You have deep expertise in bash scripting, file system permissions, and SSH configuration.
· Version Control Admin: Expert-level administration of GitHub Enterprise (managing teams, API limits, and repository security).
· Scripting & Automation: Proficiency in Python or Bash to automate repetitive administrative tasks.
Professional Experience - Preferred (The Fit)
· "Gold Standard" Open Source: Experience contributing to or managing high-profile open-source releases (Hugging Face libraries, model families, datasets).
· HPC Schedulers: Deep understanding of Slurm job scheduling and troubleshooting.
· Cloud Storage: Familiarity with cloud object storage (e.g., Amazon S3, Google Cloud Storage) and efficient data transfer tools.
Listing Details
- Posted: January 16, 2026
- First seen: March 26, 2026
- Last seen: April 24, 2026