Principal Engineer, Inference Cloud

United States · Sunnyvale, CA or Toronto, Canada
Engineering · Data Science · Management

Overview

Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.  

Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of compute capacity, transforming key workloads with ultra-high-speed inference. 

Thanks to this groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest generative AI inference in the world: over 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude speedup is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence through additional agentic computation.

Location: Sunnyvale 

We're hiring a Principal Engineer for our Inference Cloud Platform. This team owns the cloud layer behind our Inference Service, including availability, latency, reliability, and multi-region scale. 

This is one of the most senior IC roles on the team, for someone who can identify the highest-leverage platform problems, set direction across multiple teams, define long-term architecture, and write production code on critical paths. 

Many of the key decisions are ambiguous at the outset; you’ll need to frame the problem, make tradeoffs, and drive execution without a clear spec. 

The scope includes multi-region traffic architecture, graceful degradation under bursty AI workloads, high-QPS performance, and the operating model for a platform that must remain fast and available under changing demand. You'll partner closely with ML, Product, and Infrastructure teams. 

Responsibilities 

  • Problem Definition & Prioritization. Identify the most important technical problems for the platform, often before there's a clear ask. Make explicit tradeoff decisions about what the platform will and won't support, with reasoning that holds up under scrutiny from senior engineering leadership. 
  • Platform Direction. Set the long-term technical direction for the Inference Cloud Platform, including multi-region topology, failure domains, service boundaries, and system evolution over time. 
  • Reliability & Performance. Architect active-active systems with rapid failover and graceful degradation (circuit breaking, backpressure, load shedding) with clear SLOs. Drive improvements in latency, throughput, capacity efficiency, and resilience under unpredictable demand. 
  • Code & Design Reviews. Contribute production code in critical paths, review designs and implementations, and make architectural decisions including build-vs-buy tradeoffs with long-term operational consequences. 
  • Production Leadership. Lead on the hardest production issues and cross-system bottlenecks. Drive observability, incident response, capacity planning, and post-incident improvement with a high standard for operational rigor. 
  • Technical Strategy Beyond Your Team. Drive platform-wide decisions across adjacent teams on reliability, API design, capacity planning, and deployment strategy through strong technical judgment. Translate product and business requirements into scalable system designs and drive alignment on shared infrastructure decisions. 
  • Mentorship. Raise the quality of technical decision-making across teams through design feedback, pairing, and clear engineering standards. 
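To make the graceful-degradation vocabulary above concrete: the sketch below is a minimal, hypothetical circuit breaker of the kind a platform like this might place in front of a backend (it is an illustration under assumed parameters, not Cerebras code). After a threshold of consecutive failures it "opens" and sheds requests, then admits a trial request after a cooldown (the half-open state).

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker: trips open after consecutive failures,
    then allows a trial request once a cooldown elapses (half-open)."""

    def __init__(self, failure_threshold=5, cooldown_s=30.0):
        self.failure_threshold = failure_threshold
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at = None  # None means the breaker is closed

    def allow(self, now=None):
        """Return True if a request should be admitted right now."""
        now = time.monotonic() if now is None else now
        if self.opened_at is None:
            return True  # closed: admit everything
        # Open: shed load until the cooldown passes, then permit a trial.
        return (now - self.opened_at) >= self.cooldown_s

    def record_success(self):
        """A successful call closes the breaker and resets the count."""
        self.failures = 0
        self.opened_at = None

    def record_failure(self, now=None):
        """A failure increments the count; at the threshold, trip open."""
        now = time.monotonic() if now is None else now
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = now
```

Production systems layer this with backpressure (bounded queues) and priority-aware load shedding, but the open/half-open/closed state machine is the common core.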

Skills & Qualifications 

  • 10+ years of experience in software engineering, with substantial individual contributor experience building and operating large-scale distributed systems or cloud infrastructure.  
  • Deep expertise in distributed systems architecture in cloud environments, including networking, compute orchestration, container platforms, and multi-region production services. 
  • Strong track record of making sound architectural decisions for highly available, latency-sensitive systems at scale, demonstrated through systems you built directly. 
  • Experience optimizing latency, throughput, and efficiency in high-QPS systems. Experience with time-to-first-token (TTFT) and tail-latency reduction is a strong plus. 
  • Strong proficiency in backend or systems languages such as Go, C++, or Python, with the expectation that you can contribute production code directly. 
  • Experience designing observability and reliability practices, including metrics, logging, tracing, alerting, incident response, and SLI/SLO/SLA-driven operations. 
  • Ability to influence senior engineers, technical leads, and cross-functional partners through technical credibility, communication, and judgment. 
  • Experience with ML inference infrastructure, model serving systems, or GPU-accelerated workloads is a plus. 

What We Offer

  • Build a breakthrough AI platform beyond the constraints of the GPU.
  • Publish and open-source cutting-edge AI research.
  • Work on one of the fastest AI supercomputers in the world.
  • Enjoy job stability with startup vitality.
  • A simple, non-corporate work culture that respects individual beliefs.

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.


Listing Details

Posted: April 13, 2026
First seen: March 26, 2026
Last seen: April 13, 2026

Cerebras Systems

Cerebras Systems is revolutionizing AI acceleration with its innovative hardware solutions designed to enhance deep learning capabilities.

Employees: 350
Founded: 2016
