Senior/Staff AI Graph Compiler Engineer
Axelera AI is not your regular deep-tech startup. We are creating the next-generation AI platform to support anyone who wants to help advance humanity and improve the world around us.
In just four years, we have raised a total of $370 million and built a world-class team of 220+ employees (including 49+ PhDs with more than 40,000 citations), working remotely from 18 different countries and from offices in Belgium, France, Switzerland, Italy, and the UK, with headquarters at the High Tech Campus in Eindhoven, Netherlands.
We have also launched our Metis™ AI Platform, which achieves a 3-5x increase in efficiency and performance, and have visibility into a strong business pipeline exceeding $100 million.
Our unwavering commitment to innovation has firmly established us as a global industry pioneer.
Are you up for the challenge?
We are looking for a Frontend / Graph-level Compiler Engineer to join our growing compiler team at Axelera AI. In this role, you will play a key part in developing and optimizing our MLIR-based compiler stack, enabling efficient execution of AI workloads on cutting-edge heterogeneous hardware architectures.
You will work closely with AI researchers, compiler engineers, and hardware architects, collaborating with a talented team of engineers across Europe. This is your chance to work on cutting-edge AI acceleration architectures, advance compiler technology, and make a real impact in a fast-moving startup environment.
Responsibilities
- Design, implement, and maintain frontend and graph-level compiler components using MLIR
- Develop and optimize graph-level transformations such as operator fusion, constant folding, operator sinking, graph partitioning, and other performance-critical optimizations
- Extend and maintain MLIR dialects, passes, and infrastructure to support AI workloads
- Integrate and lower AI models from frameworks such as PyTorch, ONNX, and TensorFlow into internal compiler representations
- Collaborate with hardware and backend compiler teams to ensure efficient mapping of AI workloads to heterogeneous architectures
- Support and mentor team members in adopting and effectively using MLIR infrastructure
- Analyze model graphs and implement optimizations to improve performance, memory usage, and execution efficiency
- Contribute to the design and evolution of the overall compiler architecture and tooling
- Debug, profile, and improve compiler performance and correctness
Requirements
- Master's or PhD in Computer Science or a related technical field
- 3-5 years of experience in a Software Engineering role, ideally including 2 years with Deep Learning frameworks and AI systems
- Experience with MLIR, including dialects, passes, and compiler infrastructure
- Solid understanding of compiler design principles and intermediate representations, especially graph-based IRs
- Experience with AI model frameworks such as PyTorch (preferred), ONNX, or TensorFlow
- Proven experience implementing graph-level optimizations such as operator fusion, constant folding, graph partitioning, or similar transformations
- Strong programming skills in C++ and Python
- Experience working in collaborative engineering teams
- Excellent communication skills and willingness to share knowledge and mentor others
Nice to Have
- Experience working with custom AI accelerators or specialized hardware
- Background in computer architecture, especially heterogeneous systems (e.g., CPU + NPU, GPU, or dedicated accelerators)
- Experience with AI compiler stacks such as Torch-MLIR, TVM, XLA, Glow, or similar
- Experience optimizing AI workloads for performance and efficiency
- Familiarity with frontend model ingestion, graph lowering, and compiler pipelines
We offer a flexible working arrangement, with options to:
- Work from one of our Axelera AI offices (Leuven in Belgium, Amsterdam and Eindhoven in the Netherlands, Zurich in Switzerland, Florence and Milan in Italy, or Bristol in the United Kingdom) if you're already based in the vicinity.
- Work fully remotely from any European country (incl. the UK) you are already in.
- Relocate with us and work from Italy (Florence or Milan) or the Netherlands (Amsterdam or Eindhoven).
What We Offer
This is your chance to shape and be part of a dynamic, fast-growing, international organization. We offer an attractive compensation package, including a pension plan, extensive employee insurance, and the option to receive company shares.
An open culture that supports creativity and continual innovation awaits you. Collaborative ownership and freedom with responsibility characterize the way we act and work as a team.
At Axelera AI, we wholeheartedly embrace equal opportunity and hold diversity in the highest regard. Our steadfast commitment is to cultivate a warm and inclusive environment that empowers and celebrates every member of our team. We welcome applicants from all backgrounds to join us in shaping the future of AI.
Posted: March 13, 2026