Member of Technical Staff - Image / Video Generation

Freiburg, Germany · Lead · Member of Technical Staff


Overview

About Black Forest Labs

We’re the team behind Latent Diffusion, Stable Diffusion, and FLUX—foundational technologies that changed how the world creates images and video.


We’re creating the generative models that power how people make images and video—tools used by millions of creators, developers, and businesses worldwide. Our FLUX models are among the most advanced in the world, and we’re just getting started.

Headquartered in Freiburg, Germany with a growing presence in San Francisco, we’re scaling fast while staying true to what makes us different: research excellence, open science, and building technology that expands human creativity.

You'll train large-scale diffusion models for image and video generation, exploring new approaches while maintaining the rigor that helps us distinguish meaningful progress from incremental tweaks. This isn't about following established recipes—it's about running the experiments that clarify which architectural choices matter and which are less impactful.

  • Train large-scale diffusion transformer models on image and video data, working at the scale where intuitions break and empirical evidence matters
  • Rigorously ablate design choices—running experiments that isolate variables, control for confounds, and produce insights you can actually trust—then communicate those results to shape our research direction
  • Reason about the speed-quality tradeoffs of neural network architectures in production settings where both constraints matter simultaneously
  • Fine-tune diffusion models for specialized applications such as image and video upscalers, inpainting/outpainting models, and other tasks where general-purpose models aren't enough
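The training objective behind the responsibilities above can be sketched compactly. Below is a toy illustration of the flow-matching (rectified-flow) velocity-prediction loss used to train modern diffusion models, reduced to NumPy and a linear stand-in for the transformer so it runs anywhere; all names, shapes, and hyperparameters are illustrative, not a description of how FLUX is actually trained.

```python
# Toy flow-matching training loop: a linear model learns to predict the
# velocity (noise - x0) from an interpolated sample x_t and timestep t.
import numpy as np

rng = np.random.default_rng(0)

def training_step(W, x0, lr=0.1):
    """One SGD step of the velocity-prediction (rectified-flow) objective."""
    noise = rng.standard_normal(x0.shape)          # x1 ~ N(0, I)
    t = rng.uniform(size=(x0.shape[0], 1))         # per-sample timestep in [0, 1]
    xt = (1.0 - t) * x0 + t * noise                # linear interpolation path
    target = noise - x0                            # ground-truth velocity
    inp = np.concatenate([xt, t], axis=1)          # condition the model on t
    pred = inp @ W                                 # linear stand-in for the DiT
    grad = inp.T @ (pred - target) / x0.shape[0]   # MSE gradient w.r.t. W
    loss = float(np.mean((pred - target) ** 2))
    return W - lr * grad, loss

d = 4
x0 = rng.standard_normal((256, d))                 # stand-in "data" batch
W = np.zeros((d + 1, d))
losses = []
for _ in range(200):
    W, loss = training_step(W, x0)
    losses.append(loss)
print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Fine-tuning for a specialized task (an upscaler, say) keeps this same objective but conditions the model on extra inputs, such as the low-resolution image.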

You've trained large-scale diffusion models and developed strong intuitions about what matters. You know that at research scale, every design choice has tradeoffs, and the only way to know which ones are worth making is through careful ablation. You're comfortable debugging distributed training issues and presenting research findings to the team.

  • Hands-on experience training large-scale diffusion models for image and video data, with practical knowledge of common failure modes and what matters most in training
  • Experience fine-tuning diffusion models for specialized applications—upscalers, inpainting, outpainting, or other tasks where understanding the domain matters as much as understanding the architecture
  • Deep understanding of how to effectively evaluate image and video generative models—knowing which metrics correlate with quality and which are just convenient proxies
  • Strong proficiency in PyTorch, transformer architectures, and the full ecosystem of modern deep learning
  • Solid understanding of distributed training techniques—FSDP, low precision training, model parallelism—because our models don't fit on one GPU and training decisions impact research outcomes
  • Experience writing forward and backward Triton kernels and verifying their correctness in the presence of floating-point error
  • Proficiency with profiling, debugging, and optimizing single- and multi-GPU operations using tools like Nsight or stack-trace viewers
  • Knowledge of the performance characteristics of different architectural choices at scale
  • Published research that has contributed to how people think about generative models
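The kernel-correctness point above has a standard shape: compare a low-precision implementation against a high-precision reference with a tolerance scaled to the dtype, rather than demanding bit-exact equality. A minimal sketch, with NumPy standing in for a Triton kernel and its PyTorch reference (the function names and the 32-ulp budget are illustrative assumptions):

```python
# Dtype-aware correctness check: float32 "fused" softmax vs. float64 reference.
import numpy as np

def softmax_fused_f32(x):
    """Candidate implementation: float32, max-subtracted for stability."""
    x = x.astype(np.float32)
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def softmax_reference_f64(x):
    """High-precision reference implementation in float64."""
    x = x.astype(np.float64)
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def check_close(candidate, reference, dtype=np.float32, ulp_budget=32):
    """Pass if every element is within a small multiple of machine epsilon."""
    tol = ulp_budget * np.finfo(dtype).eps
    err = np.abs(candidate.astype(np.float64) - reference)
    ok = bool(np.all(err <= tol + tol * np.abs(reference)))
    return ok, float(err.max())

x = np.random.default_rng(0).standard_normal((8, 512)) * 10.0
ok, max_err = check_close(softmax_fused_f32(x), softmax_reference_f64(x))
print(ok, max_err)
```

The same pattern applies to backward kernels: compute gradients in float64 as the reference and budget the tolerance for accumulated rounding error in reductions.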

We’re a distributed team with real offices that people actually use. Depending on your role, you’ll either join us in Freiburg or SF at least 2 days a week (or one full week every other week), or work remotely with a monthly in-person week to stay connected. We’ll cover reasonable travel costs to make this possible. We think in-person time matters, and we’ve structured things to make it accessible to all. We’ll discuss what this will look like for the role during our interview process.

  • Obsessed: We build beautifully crafted, scientifically rigorous products by deeply understanding problems from first principles, and we never ship anything we’re not proud of.
  • Low Ego: We prioritize the best idea over personal ownership; titles hold no authority, credit is shared, and no task is beneath anyone.
  • Bold: We ship bold ideas early, improve fast, and take ambitious bets, without sacrificing quality for speed.
  • Kind: We treat each other with genuine care, speaking directly and kindly even when conversations are hard.

If this sounds like work you’d enjoy, we’d love to hear from you.

Listing Details

Posted
April 13, 2026
First seen
March 26, 2026
Last seen
April 14, 2026
