Staff Data Engineer – Data Modeling
Quick Summary
Our Data team consists of highly skilled senior software and data professionals who collaborate to solve complex data challenges. We process billions of records daily from multiple sources using multi-stage pipelines with intricate data structures and advanced queries.
We are responsible for building data pipelines end to end—from raw data ingestion to the creation of actionable datasets—following the bronze, silver, and gold paradigm. This includes business logic, infrastructure, ETLs, optimization, and ongoing maintenance.
The data we deliver drives insights and decision-making across the organization and enhances our product offerings. We leverage technologies such as AWS, Snowflake, Iceberg, Airflow, Spark, and more.
Responsibilities
- Lead the translation of business and product requirements into scalable data models, transformations, and pipelines.
- Design and own datasets across the bronze, silver, and gold layers, including defining grain, aggregations, and data contracts.
- Develop and maintain SQL-heavy data pipelines and Airflow DAGs (workflow logic, dependencies, backfills, Python, and lots of SQL).
- Own data correctness for key business metrics (e.g., ARR), including deep root-cause analysis and resolution of data issues.
- Define and drive best practices for SQL, data modeling, and pipeline design across the team.
- Optimize queries and data models for performance, scalability, and cost efficiency.
- Collaborate closely with product managers, analysts, and BI developers to refine requirements and ensure high-quality data delivery.
- Develop AI agents to accelerate data analysis by internal and external users.
- Work with complex data inputs (e.g., JSON, schemas, logs) and incorporate them into robust data pipelines.
Requirements
- 7+ years of experience as a Data Engineer, Data Architect, or in a similar data-focused role, with clear ownership of end-to-end data solutions.
- Strong expertise in writing and optimizing complex SQL queries (advanced joins, aggregations, performance tuning).
- Proven experience building and maintaining Airflow DAGs (or similar orchestration tools), focused on workflow logic, code, and SQL rather than infrastructure.
- Deep understanding of data modeling principles, including designing datasets at the correct grain, preventing data inconsistencies, and applying the medallion model.
- Strong ability to understand business needs and translate them into scalable, maintainable data solutions.
- Demonstrated experience debugging data issues and tracing discrepancies in critical business metrics across pipelines.
- Proficiency in Python for orchestration and data workflows.
- Comfort reading and reasoning about existing code, SQL, DAGs, schemas, and input data formats (e.g., JSON).
- Experience with cloud data warehouses such as Snowflake, BigQuery, or Databricks; familiarity with Snowflake's extended SQL and its nuances is a strong advantage.
Listing Details
- Posted: May 7, 2026
