OPLOG is the tech engine behind seamless e‑commerce fulfillment for top brands across Türkiye, Europe, and the US. By fusing in‑house software with robotics and automation, we erase the line between the physical and digital worlds—delivering post‑purchase experiences that turn customers into fans and giving our clients an unfair competitive edge.
About the Role
We are looking for an experienced, talented, and self-starting Senior Data Engineer who will be responsible for designing and implementing scalable and robust data platforms and solutions using cutting-edge technologies.
- Design, maintain and optimize batch and streaming data pipelines using Databricks, Apache Spark, and Fivetran.
- Build and implement data models, products, and platforms with high quality using Databricks ecosystem and dbt for data transformation.
- Develop MLOps pipelines and AI-driven solutions to enhance our fulfillment operations and predictive analytics.
- Work with product management and product teams to build data-driven products; extract, interpret, and present insights using Qlik.
- Contribute to the development of the analytical data warehouse and the related big data ecosystem on the AWS platform.
- Implement real-time data processing and streaming architectures using Databricks Structured Streaming.
- Build and maintain dbt models for data transformation and analytics engineering.
- Provide support to data analysts and data scientists for their data engineering and ML infrastructure requirements.
- Apply end-to-end software development lifecycle practices, with a focus on MLOps best practices.
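The streaming responsibilities above center on windowed aggregations over event streams. As a toy, framework-free sketch of the core idea (in production this would use Databricks Structured Streaming's built-in windowing rather than hand-rolled code, and the event schema here is entirely hypothetical):

```python
from collections import defaultdict
from datetime import datetime, timedelta, timezone


def tumbling_window_counts(events, window_seconds=60):
    """Count events per key within fixed (tumbling) time windows.

    `events` is an iterable of (timestamp, key) pairs with timezone-aware
    datetimes -- a hypothetical, simplified event schema for illustration.
    """
    counts = defaultdict(int)
    for ts, key in events:
        # Snap each timestamp down to the start of its window.
        offset = ts.timestamp() % window_seconds
        window_start = ts - timedelta(seconds=offset)
        counts[(window_start, key)] += 1
    return dict(counts)


# Example: two "order" events land in the same one-minute window.
events = [
    (datetime(2026, 5, 6, 12, 0, 5, tzinfo=timezone.utc), "order"),
    (datetime(2026, 5, 6, 12, 0, 50, tzinfo=timezone.utc), "order"),
    (datetime(2026, 5, 6, 12, 1, 10, tzinfo=timezone.utc), "pick"),
]
result = tumbling_window_counts(events, window_seconds=60)
```

In Structured Streaming the equivalent grouping runs incrementally over unbounded input with watermarking for late data, which is what distinguishes the real engine from this batch toy.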
Requirements
- Bachelor's or Master's degree in a related engineering field such as Computer Engineering, Software Engineering, or Data Science.
- 6+ years of professional experience in data engineering with a proven track record of delivering complex data solutions in a production environment.
- High proficiency in Python for data engineering, AI/ML development, and SQL.
- Experience with dbt (Data Build Tool) for data transformation and analytics engineering.
- Deep experience in AWS cloud services (S3, EMR, Glue, Lambda, EC2).
- Extensive hands-on experience with Databricks platform and Apache Spark for large-scale data processing.
- Experience with Fivetran or similar ELT tools for data integration and pipeline automation.
- Proven experience building MLOps pipelines and deploying machine learning models in production using Databricks MLflow.
- Experience with a range of data processing frameworks and stores, including relational databases (RDBMS), NoSQL, and high-scale databases.
- Proven experience building data pipelines using Databricks, Fivetran, and related modern data stack tools.
- Experience in real-time and streaming architectures using Databricks Structured Streaming and related technologies.
- Strong knowledge of Data Warehouse concepts and modern data lake architectures on AWS.
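The dbt work called out in these requirements usually takes the shape of SQL models layered over raw sources. A minimal sketch of a staging model, with hypothetical source, file-path, and column names (not OPLOG's actual schema):

```sql
-- models/staging/stg_shipments.sql (hypothetical path and names)
-- Standardizes a raw shipments table into a clean staging layer.
with source as (
    select * from {{ source('fulfillment', 'raw_shipments') }}
)

select
    shipment_id,
    order_id,
    warehouse_code,
    cast(shipped_at as timestamp) as shipped_at,
    lower(carrier_name) as carrier_name
from source
where shipment_id is not null
```

dbt compiles the `{{ source(...) }}` reference to a concrete table and materializes the model as a view or table according to the project's configuration; column tests (e.g. `not_null`, `unique`) would live in an accompanying YAML file.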
Nice-to-haves
- Familiarity with Qlik Sense/QlikView for business intelligence and data visualization.
- Advanced experience with dbt for complex data transformations and data modeling.
- Experience with Databricks Delta Lake for data lake management and ACID transactions.
- Experience with Databricks MLflow for machine learning lifecycle management.
- Knowledge of Apache Airflow or Databricks Workflows for orchestration.
- Experience with AWS data services and infrastructure.
- Experience with data governance and data quality frameworks within the Databricks ecosystem.
- Knowledge of containerization technologies (Docker, Kubernetes) for ML deployment.
- Experience with Databricks Unity Catalog for data governance and security.
Benefits
- AI‑assisted coding & LLM licenses – build smarter, faster
- Paid vacation in your first year – no waiting period
- Birthday day off – celebrate you
- Flexible hours & open kitchen – fuel creativity on your schedule
- Private health insurance – from day one
- Shuttle service or monthly gas card – your commute, covered
- Meal card – lunches on us
- Learning budget – for courses, books, and conferences
- Massage Fridays – at our Cyberpark office
- Unlimited fun at the office – anytime
- Mac or PC – your choice of tools
- English courses – grow your communication skills globally
Apply now.
Listing Details
- First seen: May 6, 2026
- Last seen: May 8, 2026

Posting Health
- Days active: 0
- Repost count: 0
- Trust Level: 51%
- Scored at: May 6, 2026
Please let oplog-talent know you found this job on Jobera.