Tractian

Data Engineer - Data Foundry Engineer

São Paulo · Remote · Full-time · Mid-level
Data Engineering · Data Engineer · Data · Data & AI

Overview
The Data Science team at TRACTIAN focuses on extracting valuable insights from vast amounts of industrial data. Using advanced statistical methods, algorithms, and data visualization techniques, this team transforms raw data into actionable intelligence that drives decision-making across engineering, product development, and operational strategies. The team constantly works on optimizing prediction models, identifying trends, and providing data-driven solutions that directly enhance the company’s operational efficiency and the quality of its products.


Responsibilities

We're looking for a Data Engineer with a strong engineering foundation and comfort with AI workflows to join our Data Foundry team. In this role, you'll be the bridge between our model training and data annotation teams, building the pipelines and infrastructure that turn raw, messy data into gold-standard datasets ready for AI consumption.
  • Design and maintain robust data pipelines to ingest from a wide range of sources, including APIs, documents, websites, and raw sensor data
  • Integrate and optimize ETL/ELT processes developed by MLE colleagues, improving performance, reliability, and long-term maintainability
  • Own the full dataset lifecycle, from raw ingestion through cleaning, validation, and delivery as training-ready data
  • Define and enforce data quality standards and governance practices across the Data Foundry team
  • Build and maintain labeling pipeline infrastructure for ML applications, working closely with the annotation team
  • Participate in architectural decisions, code reviews, and technical mentorship within the team
  • Document data sources, pipeline logic, and processing decisions for reproducibility and team alignment

Requirements

  • 3+ years of experience in data engineering
  • Degree in Computer Science, Data Engineering, Computer Engineering, Information Systems, or equivalent technical background
  • Solid understanding of the ML training lifecycle and what properties make a dataset suitable for model training
  • Familiarity with layered data architecture patterns such as Medallion Architecture (Bronze/Silver/Gold) or Data Mesh
  • Proficiency in Python, with focus on data manipulation, pipeline development, and automation
  • Workflow orchestration using code-based tools such as Temporal, Airflow, Prefect, Dagster, or equivalent
  • Distributed data processing with Spark, Databricks, or similar
  • REST and gRPC API integration
  • Strong SQL skills, both for data modeling and query optimization
  • Experience with streaming systems and event-driven pipelines (Kafka, Kinesis, or equivalent)
  • Comfortable jumping into ongoing codebases and optimizing work built by others, without needing to start from scratch
  • Technology-agnostic: you evaluate tools based on what the project needs, adopt new ones quickly, and don't get attached to a specific stack
  • At ease in fast-moving environments where priorities shift and the right answer isn't always obvious
  • Engineering-first mindset: you think in pipelines, own outcomes, and care about the quality of what you ship
  • Driven by curiosity and innovation, not by comfort with a known toolset
  • Experience making architectural decisions and contributing to the technical growth of a team, formally or informally

Nice to Have

  • Go, for high-performance pipeline components
  • dbt for transformation layer modeling
  • Open table formats: Delta Lake, Apache Iceberg, or Hudi
  • Data quality frameworks such as Great Expectations or Soda
  • Cloud experience, preferably OCI (our current migration target). AWS, GCP, or Azure background is also valued
  • Rapid prototyping with Streamlit or similar tools. The use of LLMs and GenAI to speed up internal tooling and experimentation is actively encouraged
  • Experience with data annotation workflows or training dataset pipelines
Location & Eligibility

Where is the job: Worldwide (fully remote, anywhere in the world)
Who can apply: Same as job location
Listed under: Worldwide

Listing Details

Posted: April 2, 2026
First seen: April 3, 2026
Last seen: April 28, 2026

About Tractian

Employees: 350
Founded: 2019