Data Engineer Pleno

Londrina · Mid-level

Quick Summary

Key Responsibilities

Serve as a developer for our data analytics platform, informing technical direction, standards, and long-term strategy for data ingestion, storage, processing, and consumption.

Technical Tools
Python, SQL, Snowflake, dbt, AWS, Kafka/Kinesis, Airflow, Terraform, Docker, and Kubernetes

At Platform Science, we’re working to connect everything that moves.

Founded in 2015, we are an open IoT platform that partners with innovative fleets, application developers, vehicle manufacturers, and equipment providers in the transportation industry to deliver revolutionary solutions to supply chain professionals across the globe.

Our employees are an engaging, diverse group of people who believe in the power of great ideas. We hire people with different experiences and perspectives to build a company culture that fuels growth through innovation.

We value thoughtful actions and empathy for others. We approach challenges with resiliency and creativity, while encouraging transparency because, no matter our backgrounds or responsibilities, we are one team.

Responsibilities

  • Serve as a developer for our data analytics platform, informing technical direction, standards, and long-term strategy for data ingestion, storage, processing, and consumption.
  • Partner with business stakeholders, product owners, engineers, analysts, and data scientists to design and develop highly scalable, resilient, and performant software and data architectures for both batch and streaming data pipelines.
  • Collaborate closely with product and engineering teams to define and enforce governance around data models that enable complex analysis, visualization, and data science across the organization.
  • Leverage our modern data stack (e.g., Snowflake and dbt) to continue building out, modeling, cost-optimizing, and managing our data warehouse.
  • Partner with Data Scientists to design, build, and productionize robust Machine Learning pipelines and feature stores.
  • Establish data quality standards and processes to ensure data is properly filtered, cleaned, and transformed from a variety of sources to enable accurate and trustworthy analysis.
  • Work directly with management and executive teams to prioritize and scope platform initiatives that align with the company's highest-priority business and information needs.
  • Mentor and coach team members on best practices, complex system design, and career growth, serving as an authoritative voice for data engineering excellence.
Requirements

  • 3+ years of progressive experience as a Data Engineer, focusing on building high-scale, production-grade data analytics platforms.
  • Proficiency (3+ years) in software development, including the ability to write expert-level, maintainable, and robust code using Python and SQL.
  • 3+ years designing, managing, and maintaining data warehouses (especially modern columnar databases), including expertise in dimensional data modeling and maintaining complex ETL/ELT pipelines.
  • Experience working with both OLTP/OLAP relational databases and distributed data systems.
  • 3+ years working with AWS services such as EC2, Lambda, S3, RDS, ECR, EKS, IAM, and IoT, specifically in the context of data infrastructure.
  • 3+ years of experience working with streaming data technologies such as Kinesis, Kafka, or Storm, and designing real-time data processing architectures.
  • Advanced SQL skills including performance tuning, window functions, and complex query optimization.
  • Proven ability to design and manage CI/CD pipelines using tools like Jenkins, TravisCI, or GitLab Runners for data platform code deployment.
  • Demonstrated experience using Terraform (or similar IaC tool like CloudFormation) to provision, manage, and scale cloud infrastructure, data pipelines, and platform resources.
  • Hands-on experience with AWS Database Migration Service (DMS) for large-scale database migrations and continuous data replication from various data sources.
  • Demonstrated technical leadership regarding all things data: mining, modeling, transformation, cleansing, validation, security, and governance.
  • Exceptional communication and presentation skills, with the ability to articulate complex technical trade-offs to both technical and non-technical audiences.
  • Experience building and managing a workflow management system such as Airflow, Luigi, or Prefect.
  • Expertise with containerization and orchestration using Docker/Kubernetes/EKS.
  • Proven experience building and optimizing serverless data pipelines (e.g., AWS Lambda, Step Functions).
  • Experience with dbt (data build tool) for data transformation and modeling.
  • Experience with Snowflake or other major columnar/cloud data warehouses.
  • BS/MS in Computer Science, Engineering, or equivalent practical experience.
  • Experience with Feature Store architectures (e.g., Feast, Tecton) and developing standardized feature engineering pipelines.

Location & Eligibility

Where is the job
Londrina
On-site at the office
Who can apply
Same as job location

Listing Details

Posted
April 27, 2026
First seen
April 27, 2026
Last seen
May 3, 2026

Posting Health

Days active
5
Repost count
0
Trust Level
45%
Scored at
May 3, 2026

Signal breakdown

freshness · source trust · content trust · employer trust
Platform Science

We aim to connect everything that moves your fleet. As your partner, we strive to understand your business, build a strong relationship, and provide the service and support you need. Because when you succeed, we succeed.

Employees
350
Founded
2015

3 other jobs at Platform Science


