Data Engineer: Data Pipeline - Vietnam

Vietnam · Ho Chi Minh City · Mid-level
Data Engineering · Data Engineer · Data & AI

Overview

Company Introduction

Our mission is to enable every organization to be climate conscious and positive with the world’s best climate software and data products. Persefoni is creating an all-in-one platform that allows organizations to measure, analyze, and reduce their Enterprise Carbon Footprint. Our goal is to provide our customers with unprecedented visibility and insights into the impact their organization has on the environment. Leveraging the latest breakthroughs in data science and software, our technology will empower teams and leaders to mobilize their organizations to continuously improve their greenhouse gas emissions metrics.

About the Role


The Data Engineer will be a key member of the team building out our web services and backend persistence layer. Successful candidates should have at least three years of recent professional programming experience in positions requiring the skills listed below, with an emphasis on data integrity and data pipelines. The project entails implementing our pre-approved development targets and developing a robust, reusable code framework to deliver a variety of new features across our product lines, following our preferred architecture design and best practices.

Our backend stack is a combination of Golang, Redis, MySQL, and Snowflake, following the Clean Architecture design pattern. APIs are documented and managed using the OpenAPI framework and maintained in ReadMe. Our data-loading and machine learning pipelines are managed via Databricks, while data transformation is handled with DBT against Snowflake within AWS cloud services. All development work is managed via JIRA-tracked sprints and GitHub Git-flow branch management; infrastructure is managed in AWS with monitoring in Datadog.

Responsibilities

  • Collaborate on the design of data models, technical architecture, data flows, schemas, and API contracts.
  • Relay updates on existing work via Jira ticket status, push well-documented pull requests for features, and collaborate by reviewing and commenting on fellow developers' pull requests.
  • Build scalable and robust data flows using Databricks, DBT, and Snowflake.
  • Add to and update our backend persistence model in MySQL by writing SQL-based migration scripts.
  • Write and update data tests using native DBT tests and frameworks such as Great Expectations to ensure adequate test coverage of our code base.

Requirements

  • At least three years of experience building and managing data pipelines in Databricks.
  • At least three years of experience transforming data in Snowflake with tools such as DBT.
  • At least three years of experience specifically building data and ML pipelines.
  • At least three years of experience writing SQL against normalized data structures.
  • Proficiency with code debugging and uncovering performance issues utilizing developer tooling.
  • Proficiency writing data tests.
  • Proficiency in English is highly preferred.
  • Within your first day, you should be able to clone the repository and install the necessary tooling to build and run our projects in cloud-based IDEs such as Databricks and DBT.
  • Within your first week, you will be familiarizing yourself with the code base and coordinating work on development tickets as requested.
  • Within your first month, you should be successfully operating within the development team, submitting PRs, providing feedback on team member PRs, and advancing your assigned projects with your contributions.
  • Within your first year, you should be contributing to the data and software engineering teams, sharing your insights on how to improve our processes and architectures.

This is a full-time position located in Vietnam and governed by Vietnam labour regulations, with the Persefoni entity established in Vietnam.

Interview Process

  • Step 1: Initial screen with talent acquisition
  • Step 2: Technical interview
  • Step 3: Interview with Vietnam Country Director
  • Step 4: Interview with Engineering Leadership team
  • Step 5: Offer extended
  • Step 6: Background check and onboarding

We are committed to valuing and respecting people and organizations of all backgrounds. We proudly bring this to life by fostering a culture of innovation, creativity, diversity of thought, and inclusion.

We strive for each of our team members to be able to show up for work every day as their genuine selves. Similar to the reverence given to Earth’s biodiversity, we recognize the vast potential that exists when all of the facets of diversity within our team are appreciated and illuminated. This policy extends to all aspects of our employment practices.

 

Listing Details

Posted: August 14, 2025
First seen: March 26, 2026
Last seen: April 22, 2026

