AWS Data Engineer (Mid-Level)
Responsibilities
- Design and develop robust data pipelines across streaming and batch environments
- Lead engineering best practices including CI/CD, testing, and automation
- Contribute to architecture discussions and cloud migration strategies
- Collaborate with clients to define requirements and deliver innovative solutions
- Support internal capability development by sharing your expertise and experience
Requirements
- Strong hands-on experience with Python, Java, or Scala
- Proficiency in AWS cloud environments and big data tech (Spark, Hadoop, Airflow)
- Solid understanding of SQL, ETL/ELT approaches, and data modelling techniques
- Experience building CI/CD pipelines with tools like Jenkins or CircleCI
- Knowledge of data security protocols and distributed system design
Nice to Have
- Experience with messaging systems (Kafka, Spark Streaming, Kinesis)
- Familiarity with schema design and semi-structured data formats
- Exposure to containerisation, graph databases, or machine learning concepts
- Proficiency with cloud-native data tools (BigQuery, Redshift, Snowflake)
- Enthusiasm for learning and experimenting with new technologies
What We Offer
Listing Details
- Posted: February 25, 2026
- First seen: March 26, 2026
- Last seen: April 21, 2026
Posting Health
- Days active: 25
- Repost count: 0
- Trust level: 31%
- Scored at: April 21, 2026
About Capco
Capco, a Wipro company, is a global technology and management consultancy specializing in digital transformation for the financial services and energy sectors.