Acceldata

Staff Product Support Engineer - Hadoop Operations

United States · California · Full-Time · Lead

About Us
Acceldata is the market leader in Enterprise Data Observability. Founded in 2018 and backed by top investors including Insight Partners, March Capital, Lightspeed, Sorenson Ventures, Industry Ventures, and Emergent Ventures, we are a Series-C funded company headquartered in Silicon Valley.
 
Our Enterprise Data Observability Platform—the first of its kind—helps enterprises build and operate world-class data products by ensuring data is reliable, trusted, and ready to power today’s most critical technologies, including AI, LLMs, Analytics, and DataOps.
 
Delivered as a SaaS solution, Acceldata is trusted by leading global organizations such as HPE, HSBC, Visa, Freddie Mac, Manulife, Workday, Oracle, PubMatic, PhonePe (Walmart), Hershey’s, Dun & Bradstreet, and many more.

About the Role

Staff Product Support Engineers at Acceldata are Hadoop Operations SMEs responsible for designing, optimizing, migrating, and scaling Hadoop- and Spark-based data processing systems. The role is hands-on, covering Hadoop and other core data operations, with a focus on building resilient, high-performance distributed data systems.
 
You will collaborate with Customer Engineering teams to deliver high-throughput Hadoop, NiFi, and Spark applications, solve complex data challenges in migration, upgrades, and reliability, and optimize post-migration system performance.
 
This role requires flexibility to work in rotational shifts, based on team coverage needs and customer demand. Candidates should be comfortable supporting operations in a 24/7 environment and be willing to adjust their working hours accordingly.
  • Design &amp; Optimization: Design and optimize distributed Hadoop-based applications, ensuring low-latency, high-throughput performance for big data workloads.
  • Troubleshooting: Provide expert-level support for data or performance issues in Hadoop, NiFi, and Spark jobs and clusters.
  • Data Processing Expertise: Work extensively with large-scale data pipelines using Hadoop, NiFi, and Spark's core components.
  • Performance Tuning: Conduct deep-dive performance analysis, debugging, and optimization of NiFi, Impala, and Spark jobs to reduce processing time and resource consumption.
  • Cluster Management: Collaborate with DevOps and infrastructure teams to manage NiFi, Impala, and Spark clusters on platforms like Hadoop/YARN, Kubernetes, or cloud platforms (AWS EMR, GCP Dataproc, etc.).
  • Migration: Provide dedicated support to ensure the stability and reliability of the new ODP Hadoop environment during and after migration. Promptly address evolving technical challenges and optimize system performance following the ODP migration.
  • Strong hands-on experience with distributed data processing frameworks, including Hadoop, Spark, and NiFi, with a deep understanding of their core components and architecture.
  • Proven ability to analyze, troubleshoot, and optimize large-scale data pipelines and jobs (Spark, NiFi, Impala) to improve performance, reduce latency, and increase throughput.
  • Demonstrated experience diagnosing and resolving complex data and performance issues across Hadoop ecosystems, including cluster-level and application-level debugging.
  • Extensive experience building and maintaining scalable data pipelines that process large volumes of data efficiently in distributed environments.
  • Familiarity with managing and operating clusters using technologies such as Hadoop/YARN, Kubernetes, and cloud-based platforms like AWS EMR or GCP Dataproc.
  • Experience working cross-functionally with DevOps and infrastructure teams to support deployment, scaling, and reliability of distributed systems.
  • Experience supporting platform migrations (e.g., Hadoop distributions or ODP environments), ensuring system stability, performance optimization, and issue resolution during and post-migration.
  • Strong analytical mindset with the ability to perform deep-dive investigations into system performance and implement effective, scalable solutions.
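As a flavor of the troubleshooting and performance-tuning work described above, here is a minimal, hypothetical sketch (the function name and the 3x-median threshold are illustrative choices, not Acceldata tooling) of detecting task skew in a Spark stage from per-task durations, a common first step when diagnosing a slow job:

```python
from statistics import median

def find_skewed_tasks(durations_ms, factor=3.0):
    """Return indices of tasks whose duration exceeds `factor` x the stage median.

    A handful of such stragglers usually points at data skew (hot keys)
    rather than cluster-wide resource pressure.
    """
    if not durations_ms:
        return []
    med = median(durations_ms)
    return [i for i, d in enumerate(durations_ms) if d > factor * med]

# Example: task 3 runs roughly 10x longer than its peers.
tasks = [1200, 1100, 1350, 13000, 1250, 1180]
print(find_skewed_tasks(tasks))  # -> [3]
```

In practice the durations would come from the Spark UI or event logs; flagged tasks are then traced back to the partition keys they processed.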
Requirements
  • Experience working with scripting languages (Scala, Python, Bash, PowerShell).
  • Bachelor's degree required, Master’s degree preferred. 
  • Familiarity with virtual machine technologies and multi-node environments (50+ nodes).
  • Proficient with Linux, NFS, and Windows, including application installation, scripting, and working with the command line.
  • Working knowledge of application, server, and network security management concepts.
  • Certification in any of the leading cloud providers (AWS, Azure, GCP) and/or Kubernetes.
  • Knowledge of databases like MySQL and PostgreSQL.
  • Participate in other support-related activities, such as performing POCs and assisting with onboarding deployments of Acceldata and Hadoop distribution products.
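The scripting requirement above typically means small triage utilities. A minimal sketch (the log format and logger names are illustrative log4j-style samples, not real customer data) that counts ERROR/WARN occurrences per logger, useful for a first pass over a Hadoop or NiFi service log:

```python
import re
from collections import Counter

# Matches log4j-style lines: "<date> <time> <LEVEL> <logger>: message"
LOG_LINE = re.compile(r"^\S+ \S+ (ERROR|WARN|INFO)\s+(\S+)")

def summarize_log(lines):
    """Count ERROR and WARN occurrences per logger; INFO lines are ignored."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group(1) in ("ERROR", "WARN"):
            counts[(m.group(1), m.group(2).rstrip(":"))] += 1
    return counts

sample = [
    "2025-11-04 10:00:01 INFO  org.apache.hadoop.hdfs.DataNode: heartbeat",
    "2025-11-04 10:00:02 ERROR org.apache.spark.executor.Executor: task failed",
    "2025-11-04 10:00:03 WARN  org.apache.nifi.processor.Proc: backpressure",
    "2025-11-04 10:00:04 ERROR org.apache.spark.executor.Executor: task failed",
]
print(summarize_log(sample).most_common(1))
# -> [(('ERROR', 'org.apache.spark.executor.Executor'), 2)]
```

The same pattern scales to streaming a multi-gigabyte log file line by line instead of holding it in memory.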
Listing Details

Posted: November 4, 2025

Acceldata · 350 employees · Founded 2018
