Data Engineer

🏢 Delhivery  •  📍 India

Job Description

About Delhivery:
Delhivery is India’s leading fulfillment platform for digital commerce. With a vast logistics network spanning 18,000+ pin codes and over 2,500 cities, Delhivery provides a comprehensive suite of services including express parcel transportation, freight solutions, reverse logistics, cross-border commerce, warehousing, and cutting-edge technology services. Since 2011, we’ve fulfilled over 550 million transactions and empowered 10,000+ businesses, from startups to large enterprises.

Vision:
To become the operating system for commerce in India by combining world-class infrastructure, robust logistics operations, and technology excellence.

About the Role (Data Engineer / Sr. Data Engineer):
We're looking for a Data Engineer who can design, optimize, and own our high-throughput data infrastructure. Are you a passionate data engineer with deep experience in Spark, Scala, and the Kafka and Kafka Connect ecosystem, looking for an exciting opportunity to work on cutting-edge big data projects? Look no further! Delhivery is seeking a talented and motivated streaming data engineering expert to join our dynamic team.

Responsibilities:
- Develop and optimize Spark applications to process large-scale data efficiently.
- Collaborate with cross-functional teams to design and implement data-driven solutions.
- Troubleshoot and resolve performance issues in Spark jobs.
- Stay up to date with the latest trends and advancements in Spark and Scala technologies.

Requirements:
- Proficiency in Redshift, data pipelines, Kafka, and real-time streaming.
- Strong experience with Apache Spark, Spark Streaming, and Spark SQL.
- Solid understanding of distributed systems, databases, system design, and big data processing frameworks.
- Familiarity with Hadoop ecosystem components (HDFS, Hive, HBase) is a plus.
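For context, the day-to-day work implied by these requirements might look something like the following minimal sketch: a Spark Structured Streaming job in Scala that consumes JSON events from Kafka and aggregates them with Spark SQL. The broker address, topic name, and event schema are hypothetical placeholders, not details from this posting, and running it would also require the spark-sql-kafka connector on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object ShipmentEventStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ShipmentEventStream")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical event schema, for illustration only
    val schema = new StructType()
      .add("shipment_id", StringType)
      .add("status", StringType)
      .add("ts", TimestampType)

    // Read a stream of JSON events from Kafka
    // (broker and topic below are placeholders)
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "shipment-events")
      .load()
      .select(from_json($"value".cast("string"), schema).as("e"))
      .select("e.*")

    // Count statuses over 5-minute event-time windows,
    // tolerating up to 10 minutes of late data
    val counts = events
      .withWatermark("ts", "10 minutes")
      .groupBy(window($"ts", "5 minutes"), $"status")
      .count()

    counts.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```

In production such a job would typically write to a sink like Redshift or HDFS rather than the console; the console sink is used here only to keep the sketch self-contained.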