GCP Data Engineer - Spark/Hive|Pune|5+Years

  • HuQuo
  • India, P...
  • 5 - 7 Yrs

Job Description

  • 5+ Years of Experience in Data Engineering and building and maintaining large-scale data pipelines.
  • Experience with designing and implementing a large-scale Data-Lake on Cloud Infrastructure
  • Strong technical expertise in Python and SQL
  • Extremely well-versed in Google Cloud Platform, including BigQuery, Cloud Storage, Cloud Composer, Dataproc, Dataflow, and Pub/Sub.
  • Experience with Big Data tools such as Hadoop and Apache Spark (PySpark)
  • Experience developing DAGs in Apache Airflow 1.10.x or 2.x
  • Good Problem-Solving Skills
  • Detail Oriented
  • Strong analytical skills working with large collections of databases and tables
  • Ability to work with geographically diverse teams.

Job Responsibilities

  • Build Data and ETL pipelines in GCP.
  • Support migration of data to the cloud using Big Data technologies such as Spark, Hive, Talend, and Java
  • Interact with customers on a daily basis to ensure smooth engagement.
  • Responsible for timely and quality deliveries.
  • Fulfill organizational responsibilities: share knowledge and experience with other groups in the organization and conduct technical training sessions.

Location

Pune, Maharashtra, India