5+ years of experience in data engineering, building and maintaining large-scale data pipelines.
Experience designing and implementing a large-scale data lake on cloud infrastructure
Strong technical expertise in Python and SQL
Extremely well-versed in Google Cloud Platform, including BigQuery, Cloud Storage, Cloud Composer, Dataproc, Dataflow, and Pub/Sub.
Experience with big data tools such as Hadoop and Apache Spark (PySpark)
Experience developing DAGs in Apache Airflow 1.10.x or 2.x (a minimal DAG sketch follows this list)
Good problem-solving skills
Detail-oriented
Strong analytical skills for working across a large set of databases and tables
Ability to work with geographically diverse teams.
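For orientation, the sketch below shows the kind of Airflow 2.x DAG referenced in the requirements: a daily job that loads newline-delimited JSON from Cloud Storage into BigQuery via the Google provider's GCSToBigQueryOperator. The bucket, dataset, and table names are hypothetical placeholders, not part of this posting.

# Minimal Airflow 2.x DAG sketch (hypothetical names throughout).
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG(
    dag_id="daily_gcs_to_bigquery",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Load one day's worth of JSON files from a data-lake bucket into BigQuery.
    load_events = GCSToBigQueryOperator(
        task_id="load_events",
        bucket="example-data-lake",  # hypothetical bucket
        source_objects=["events/{{ ds }}/*.json"],
        destination_project_dataset_table="analytics.events",  # hypothetical table
        source_format="NEWLINE_DELIMITED_JSON",
        write_disposition="WRITE_APPEND",
    )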
Job Responsibilities
Build data and ETL pipelines in GCP.
Support migration of data to the cloud using big data technologies such as Spark, Hive, Talend, and Java (see the PySpark sketch after this list).
Interact with customers on a daily basis to ensure smooth engagement.
Take responsibility for timely, high-quality deliveries.
Fulfill organizational responsibilities: share knowledge and experience with other groups in the organization and conduct technical training sessions.
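As a rough illustration of the migration work above, the following is a minimal PySpark sketch that copies a Hive table into BigQuery using the spark-bigquery connector. The table, dataset, and staging-bucket names are hypothetical, and the connector JAR is assumed to be available on the Spark classpath.

# Minimal PySpark sketch of one cloud-migration step (hypothetical names).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive_to_bigquery_migration")
    .enableHiveSupport()  # read tables from the existing Hive metastore
    .getOrCreate()
)

# Read the source table from Hive (hypothetical database.table).
events = spark.table("warehouse.events")

# Write to BigQuery through the spark-bigquery connector, staging via GCS.
(
    events.write
    .format("bigquery")
    .option("table", "analytics.events")  # hypothetical dataset.table
    .option("temporaryGcsBucket", "example-staging")  # hypothetical bucket
    .mode("append")
    .save()
)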
Location
Pune, Maharashtra, India
About Company
HuQuo (derived from Human Quotient) is the next stage in the evolution of HR. In an industry thirsty for innovation, HuQuo is a fresh perspective on people and organizations. At HuQuo, we do not look at human beings as mere resources, but as an ocean of untapped potential. Human Quotient goes beyond IQ and EQ; it is the holistic representation of all human potential, and we strategically manage it. We believe that every organization has a human-like individuality that can nurture the individuals who resonate with it and help them flourish. Our endeavor is to bring such high-resonance individuals and organizations together. We will achieve this through extensive use of analytics and deep immersion to understand the core of the organizations and people we work with.