We are looking for a candidate with 2+ years of experience in a Data Engineer role for our Mumbai location, who holds a Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
Advanced SQL skills and experience with relational databases, including query authoring and working familiarity with a variety of databases
Strong analytical skills related to working with structured and unstructured datasets
Performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
Manipulating, processing and extracting value from large disconnected datasets
Building processes that support data transformation, data structures, metadata, dependency management, and workload management
Big data tools like Hadoop, Spark, Kafka, etc.
Message queuing, stream processing, and highly scalable "big data" data stores
Building and optimizing "big data" pipelines, architectures, and data sets
Relational SQL and NoSQL databases, including Postgres and Cassandra
Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
AWS cloud services: EC2, EMR, RDS, Redshift
Stream-processing systems: Storm, Spark-Streaming, etc.
Object-oriented/object function scripting languages: Python, Scala, etc.
Visualization tools: Tableau, Power BI, Qlik, etc.
We emphasize career development, providing our data engineers with constructive feedback and mentorship throughout their careers. Our goal is to help our people become successful professionals and grow with their teams.
Rapid enhancement of skills with constant exposure to new industries and projects
A diverse, dynamic, and engaged data science environment
Opportunities to influence project direction and make significant contributions
Opportunities to work closely with our industry and functional leads to elevate project delivery
Constructive mentorship and career progression options
Job Responsibilities
Create and maintain optimal data pipeline architecture
Assemble large, complex data sets that meet functional / non-functional business requirements
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS "big data" technologies
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader
Work with data and analytics experts to strive for greater functionality in our data system
Location
Gurgaon, Haryana, India
About Company
HuQuo (derived from Human Quotient) is the next stage in the evolution of HR. In an industry thirsty for innovation, HuQuo is a fresh perspective on people and organizations. At HuQuo, we do not look at human beings as mere resources, but as an ocean of untapped potential. Human Quotient goes beyond IQ and EQ; it is the holistic representation of all human potential, and we manage this Human Quotient strategically. We believe that every organization has a human-like individuality which can help nurture and flourish those individuals who resonate with it. Our endeavor is to bring such high-resonance individuals and organizations together. We achieve this through extensive use of analytics and deep immersion to understand the core of the organizations and people we work with.