Job Description
Roles & Responsibilities
Data Infrastructure & Pipeline Development
Design, build, and maintain data pipelines, lakes, and warehouses.
Develop ETL jobs and stored procedures/functions/packages.
Ensure robust data storage and high-quality data delivery.
Technical Guidance & Integration
Own and implement ETL operating standards.
Provide guidance on infrastructure for analytics and AI use cases.
Collaborate with backend developers and IT architecture teams for seamless integration.
Product & Stakeholder Collaboration
Work with Product Owners to understand and meet data needs.
Translate product requirements into engineering plans and roadmaps.
Gather stakeholder requirements and build architectures aligned with business goals.
Quality & Process Excellence
Promote a culture of process and data quality.
Implement best practices for data management and quality assurance.
Support bug fixing and performance optimization.
Agile Delivery & Team Collaboration
Desired Candidate Profile
3-6 years of experience in Data Engineering roles
Bachelor’s degree in a quantitative field (Mathematics, Engineering, Statistics, Science, Data Science/AI); advanced degrees preferred.
Experience with Big Data platforms such as Databricks, Spark, Hive, Palantir Foundry, etc.
Experience implementing AI solutions such as OCR and Large Language Models.
Experience with data modelling, design patterns, and building highly scalable, secure analytical solutions.
Proficiency in Python, SQL, PL/SQL, Spark, and similar technologies
Strong analytical and problem-solving skills
Experience working in a tech/product environment is a plus
Behavioural Competencies
High-energy, ownership-driven mindset
Excellent communication and collaboration abilities
Comfortable navigating ambiguity and driving clarity
Experience in agile/start-up environments preferred
Successful track record of leading complex projects is a plus