
Senior Data Engineer

Niyo

Data Science
Bengaluru, Karnataka, India
Posted on Wednesday, May 15, 2024

About Us:

Niyo is an Indian fintech company leading innovation in #Travel and #Finance. With over 6 million users already on board, our aim is to make both travel and managing finances simpler, smarter, and safer. Since its inception in 2015, Niyo has been at the forefront of revolutionizing banking through continued digital innovation. Let's just say: we help you save more, be more, and do more. Take your banking experience a notch higher with:

Niyo Global: Our flagship product that makes travelling abroad stress-free and is a must-have for all Indian passport holders. You also get a VISA Signature debit card that gives you complimentary airport lounge access at international and domestic airports in India, a Savings Account with up to 5% interest, and a host of other helpful travel features.

NiyoX: A 2-in-1 account in partnership with Equitas Small Finance Bank that takes care of both your savings and your investments, with zero commission on mutual funds and up to 7% interest p.a. on savings.

Role Purpose:

As a Senior Data Engineer, you will play a crucial role in our data infrastructure, ensuring the availability and reliability of data for the organization. You will work closely with cross-functional teams to design, develop, and maintain data pipelines, optimize data storage and retrieval processes, and contribute to the overall data architecture. Your expertise in Python, Spark, SQL, Airflow, and AWS will be essential in building scalable and efficient data solutions.

Key Accountabilities and Decision Ownership:

- Data Pipeline Development: Develop and maintain robust data pipelines to ingest, transform, and deliver data from various sources to downstream systems, data lakes, and data warehouses.

- Data Transformation: Perform data cleansing, enrichment, and transformation using Python, Spark, and SQL to ensure data quality and consistency in distributed data processing.

- Data Modeling: Design and implement data models and schemas to support analytical and reporting requirements.

- Performance Optimization: Optimize data processing and storage solutions for efficiency and scalability when working with large volumes of data.

- Data Integration: Collaborate with data scientists, analysts, and other teams to integrate data into analytics and machine learning workflows.

- Monitoring and Maintenance: Implement monitoring and alerting systems to ensure data pipeline reliability, and perform routine maintenance tasks.

- Documentation: Maintain documentation for data pipelines, schemas, and processes to ensure knowledge sharing and best practices.

Core Competencies, Knowledge and Experience:

- Bachelor's degree in Computer Science, Information Technology, or a related field (Master's preferred).

- At least 6 years of experience as a Senior Data Engineer or in a similar role.

- At least 2 years of experience as a team lead, team manager, or in a similar role.

- Familiarity with data warehousing concepts and technologies (e.g., Delta Lake).

- Experience building big data architectures leveraging Spark, Delta Lake, Hadoop, or similar technologies.

- Strong programming skills in Python for data manipulation and transformation.

- Proficiency in Apache Spark for distributed data processing, both batch and real-time.

- Advanced SQL skills for data querying and optimization.

- Experience with workflow management tools like Apache Airflow.

- Understanding of data security and privacy principles.

- Excellent problem-solving and analytical abilities.

- Strong communication and collaboration skills to work effectively in a cross-functional team.

- Ability to work in a fast-paced environment and manage multiple projects simultaneously.

- A continuous learning mindset to stay current with the latest industry trends and technologies.

Preferred Skills:

- Proficiency in AWS services such as Glue, EMR, Redshift, S3, Athena, and Lambda.

- Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).

- Experience with data streaming technologies (e.g., Kafka, Kinesis).

- Expertise in data pipeline orchestration and automation.

- Experience with version control systems (e.g., Git).