We are seeking a highly skilled and experienced Azure Data Engineer to join our data team. The ideal candidate will have over five years of professional experience and possess deep expertise in building, managing, and optimizing scalable data pipelines and solutions within the Microsoft Azure ecosystem. This role requires a strong focus on Databricks, Python, and SQL to deliver high-quality, reliable, and performant data products.

Responsibilities:
  • Design and Development: Design, develop, and implement robust and scalable ETL/ELT processes using Azure services and Databricks.
  • Data Platform Expertise: Act as a subject matter expert for Databricks, leveraging its capabilities for large-scale data processing, advanced analytics, and machine learning workloads.
  • Coding and Scripting: Write, optimize, and maintain high-quality code primarily in Python and SQL for data transformation, cleaning, and aggregation.
  • Azure Integration: Utilize a comprehensive suite of Azure services, including Azure Data Lake Storage (Gen2), Azure Synapse Analytics, Azure Data Factory, and Azure Key Vault, to build and manage end-to-end data solutions.
  • Microsoft Fabric: Demonstrate and apply strong working knowledge of Microsoft Fabric to unify data, analytics, and AI workloads, contributing to the modernization of our data platform.
  • Code Quality and Maintenance: Refactor legacy code for improved performance, readability, and maintainability. Write and execute comprehensive unit tests to ensure the reliability and integrity of all data pipelines and code.
  • Optimization: Implement optimization techniques to significantly improve the performance and reduce the cost of existing and new data solutions, especially within Databricks and Synapse.
  • DevOps and Versioning: Apply best practices for version control with Git (hosted on platforms such as GitHub or Azure DevOps) within a structured CI/CD environment.
  • Collaboration: Work closely with data scientists, analysts, and business stakeholders to understand data requirements and translate them into technical specifications.

Required Qualifications:
  • Experience: 5+ years of hands-on experience as a Data Engineer, primarily focused on the Microsoft Azure data stack.
  • Expert Proficiency: Expert-level proficiency in Databricks (Spark SQL/PySpark), Python, and SQL.
  • Azure Services: Strong, practical knowledge of core Azure data services, including Azure Data Lake Storage (Gen2) and Azure Synapse Analytics (formerly Azure SQL Data Warehouse).
  • ETL/ELT: Deep understanding and experience with modern ETL/ELT principles and tools (e.g., Azure Data Factory).
  • Microsoft Fabric: Solid understanding of the capabilities and architecture of Microsoft Fabric.
  • Software Engineering Practices: Proven experience with code versioning (Git), unit testing frameworks, and principles of writing production-ready, clean, and well-documented code.
  • Optimization: Demonstrated ability to identify and implement performance and cost optimization techniques across data storage and processing layers.
  • Problem-Solving: Excellent analytical and problem-solving skills with a track record of successfully refactoring complex or legacy data infrastructure.

Bonus Qualifications (Nice-to-Haves):
  • Certifications such as Azure Data Engineer Associate (DP-203).
  • Experience with streaming data technologies (e.g., Kafka, Azure Event Hubs).
  • Knowledge of Data Governance and Security best practices in Azure.

Join our upcoming Mega Tech Walk-in Drive in Hyderabad (13 December 2025) and Bengaluru (20 December 2025).
Secure your spot by registering via the link below.