Responsibilities:
- Design, build, and maintain robust data pipelines and ETL processes in Azure, leveraging services such as Azure Databricks, Azure Data Factory, and Azure Synapse Analytics
- Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions
- Optimize data workflows and infrastructure for performance, scalability, and reliability
- Implement data governance and security best practices to ensure data integrity and compliance
- Develop and maintain documentation for data pipelines, systems, and processes
- Stay current with the latest Azure technologies and data engineering best practices, and share knowledge with the team
Requirements:
- Bachelor's degree or higher in Computer Science, Engineering, or a related field
- Minimum of 5 years of experience in data engineering, with a strong focus on Azure cloud technologies
- Expertise in Azure services such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, and Azure SQL Database
- Proficiency in programming languages such as Python, SQL, or Scala
- Solid understanding of data management concepts, including data modeling, ETL, and data warehousing
- Strong problem-solving skills and attention to detail
- Excellent communication and collaboration skills
- Ability to thrive in a fast-paced, dynamic environment and manage multiple priorities effectively
Preferred Qualifications:
- Experience with other cloud platforms such as AWS or GCP
- Knowledge of big data technologies such as Apache Spark and Hadoop
- Certification in Azure or related technologies
- Experience working in Agile/Scrum environments