SansarTec

Sr. Data Engineer

Job description

Job Overview:

We're seeking an experienced Senior Data Engineer specializing in Azure and Databricks to drive data engineering initiatives, create scalable data pipelines, and support our Enterprise Data Lake and Data Warehouse (Synapse). The ideal candidate will be well-versed in Azure Data Factory and Databricks (DLT), with hands-on experience in PySpark, SQL, and Azure DevOps. This role requires deep technical knowledge of data pipeline orchestration, data transformation, and performance optimization within Azure cloud services.

You must reside in the US; no third parties.

Responsibilities:

  • Data Pipeline Development: Build new data pipelines and enhance existing ones using Databricks (Data Engineering, Delta Lake), Azure Data Factory, and Synapse to load data into the Enterprise Data Lake, Delta Lake, and Enterprise Data Warehouse.
  • Data Transformation and Loading (ETL/ELT): Implement complex data transformations to parse, cleanse, and load data according to business requirements and best practices.
  • Orchestration and Automation: Utilize Azure Data Factory to orchestrate data flows and automate data ingestion processes.
  • Testing and Quality Assurance: Perform unit testing, coordinate integration testing, and manage User Acceptance Testing (UAT) to ensure high-quality data.
  • Documentation and Runbooks: Develop and maintain high-level design (HLD), detailed design (DD), and runbooks to support production operations and improve data pipeline transparency.
  • Compute Configuration and Maintenance: Configure and optimize compute resources, apply Data Quality (DQ) rules, and manage regular maintenance schedules.
  • Performance Tuning: Identify and implement performance improvements for data pipelines, ensuring efficiency and scalability.
  • Production Support: Provide ongoing support for production environments, troubleshoot issues, and ensure optimal system performance.


Technical Skills:

  • Core Technologies: Databricks (Data Engineering, DLT), Azure Data Factory, Synapse (Dedicated SQL Pool), SQL, PySpark, Python
  • Azure Services: Azure DevOps, Azure Function Apps, Azure Logic Apps
  • Nice to Have: Experience with Precisely for data integration and governance


Qualifications:

  • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
  • 5+ years of experience in data engineering, with a focus on Azure-based data solutions.
  • Proven track record in building and orchestrating data pipelines in a cloud environment, with expertise in Databricks and Azure Data Factory.
  • Strong skills in SQL, Python, and PySpark for data transformations.
  • Excellent problem-solving skills with a focus on performance optimization and production support.
