Senior Data Engineer

KMS Technology

Guadalajara, Jalisco, Mexico

4 hours ago

No applications yet

About

Company Description

At KMS Technology, we are dedicated to delivering cutting-edge solutions and services that empower businesses to achieve their goals. Our team is composed of highly skilled professionals who are passionate about technology and innovation. We provide a dynamic and collaborative work environment where you can grow your career and make a significant impact.

Job Description

We are seeking a highly experienced Senior Data Engineer specializing in the Microsoft Fabric platform. This role is responsible for leading the setup, design, and development of scalable, high-performance data transformation solutions across a unified Lakehouse architecture. The ideal candidate will excel in building dynamic ELT pipelines, optimizing workspace performance, and enforcing governance standards using Fabric Data Pipelines, Synapse SQL, and OneLake.

You will play a key role in shaping foundational architecture, implementing best practices, and ensuring reliable, high-quality data operations across Fabric environments.

Responsibilities

  • Design and develop dynamic, parameterized, and reusable data pipelines within Microsoft Fabric for efficient data movement and control flow.
  • Implement advanced data transformations by developing complex ELT logic using T-SQL within Synapse SQL endpoints and stored procedures.
  • Lead the technical setup and foundational configuration of Fabric environments, including resource provisioning, security setup, and governance best practices.
  • Define and enforce metadata management standards, including lineage, tagging, and discoverability requirements across Fabric.
  • Architect OneLake ingestion and organization patterns, ensuring data is correctly structured (e.g., Delta format) for downstream Fabric workloads.
  • Ensure workspace compatibility and performance optimization across both Dedicated Premium and Shared Fabric capacity environments.
  • Translate complex business data models into scalable, resilient architectural solutions using Fabric components such as Lakehouse, Warehouses, and Notebooks.
  • Optimize T-SQL logic and pipeline execution to reduce latency, improve performance, and manage compute resource costs.
  • Implement robust monitoring, alerting, and error-handling frameworks to ensure operational reliability and data integrity for production pipelines.
  • (Nice-to-have) Integrate PySpark/Spark notebooks into Fabric pipelines for advanced data processing or machine learning workloads.

Qualifications

  • 5+ years of professional software development experience, with 3+ years focused on large-scale data engineering.
  • Expert-level proficiency in Microsoft Fabric, including Fabric Pipelines (control flow, activities, parameters, variables), OneLake, and Synapse SQL.
  • Advanced experience with SQL/T-SQL, specifically for developing complex transformation logic within the Synapse SQL environment.
  • Proven experience designing and implementing high-throughput ELT and batch ingestion pipelines.
  • Strong understanding of data warehousing methodologies (Kimball / Inmon) applied within Lakehouse architectures.
  • Familiarity with Azure services such as Azure Data Lake Storage Gen2 and Azure DevOps.
  • Relevant Azure certifications such as Azure Data Engineer Associate (DP-203).
  • Experience with data modeling, including medallion architecture patterns within OneLake.
  • Experience developing Python/PySpark notebooks within Fabric for advanced data processing.

Additional Information

  • Location: Guadalajara, Jalisco, Mexico (working from home; the office won't be mandatory all the time, but attendance will be required from time to time).