Job Details
We're looking for an experienced Senior Data Engineer to join our growing Data Insights & Analytics team. You'll play a critical role in designing, building, and scaling the data infrastructure that powers our core products and client-facing insights.
In this role, you'll architect data solutions using modern Azure technologies, including Microsoft Fabric, Synapse, Azure SQL, and Data Factory. You'll develop robust pipelines to process, transform, and model complex insurance data into structured, reliable datasets that fuel analytics, dashboards, and data science products.
Beyond your technical responsibilities, your daily routine will include participating in stand-up meetings, managing work items based on your capacity, collaborating with our growing technology team to define new projects and initiatives, and engaging in development work. You will also interact directly with the teams that develop the tools we use, giving you the opportunity to provide product feedback and see your input drive changes in those products over time.
Key Responsibilities
- Design and build robust, scalable ETL/ELT pipelines using Azure Data Factory, Synapse, and Microsoft Fabric
- Model and transform raw insurance data into structured datasets for reporting and analytics use cases
- Collaborate with analysts, engineers, and business stakeholders to align data solutions with company goals
- Implement orchestration logic, metadata-driven processing, and Spark-based transformations (PySpark, Spark SQL), as illustrated in the sketch after this list
- Optimize performance of data workflows and pipelines to support real-time and batch processing scenarios
- Drive best practices in data governance, documentation, code quality, and DevOps automation
- Monitor production workloads, troubleshoot pipeline failures, and support live environments
- Evaluate new Azure data services and tools for potential adoption
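To give a flavour of the metadata-driven, Spark-based processing mentioned above, here is a minimal PySpark sketch of that pattern as it might appear in a Fabric-style notebook. The table names, columns, and metadata structure are hypothetical and used only for illustration, not taken from our actual platform.

```python
# Minimal, illustrative sketch only: table names, columns, and the metadata
# structure below are hypothetical, not part of the actual platform.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bordereaux_etl_sketch").getOrCreate()

# Hypothetical metadata describing which raw tables to process and how.
pipeline_metadata = [
    {"source": "raw_premium_bdx", "target": "curated_premium", "date_col": "inception_date"},
    {"source": "raw_claims_bdx", "target": "curated_claims", "date_col": "loss_date"},
]

for item in pipeline_metadata:
    # Read the raw landing table (assumed to already exist in the lakehouse).
    raw_df = spark.table(item["source"])

    # Standardise column names and stamp each row with ingestion metadata.
    cleaned = (
        raw_df
        .select([raw_df[c].alias(c.strip().lower().replace(" ", "_")) for c in raw_df.columns])
        .withColumn("ingested_at", F.current_timestamp())
        .withColumn("report_year", F.year(F.col(item["date_col"])))
    )

    # Write to a curated table, partitioned for downstream analytics.
    (
        cleaned.write
        .mode("overwrite")
        .partitionBy("report_year")
        .saveAsTable(item["target"])
    )
```

In practice, the metadata would typically live in a control table or configuration file read by an Azure Data Factory or Fabric pipeline rather than being hard-coded in the notebook.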
Key Skills & Expertise
- Data Engineering: Advanced ETL/ELT experience with large, complex data sets
- Azure Stack: Strong knowledge of Azure Data Factory, Synapse Analytics, Azure SQL, and Microsoft Fabric
- Spark Ecosystem: Experience with Spark development using PySpark and Spark SQL (especially in Fabric notebooks)
- Data Modeling: Expertise in dimensional modeling, normalization, and schema design (see the star-schema sketch after this list)
- Coding Proficiency: Fluent in Python, SQL, Spark SQL, and scripting for orchestration and automation
- Performance Tuning: Familiarity with optimizing query performance, parallel processing, and cloud-based workloads
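As a concrete illustration of the dimensional-modeling skills listed above, the sketch below defines a small star schema for premium reporting using Spark SQL from PySpark, assuming a Delta-backed lakehouse such as Microsoft Fabric. All table and column names are invented for the example.

```python
# Illustrative dimensional-modeling sketch; the schema and names below are
# hypothetical and assume a Delta-backed lakehouse (e.g. Microsoft Fabric).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dimensional_model_sketch").getOrCreate()

# A simple star schema: one policy dimension and one premium fact table.
spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_policy (
        policy_key       BIGINT,
        policy_number    STRING,
        line_of_business STRING,
        inception_date   DATE,
        expiry_date      DATE
    ) USING DELTA
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_premium (
        policy_key    BIGINT,          -- foreign key to dim_policy
        report_month  DATE,
        gross_premium DECIMAL(18, 2),
        net_premium   DECIMAL(18, 2)
    ) USING DELTA
""")

# A typical analytical query against the model: premium by line of business.
premium_by_lob = spark.sql("""
    SELECT d.line_of_business,
           SUM(f.gross_premium) AS total_gross_premium
    FROM fact_premium f
    JOIN dim_policy d ON f.policy_key = d.policy_key
    GROUP BY d.line_of_business
""")
premium_by_lob.show()
```

A production model would be driven by the actual bordereaux and reporting requirements, with surrogate keys, slowly changing dimensions, and conformed dimensions added where they earn their keep.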
Qualifications
- Bachelor's degree in Computer Science, Data Engineering, or a related field (Master's a plus)
- 5+ years of experience in data engineering or analytics engineering roles
- 3+ years working with Azure data services and cloud-native platforms
- Experience with Microsoft Fabric is highly desirable
- Proven ability to transform business requirements into scalable, maintainable data workflows
- Experience working with Lloyd's of London bordereaux data is a strong plus, particularly ingesting, validating, and transforming complex insurance data sets