Data Engineer (SQL, AWS, Redshift, and Python)

Overview

On Site
Depends on Experience
Contract - W2

Skills

Amazon EC2
Amazon Redshift
Amazon S3
Amazon Web Services
Analytics
Attention To Detail
Business Operations
Collaboration
Computer Science
Conflict Resolution
Continuous Delivery
Continuous Integration
Data Architecture
Data Governance
Data Manipulation
Data Modeling
Data Processing
Data Quality
Data Storage
Docker
ELT
Extract
Transform
Load
Git
Management
Orchestration
Problem Solving
Python
Regulatory Compliance
Reporting
SQL
Scripting
Step Functions
Version Control
Warehouse
Workflow

Job Details

Job Title: Data Engineer

Location: McKinney, TX

Job Summary:

We are looking for a skilled and motivated Data Engineer with expertise in SQL, AWS, Redshift, and Python to join our growing data team. The ideal candidate will be responsible for building and optimizing data pipelines, supporting data infrastructure, and ensuring the availability and integrity of data for analytics and business operations.

Key Responsibilities:

  • Design, build, and maintain scalable data pipelines and ETL processes.
  • Develop and optimize data models and warehouse structures in Amazon Redshift.
  • Write advanced SQL queries for data transformation, analysis, and reporting.
  • Leverage Python for scripting, automation, and custom data workflows.
  • Collaborate with data analysts, data scientists, and business stakeholders to deliver reliable data solutions.
  • Manage and monitor data infrastructure using AWS services (S3, Lambda, Glue, Step Functions, EC2).
  • Ensure data quality, integrity, and governance across all pipelines and storage systems.
  • Troubleshoot and resolve data-related issues and performance bottlenecks.

Required Skills & Qualifications:

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 8+ years of experience as a Data Engineer or in a similar role.
  • Strong proficiency in SQL for data manipulation and analysis.
  • Hands-on experience with AWS services, especially Redshift, S3, Lambda, Glue, and CloudWatch.
  • Solid experience in Python for scripting, data processing, and automation.
  • Understanding of data architecture, data modeling, and best practices in ETL/ELT.
  • Strong problem-solving skills and attention to detail.

Preferred Qualifications:

  • Experience with orchestration tools like Airflow or AWS Step Functions.
  • Familiarity with CI/CD pipelines and version control (Git).
  • Exposure to data governance, security, and compliance standards.
  • Experience with containerization tools like Docker.

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.