Data Engineer with AWS, Python, Redshift

Overview

Hybrid
Depends on Experience
Contract - Independent
Contract - W2
Contract - 12 Month(s)

Skills

AWS
Python
ETL
Redshift

Job Details

Position Overview:

As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data pipelines and infrastructure. You will work with a variety of data sources and tools to ensure data is efficiently ingested, processed, and stored, supporting our data-driven decision-making processes.

Key Responsibilities:

  • Data Pipeline Development: Design, build, and maintain robust and scalable data pipelines using Python and AWS services.
  • Data Storage Solutions: Implement and manage data warehousing solutions, particularly Amazon Redshift, to support business intelligence and analytics needs.
  • ETL Processes: Develop and optimize ETL (Extract, Transform, Load) processes to ensure high-quality and timely data delivery (a minimal pipeline sketch follows this list).
  • Data Integration: Integrate data from various sources, including databases, APIs, and third-party services, into a unified data platform.
  • Performance Optimization: Monitor and tune the performance of data pipelines and Redshift clusters to ensure optimal efficiency and reliability.
  • Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet their needs.
  • Documentation: Create and maintain comprehensive documentation for data pipelines, data models, and system configurations.
  • Troubleshooting: Identify and resolve data-related issues, including data inconsistencies, performance bottlenecks, and system failures.
  • Security & Compliance: Ensure data handling and storage practices comply with industry standards and regulatory requirements.
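
For illustration only, here is a minimal sketch of the kind of pipeline described above: stage a file in S3 with boto3, then load it into Redshift with a COPY command issued through psycopg2. The bucket, table, cluster endpoint, and IAM role below are hypothetical placeholders, not details of this position.

  import os
  import boto3
  import psycopg2

  # Hypothetical names; substitute your own bucket, table, and IAM role.
  BUCKET = "example-data-lake"
  KEY = "staging/orders.csv"
  IAM_ROLE = "arn:aws:iam::123456789012:role/example-redshift-copy"

  # Extract/stage: upload a local extract to S3.
  boto3.client("s3").upload_file("orders.csv", BUCKET, KEY)

  # Load: Redshift's COPY ingests directly from S3 in parallel.
  conn = psycopg2.connect(
      host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
      port=5439, dbname="analytics", user="etl_user",
      password=os.environ["REDSHIFT_PASSWORD"],
  )
  with conn, conn.cursor() as cur:  # connection context commits on success
      cur.execute(f"""
          COPY analytics.orders
          FROM 's3://{BUCKET}/{KEY}'
          IAM_ROLE '{IAM_ROLE}'
          FORMAT AS CSV
          IGNOREHEADER 1;
      """)
  conn.close()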

Qualifications:

  • Education: Bachelor's degree in Computer Science, Engineering, Data Science, or a related field. Advanced degrees or relevant certifications are a plus.
  • Experience:
    • Proven experience as a Data Engineer or in a similar role.
    • Extensive experience with Python for data manipulation and pipeline development.
    • Hands-on experience with AWS services, including but not limited to S3, Glue, Lambda, and Redshift.
    • Familiarity with SQL and experience with query optimization in Amazon Redshift (a brief tuning sketch follows the qualifications list).
  • Skills:
    • Strong knowledge of data warehousing concepts and architecture.
    • Proficiency in data modeling and ETL best practices.
    • Experience with version control systems like Git.
    • Ability to work in a fast-paced environment with minimal supervision.
    • Strong analytical and problem-solving skills.
    • Excellent communication and teamwork abilities.
  • Desirable:
    • Experience with additional AWS services such as DynamoDB, EMR, or Kinesis.
    • Knowledge of data visualization tools (e.g., Tableau, Looker).
    • Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
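
As a rough illustration of the Redshift tuning skills listed above: distribution and sort keys are the usual first levers, and EXPLAIN shows whether a join can run without redistributing rows across nodes. The connection details, table, and columns here are hypothetical.

  import os
  import psycopg2

  conn = psycopg2.connect(
      host=os.environ["REDSHIFT_HOST"], port=5439,
      dbname="analytics", user="etl_user",
      password=os.environ["REDSHIFT_PASSWORD"],
  )
  with conn, conn.cursor() as cur:
      # Co-locate fact rows on the join key, and sort by the common
      # filter column so range scans can skip blocks.
      cur.execute("""
          CREATE TABLE IF NOT EXISTS analytics.orders (
              order_id     BIGINT,
              customer_id  BIGINT,
              order_date   DATE,
              total        DECIMAL(12, 2)
          )
          DISTKEY (customer_id)
          SORTKEY (order_date);
      """)
      # EXPLAIN reveals broadcast/redistribution steps in the plan.
      cur.execute("""
          EXPLAIN
          SELECT c.region, SUM(o.total)
          FROM analytics.orders o
          JOIN analytics.customers c USING (customer_id)
          WHERE o.order_date >= '2024-01-01'
          GROUP BY c.region;
      """)
      for (line,) in cur.fetchall():
          print(line)
  conn.close()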
