Job Details
Job Title: AWS Data Engineer with Python and Spark
Location: Columbus, OH (Onsite)
Mode: Full-Time / Long-Term Contract
Duties and responsibilities:
• Collaborate with the team to build out features for the data platform and consolidate data assets
• Build, maintain, and optimize data pipelines built using Spark
• Advise, consult, and coach other data professionals on standards and practices
• Work with the team to define company data assets
• Migrate CMS' data platform into Chase's environment
• Partner with business analysts and solutions architects to develop technical architectures for strategic enterprise projects and initiatives
• Build libraries to standardize how we process data
• Teach and learn continuously, recognizing that continuous learning is the cornerstone of every successful engineer
• Maintain a solid understanding of AWS tools such as EMR and Glue, including their pros and cons, and be able to convey that knowledge clearly
• Implement automation on applicable processes
Mandatory Skills:
• X+ years of experience in a data engineering position
• Proficiency in Python (or similar) and SQL
• Strong experience building data pipelines with Spark
• Strong verbal and written communication
• Strong analytical and problem-solving skills
• Experience with relational datastores, NoSQL datastores, and cloud object stores
• Experience building data processing infrastructure in AWS
• Bonus: Experience with infrastructure-as-code solutions, preferably Terraform
• Bonus: Cloud certification
• Bonus: Production experience with ACID-compliant formats such as Hudi, Iceberg, or Delta Lake
• Bonus: Familiarity with data observability solutions and data governance frameworks
Requirements
Bachelor's Degree in Computer Science/Programming or similar is preferred
Right to Work
Must have legal right to work in the USA