AWS DevOps/MLOps Engineer - Onsite - Plano, TX

Overview

On Site
Depends on Experience
Contract - W2
Contract - Independent
Contract - 2 Year(s)

Skills

Databricks
DevOps
Data security
Node.js
Amazon S3
Amazon Route 53
Amazon Redshift
API
Machine Learning Operations (ML Ops)
Amazon Web Services
AWS Lambda
Amazon RDS
Amazon EFS
Amazon EC2
Kubernetes
Python

Job Details

 

We are seeking an AWS DevOps/MLOps Lead to develop platforms for big data and data science on AWS. As models, applications, and data pipelines are created and operationalized, the big data and data science team requires engineers with an understanding of cloud-native technology to develop, manage, automate, and facilitate the team’s operational capabilities.

Required Skills:

 

- Experience in AWS system and network architecture design, with a specific focus on AWS SageMaker and AWS ECS
- Experience developing and maintaining ML systems built with open-source tools
- Experience developing with containers and Kubernetes in cloud computing environments
- Experience with one or more data-oriented workflow orchestration frameworks (Kubeflow, Airflow, Argo)
- Design the data pipelines and engineering infrastructure to support our clients’ enterprise machine learning systems at scale
- Develop and deploy scalable tools and services for our clients to handle machine learning training and inference
- Support model development, with an emphasis on auditability, versioning, and data security
- Experience with data security and privacy solutions such as Denodo, Protegrity, and synthetic data generation
- Ability to develop applications in Python and deploy them to AWS Lambda behind API Gateway (see the Lambda sketch after this list)
- Ability to develop Jenkins pipelines using Groovy scripting
- Good understanding of testing frameworks such as pytest (see the test example after this list)
- Ability to work with AWS services such as S3, DynamoDB, Glue, Redshift, and RDS
- Proficient understanding of Git and version control systems
- Familiarity with continuous integration and continuous deployment
- Develop Terraform modules to deploy standard infrastructure
- Ability to develop deployment pipelines using Jenkins and XL Release
- Experience with Python (boto3) for automating cloud operations (see the boto3 sketch after this list)
- Experience documenting technical solutions and producing solution diagrams
- Good understanding of simple Python applications that can be deployed as Docker containers
- Experience creating workflows using AWS Step Functions (see the Step Functions sketch after this list)
- Create Docker images using custom Python libraries
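
A minimal sketch of the Lambda/API Gateway item above, assuming the standard Lambda proxy integration event format; the function and the "name" query parameter are hypothetical examples, not part of this posting:

    import json

    def lambda_handler(event, context):
        # Minimal handler for an API Gateway (proxy integration) request.
        # The "name" query parameter is a hypothetical example input.
        params = event.get("queryStringParameters") or {}
        name = params.get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"Hello, {name}"}),
        }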
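
A minimal pytest example for the testing-framework item; the add() function is a made-up stand-in for real application code:

    # test_example.py -- run with: pytest test_example.py
    import pytest

    def add(a, b):
        # Trivial function under test; stands in for real application code.
        return a + b

    @pytest.mark.parametrize("a, b, expected", [(1, 2, 3), (-1, 1, 0)])
    def test_add(a, b, expected):
        assert add(a, b) == expected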
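
A minimal boto3 sketch of cloud-operations automation, here stopping EC2 instances by tag; the tag key and value are assumptions for illustration:

    import boto3

    def stop_tagged_instances(tag_key="Environment", tag_value="dev"):
        # Find running EC2 instances carrying the given tag and stop them.
        # The tag key/value defaults are hypothetical.
        ec2 = boto3.client("ec2")
        reservations = ec2.describe_instances(
            Filters=[
                {"Name": f"tag:{tag_key}", "Values": [tag_value]},
                {"Name": "instance-state-name", "Values": ["running"]},
            ]
        )["Reservations"]
        ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
        if ids:
            ec2.stop_instances(InstanceIds=ids)
        return ids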
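
A minimal Step Functions sketch; the state machine ARN and input payload are placeholders, and the state machine is assumed to already exist:

    import json
    import boto3

    def start_workflow(state_machine_arn, payload):
        # Start an execution of an existing Step Functions state machine
        # and return its execution ARN. Error handling is omitted.
        sfn = boto3.client("stepfunctions")
        resp = sfn.start_execution(
            stateMachineArn=state_machine_arn,
            input=json.dumps(payload),
        )
        return resp["executionArn"]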

Required Skills Summary:

- AWS (experience mandatory): S3, KMS, IAM (roles and policies), EC2, ECS, Batch, ECR, Lambda, DataSync, EFS, CloudTrail, Cost Explorer, ACM, Route 53, SNS, SQS, ELB, CloudWatch, VPC, Service Catalog
- Automation (experience mandatory): Terraform, Python (boto3), serverless, Jenkins (Groovy), Node.js
- Big data (knowledge): Redshift, DynamoDB, Databricks, Glue, and Athena
- Data science (experience): SageMaker, Athena, Glue, DynamoDB, Databricks, MWAA (Airflow)
- DevOps (experience mandatory): Python, Terraform, Jenkins, GitHub, Makefiles, and shell scripting
- Data virtualization (knowledge): Denodo
- Data security (knowledge): Protegrity

 

Qualifications:

- Bachelor’s degree from a reputable institution/university.
- 14+ years of experience building end-to-end systems as a Platform Engineer, ML DevOps Engineer, or Data Engineer.
- 4+ years of experience in Python, Groovy, and Java programming.
- Experience working in a Scrum environment.
