Machine Learning Engineer

Overview

Remote
Hybrid
Depends on Experience
Contract - W2
Contract - Independent
Contract - 9 Month(s)
No Travel Required

Skills

API
Algorithms
Amazon Web Services
Analytics
Apache Avro
Apache HBase
Apache Hadoop
Apache Hive
Apache Parquet
Apache Spark
Artificial Intelligence
Bash
Big Data
Calculus
Cloud Computing
Cloudera
Clustering
Computer Vision
Data Processing
Data Science
Data Structure
Database
Decision Trees
Deep Learning
Django
Docker
Eclipse
File Formats
Flask
Git
IT Architecture
IT Management
Issue Resolution
Java
Keras
Kubernetes
Linear Algebra
Linux
Machine Learning (ML)
Management
Natural Language Processing
NoSQL
NumPy
Pandas
Predictive Modelling
Probability
Production Support
PyTorch
Python
RDBMS
RESTful
Regression Analysis
SQL
Scala
Scripting
Software Design
Solaris Volume Manager
Spring Framework
Statistics
Support Vector Machine
TensorFlow
UDF
Unix
Web Services
Writing
scikit-learn

Job Details

Must Haves:

  1. Strong project experience in Machine Learning, Big Data, NLP, Deep Learning, and RDBMS is a must.
  2. Strong project experience with Amazon Web Services and Cloudera Data Platform is a must.
  3. 4-5 years of experience building data pipelines using Python, MLlib, PyTorch, TensorFlow, NumPy/SciPy/Pandas, Spark, and Hive.
  4. 4-5 years of programming experience with AWS, Linux, and data science notebooks is a must.
  5. Strong experience with REST API development using Python frameworks (Django, Flask, etc.).
  6. Microservices/web service development experience using the Spring Framework is highly desirable.

Deliverables or Tasks:

The tasks for the AI/ML Engineer include, but are not limited to, the following:

  • Provide technical leadership, develop the technical vision, gather requirements, and translate client requirements into technical architecture.
  • Design, build, and scale Machine Learning systems across multiple domains.
  • Design and implement NLP algorithms.
  • Design and implement an integrated Big Data platform and analytics solution.
  • Design and implement data collectors to collect and transport data to the Big Data platform.

Technical Knowledge and Skills:

Consultant resources shall possess most of the following technical knowledge and experience:

  • 4-5 years of strong programming experience in Python, Java, Scala, and SQL.
  • Proficient in Machine Learning algorithms: Supervised Learning (Regression, Classification, SVM, Decision Trees, etc.), Unsupervised Learning (Clustering), and Reinforcement Learning.
  • Strong hands-on experience building, deploying, and productionizing ML models using MLlib, TensorFlow, PyTorch, Keras, scikit-learn, etc.
  • Hands-on experience building data pipelines using Hadoop components such as Sqoop, Hive, Spark, Spark SQL, and HBase.
  • Data processing and analysis experience with Pandas, NumPy, Matplotlib/Seaborn, etc., and with Big Data technologies (Hadoop/Spark).
  • Must have Natural Language Processing (NLP) and Computer Vision experience.
  • Ability to evaluate and choose the best-suited ML algorithms, perform feature engineering, and optimize Machine Learning models is mandatory.
  • Strong fundamentals in algorithms, data structures, statistics, predictive modeling, and distributed systems is a must.
  • Strong experience with data science notebooks and IDEs such as Jupyter, Zeppelin, RStudio, PyCharm, etc.
  • Strong Mathematics and Statistics background (Linear Algebra, Calculus, Probability, and Statistics).
  • 4+ years of hands-on development, deployment, and production support experience in a Hadoop environment.
  • Proficient in Big Data, SQL, relational databases, and NoSQL databases for data retrieval and analysis.
  • Must have experience developing HiveQL queries and UDFs for analyzing semi-structured/structured datasets.
  • Expertise in Unix/Linux environments, including writing scripts and scheduling/executing jobs.
  • Experience with AWS and other cloud platforms.
  • Experience using Git and Eclipse.
  • Experience creating and managing RESTful APIs using Python and Java frameworks.
  • Experience with Docker and Kubernetes containerization.
  • Hands-on experience ingesting and processing various file formats such as Avro, Parquet, SequenceFiles, and text files.
  • Successful track record of building automation scripts/code using Java, Bash, Python, etc., and experience with the production support issue-resolution process.

Preferred Skills:

  • Machine Learning, Big Data, NLP, Deep Learning, Python, MLlib, PyTorch, TensorFlow, NumPy/SciPy/Pandas, Spark, Hive, Data Science Notebooks, SQL, API, Unix/Linux, AWS