Sr Big Data Engineer (Hybrid, W2)

  • Sunnyvale, CA
  • Posted 2 days ago | Updated 2 days ago

Overview

Hybrid
Depends on Experience
Contract - W2

Skills

Agile
Amazon Web Services
Apache Cassandra
Apache Flume
Apache HBase
Apache Hadoop
MapReduce
Mentorship
Microservices
OOD
Offshoring
Python
Replication
Software Development Methodology
Development Testing
Data Processing
Big Data
Automated Testing
Apache Spark
Apache Kafka

Job Details

Please share resumes to

Position: Sr Big Data Engineer

Location: Sunnyvale, CA-Hybrid

Duration: 12+ Months

US Citizens only; W2

Requirements

BS degree in Computer Science, Computer Engineering, or equivalent

Proficient in Java/Scala, Spark, Kafka, Python, and Google Cloud Platform (GCP) cloud technologies

Must have current, hands-on experience with Scala, Java, Python, Oracle, Cassandra, HBase, and Hive

3+ years of experience across multiple Hadoop/Spark technologies such as Hadoop, MapReduce, HDFS, Cassandra, HBase, Hive, Flume, Sqoop, Spark, Kafka, and Scala

Familiarity with AWS scripting and automation

Flair for data, schemas, and data modeling, and an understanding of how to bring efficiency to the big data life cycle

Must be able to quickly understand technical and business requirements and translate them into technical implementations

Experience with Agile Development methodologies

Experience with data ingestion and transformation

Solid understanding of secure application development methodologies

Experience developing microservices using the Spring Framework is a plus

Understanding of automated QA needs related to Big Data

Strong object-oriented design and analysis skills

Excellent written and verbal communication skills

Responsibilities

Utilize your software engineering skills, including Java, Spark, Python, and Scala, to analyze disparate, complex systems and collaboratively design new products and services

Integrate new data sources and tools

Implement scalable and reliable distributed data replication strategies

Mentor and provide architecture and design direction to onsite/offshore developers

Collaborate with other teams to design, develop, and deploy data tools that support both operations and product use cases

Perform analysis of large data sets using components from the Hadoop ecosystem

Own product features from development and testing through to production deployment

Evaluate big data technologies and prototype solutions to improve our data processing architecture

Automate everything
