Overview
Remote
Depends on Experience
Full Time
Skills
Amazon Web Services
Google Cloud Platform
Data Management
Python
Apache Spark
Apache Airflow
Snowflake Schema
Database Design
Data Processing
Data Integration
Job Details
What you will do
- Collaborate with engineers, architects, data scientists, analytics teams, and business stakeholders in an Agile setup;
- Build and support both cloud and on-premises enterprise data infrastructure;
- Design and implement robust, reusable, and scalable data pipelines to automate ingestion, processing, and delivery of both batch and streaming data (structured and unstructured);
- Develop data APIs and delivery services to support critical business processes, analytical models, and ML applications;
- Assist with selecting and integrating data tools, frameworks, and applications to extend platform capabilities;
- Apply best practices for enterprise data management, including master data, reference data, metadata, data quality, and lineage.
Key competencies
- Strong problem-solving and critical-thinking abilities;
- Proactive, ownership-driven approach to work;
- High attention to detail and quality;
- Ability to break down complex tasks and prioritize effectively;
- Comfortable working both independently and in a team setting;
- Clear and structured communication skills.
Must haves
- 4+ years of experience as a Data Engineer;
- Hands-on experience with Python;
- Proven experience building data lakes, cloud data platforms (using Google Cloud Platform or AWS), ETL/ELT, and data integration;
- Proficiency with Snowflake and Apache Airflow;
- Expertise in developing distributed data processing and streaming frameworks (e.g., Apache Spark, Apache Beam, Apache Flink);
- Strong knowledge of NoSQL database technologies (e.g., MongoDB, Bigtable, DynamoDB);
- Proficiency with development, build, and deployment tools (e.g., Visual Studio, PyCharm, Git/Bitbucket, Bamboo, Maven, Jenkins, Nexus);
- Experience with containerized microservices and REST/GraphQL API development.
Nice to haves
- Experience with Kafka, Cloud Pub/Sub;
- Expertise in database design methodologies (e.g., RDBMS, Document DBs, Star Schema, Kimball Model);
- Familiarity with integration and service frameworks (e.g., API Gateways, Apache Camel, Swagger, Zookeeper, messaging tools, microservices);
- Exposure to CI/CD tools and processes (e.g., Jenkins, Docker/containers, OpenShift, Kubernetes);
- Experience working with financial services or wealth/asset management projects.
The benefits of joining us
- Professional growth: Accelerate your professional journey with mentorship, TechTalks, and personalized growth roadmaps.
- Competitive compensation: We match your ever-growing skills, talent, and contributions with competitive USD-based compensation and budgets for education, fitness, and team activities.
- A selection of exciting projects: Join projects that involve modern solution development for top-tier clients, including Fortune 500 enterprises and leading product brands.
- Flextime: Tailor your schedule for an optimal work-life balance: work from home or from the office, whichever makes you happiest and most productive.
Your application doesn't end here! To unlock the next steps, check your email and complete your registration on our Applicant Site. If you do not complete your registration, your application process will be terminated.
Good luck! We're rooting for you!
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.