Hybrid in New York, New York
Yesterday
Job Overview:
We seek a highly skilled and motivated Data Engineer with strong expertise in Python, PySpark, AWS, Databricks, and Snowflake to join our dynamic team. The ideal candidate has hands-on experience with Spark optimization, SQL data processing, and AWS tools and technologies. Exposure to Informatica and data streaming tools is a plus.

Key Responsibilities:
- Design, develop, and optimize scalable data processing pipelines using Spark and PySpark.
- Implement Spark optimization techniques.
Full-time
Depends on Experience