Data Engineer - Credit Technology

Overview

On Site
USD 350,000.00 per year
Full Time

Skills

Extract, Transform, Load (ETL)
Data Quality
Storage
API
Collaboration
Dimensional Modeling
Scalability
Data Engineering
SQL
Relational Databases
Python
Data Manipulation
Pandas
PySpark
Debugging
Conflict Resolution
Problem Solving
Redis
Messaging
Caching
Data Lake
Data Warehouse
Data Storage
Apache Parquet
Code Optimization
Performance Tuning
Communication
Grafana
Apache Kafka
Streaming
Apache Flink
Finance
Research
Risk Management
Portfolio Management
Order Management
Snowflake Schema

Job Details

Description Summary:

A leading hedge fund is seeking a skilled Data Engineer to join its Credit Technology team. This team is responsible for building, owning, and supporting a world-class data platform for portfolio managers and their teams, and it is developing the suite of core components that will underpin the team's offering for years to come. The team is looking for an experienced and eager engineer to support this mandate.

Role Overview:
  • Design, build, and grow a modern data platform and data-intensive applications, from ingestion through ETL, data quality, storage, and consumption/APIs.
  • Work closely with quantitative engineers and researchers.
  • Collaborate in a global team environment to understand, engineer, and deliver on business requirements.
  • Balance feasibility, stability, scalability, and time-to-market when delivering solutions.

Qualifications & Requirements:
  • 5+ years of work experience in a data engineering or similar data-intensive capacity.
  • Demonstrable expertise in SQL and relational databases.
  • Strong skills in Python and at least one data-manipulation library/framework (e.g., Pandas, Polars, Dask, Vaex, PySpark).
  • Strong debugging skills at all levels of the application stack and proven problem-solving ability.
  • Strong knowledge of the data components used in distributed applications (e.g., Kafka, Redis, or other messaging/caching tools).
  • Experience architecting and building data platforms and ETL pipelines, ideally both batch and streaming, using data lake, warehouse, or lakehouse patterns.
  • Experience with column-oriented data storage and serialization formats such as Parquet/Arrow.
  • Experience with code optimization and performance tuning.
  • Excellent communication skills.

Additional experience in the following areas is a plus:
  • Experience building application-level code (e.g., REST APIs to expose business logic).
  • Prior use of tooling such as Prometheus, Grafana, or Sentry for metrics monitoring and distributed tracing.
  • Experience with distributed stateful stream processing (e.g., Kafka Streams, Flink, Arroyo).
  • Work with financial instruments/software in areas such as research, risk management, portfolio management, reconciliation, order management, etc.
  • Prior experience with ClickHouse, Snowflake, or KDB.

Total compensation of up to USD 350,000. Apply below.