Data Engineer (Remote)


Job Description


Industry & Sector: Financial services — investment risk analytics, portfolio engineering, and enterprise data platforms. We build scalable data infrastructure and analytics pipelines that power risk signals, regulatory reporting, and client-facing analytics for institutional customers.

Primary Title: Data Engineer (Remote, United States)

About The Opportunity

We are recruiting a remote Data Engineer to join a high-performance engineering team focused on operationalizing large-scale ETL/ELT and streaming data solutions. You will design, implement, and operate resilient data pipelines and platform components that deliver timely, accurate analytics for trading, risk, and reporting use cases.

Role & Responsibilities

  • Design, build, and maintain scalable batch and streaming data pipelines to ingest, transform, and deliver high-quality datasets for analytics and ML.
  • Author and optimize reusable ETL/ELT workflows using managed orchestration (e.g., Airflow; see the sketch after this list) and Spark-based compute for performance and cost-efficiency.
  • Implement and maintain cloud data platform components (data warehouses, storage, access controls) to support ad-hoc analytics and production reporting.
  • Collaborate with data scientists, analysts, and SREs to define data schemas, validation rules, monitoring, and SLAs for production datasets.
  • Drive data engineering best practices: modular code, CI/CD pipelines, automated testing, observability, and infrastructure-as-code.
  • Troubleshoot production incidents, perform root-cause analysis, and implement long-term reliability improvements.

Skills & Qualifications

Must-Have

  • Python
  • SQL
  • Apache Spark
  • Apache Airflow
  • Snowflake
  • AWS

Preferred

  • dbt
  • Apache Kafka
  • Terraform

Qualifications: Proven experience building production data pipelines for analytics or risk workflows; strong troubleshooting and system-design ability; familiarity with data governance, lineage, and observability practices. Candidates should be authorized to work in the United States.

Benefits & Culture Highlights

  • Fully remote, US-based role with flexible work policies and distributed engineering teams.
  • Focus on professional growth: technical mentorship, learning budget, and opportunities to influence platform design.
  • High-impact environment where engineering ownership and data quality drive business outcomes.

This role is ideal for hands-on engineers who enjoy building reliable data platforms (ETL/ELT, streaming, Snowflake, Spark, Airflow, AWS) for mission-critical financial analytics.
