Design, build, and maintain data infrastructure, pipelines, and architecture for a data-first team. Collaborate with cross-functional partners to deliver data-driven insights and real-time customer engagement using a modern cloud data stack.
Job Description
About the Role
As a Data Engineer, you will help build and optimize the company’s data infrastructure. You’ll develop both batch and streaming pipelines, maintain the cloud data lakehouse, and support data products across marketing automation, personalization, and customer insights. You’ll collaborate closely with Product, Analytics, and ML teams to power analytics, predictive models, and real-time customer engagement. The role is hands-on, spanning data pipelines, data modeling, cloud data warehousing, and integration with modern data tools.
Location: LATAM
Time Zone: Team operates on U.S. East/West Coast hours
How You’ll Make an Impact
- Build and maintain reliable batch and streaming pipelines using Airflow, dbt, Fivetran, Pub/Sub, and similar tools.
- Model and transform data in BigQuery, ensuring high-quality, well-structured datasets for analytics and ML.
- Develop and maintain data lake architecture with Apache Iceberg on GCS.
- Collaborate with analytics and ML teams to prepare data for segmentation, scoring, and personalization use cases.
- Contribute to Customer Data Platform (CDP) development and real-time data activation layers.
- Ensure governance, compliance, and data quality standards are met.
- Support product initiatives by translating business requirements into reliable data solutions.
- Promote adoption of modern data engineering practices and tools across the team.
What We’re Looking For
- 3–5 years of experience in data engineering or analytics engineering.
- Strong skills in SQL and Python.
- Experience with modern data tools such as dbt, Fivetran, and Airflow.
- Familiarity with cloud data platforms, preferably GCP and BigQuery.
- Exposure to streaming technologies like Pub/Sub, Kafka, or Kinesis is a plus.
- Understanding of data modeling and transformation best practices.
- Curious, proactive, and willing to grow into more advanced responsibilities (e.g., architecture, ML pipelines).
What You’ll Love
- 100% remote, flexible work environment.
- High-impact role on a data-first team with a clear roadmap.
- Work at the intersection of SaaS, AI, and customer experience.
- Modern GCP stack with tools like Vertex AI, dbt, and Iceberg.
- Collaborative, supportive team culture.
- Opportunities to grow and take ownership of advanced data initiatives.