Mid to Senior-Level Data Engineer and Data Analyst
We are hiring mid to senior-level Data Engineers and Data Analysts to build modern data pipelines, work on GenAI-enabled data workflows, and deliver analytics solutions on AWS, Azure, or GCP. The roles support multiple enterprise clients across different locations and projects. Positions may be remote, hybrid, or onsite depending on client requirements.
Job Description
Summary
We're hiring a mid to senior-level Data Engineer with strong analytical skills to build pipelines, prepare analytical datasets, and support GenAI projects. The role blends data engineering, analysis, and cross-team collaboration.
Responsibilities
- Design, build, and maintain scalable ETL/ELT data pipelines using Python and SQL
- Ingest data from APIs, cloud storage, databases, files, and streaming platforms
- Develop analytics-ready and ML-ready datasets for reporting and advanced use cases
- Implement data quality checks, validation, monitoring, and lineage
- Collaborate with business stakeholders, analysts, and ML teams to define metrics and ensure data accuracy
- Optimize data models to support dashboards and self-service analytics
- Prepare structured and unstructured data for GenAI use cases (embeddings, vector databases, RAG pipelines)
- Improve performance, reliability, scalability, and cost efficiency
- Document pipelines, data models, and operational processes clearly
Key Highlights
- Python (Pandas, PySpark), strong SQL
- ETL/ELT design, modeling, and pipeline optimization
- Experience with cloud data warehouses (Snowflake, BigQuery, Redshift, Synapse)
- Tools like Airflow, dbt, ADF, Glue, Dataflow, Databricks, or similar
- Exposure to GenAI data workflows
- Git, CI/CD, basic DevOps awareness
Technical Skills Required
- Strong Python experience (Pandas, PySpark)
- Advanced SQL skills
- Hands-on experience designing ETL/ELT pipelines and data models
- Experience with cloud data platforms:
  - AWS (Redshift, Glue, S3, Athena)
  - Azure (ADF, Synapse, Databricks)
  - GCP (BigQuery, Dataflow, Pub/Sub)
- Familiarity with orchestration and transformation tools such as:
  - Airflow, dbt, Databricks, Glue, ADF, Dataflow
- Exposure to GenAI data workflows (vector embeddings, document ingestion, RAG pipelines)
- Experience with Git, CI/CD, and basic DevOps practices
Qualifications
- 5-15 years of experience in Data Engineering or hybrid Data Engineering/Data Analytics
- Bachelor's degree in Computer Science, Engineering, Data, or a related field (or equivalent experience)