Mid- to Senior-Level Data Engineer and Data Analyst

interon it solutions • United States
Remote • Visa Sponsorship

Job Description

We are actively hiring mid- to senior-level Data Engineers and Data Analysts with hands-on experience building modern data pipelines, working on GenAI-enabled data workflows, and delivering analytics solutions on AWS, Azure, or GCP. These roles support multiple enterprise clients across different locations and projects. Positions may be remote, hybrid, or onsite depending on client requirements.

Summary

We're hiring a mid- to senior-level Data Engineer with strong analytical skills to build pipelines, prepare analytical datasets, and support GenAI projects. The role blends data engineering, analysis, and cross-team collaboration.

Responsibilities

  • Design, build, and maintain scalable ETL/ELT data pipelines using Python and SQL (see the sketch after this list)
  • Ingest data from APIs, cloud storage, databases, files, and streaming platforms
  • Develop analytics-ready and ML-ready datasets for reporting and advanced use cases
  • Implement data quality checks, validation, monitoring, and lineage
  • Collaborate with business stakeholders, analysts, and ML teams to define metrics and ensure data accuracy
  • Optimize data models to support dashboards and self-service analytics
  • Prepare structured and unstructured data for GenAI use cases (embeddings, vector databases, RAG pipelines)
  • Improve performance, reliability, scalability, and cost efficiency
  • Document pipelines, data models, and operational processes clearly
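
For illustration, here is a minimal sketch of the kind of pipeline step and data-quality validation described in the list above, written in Python with pandas. The file paths, column names, and validation rules are hypothetical placeholders, not a prescribed implementation:

```python
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    # Extract: read raw order records from a landing zone (path is hypothetical).
    return pd.read_csv(path, parse_dates=["order_date"])

def validate(df: pd.DataFrame) -> pd.DataFrame:
    # Data quality checks: required columns present, no null keys, no negative amounts.
    required = {"order_id", "order_date", "amount"}
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {missing}")
    if df["order_id"].isna().any():
        raise ValueError("null order_id values found")
    if (df["amount"] < 0).any():
        raise ValueError("negative order amounts found")
    return df

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Transform: deduplicate on the business key, then build a daily revenue rollup.
    deduped = df.drop_duplicates(subset="order_id")
    return (
        deduped.groupby(deduped["order_date"].dt.date)["amount"]
        .sum()
        .reset_index()
        .rename(columns={"order_date": "order_day", "amount": "daily_revenue"})
    )

def load(df: pd.DataFrame, path: str) -> None:
    # Load: write the analytics-ready dataset as Parquet for the warehouse/BI layer.
    df.to_parquet(path, index=False)

if __name__ == "__main__":
    load(transform(validate(extract("raw/orders.csv"))), "curated/daily_revenue.parquet")
```

In practice the extract step would point at an API, cloud bucket, database, or stream rather than a local CSV, and the checks would typically be driven by a quality or monitoring framework rather than hand-rolled assertions.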

Core Skills

  • Python (Pandas, PySpark), strong SQL
  • ETL/ELT design, modeling, and pipeline optimization
  • Experience with cloud data warehouses (Snowflake, BigQuery, Redshift, Synapse)
  • Tools like Airflow, dbt, ADF, Glue, Dataflow, Databricks, or similar
  • Exposure to GenAI data workflows (a minimal sketch follows this list)
  • Git, CI/CD, basic DevOps awareness
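
To make the GenAI item above concrete, here is a minimal, vendor-neutral sketch of the document-chunking and embedding-preparation step that typically feeds a vector database in a RAG pipeline. The chunk size, the record shape, and the embed() stub are assumptions; any embedding model and vector store could stand in:

```python
from dataclasses import dataclass
from typing import Iterator

@dataclass
class Chunk:
    doc_id: str
    seq: int
    text: str

def chunk_document(doc_id: str, text: str, size: int = 500, overlap: int = 50) -> Iterator[Chunk]:
    # Split a document into overlapping character windows, a common prep
    # step before computing embeddings for retrieval (RAG) use cases.
    step = size - overlap
    for seq, start in enumerate(range(0, len(text), step)):
        piece = text[start:start + size]
        if piece.strip():
            yield Chunk(doc_id, seq, piece)

def embed(texts: list[str]) -> list[list[float]]:
    # Hypothetical stub: any embedding model or API could stand in here.
    raise NotImplementedError("plug in an embedding model")

def prepare_records(doc_id: str, text: str) -> list[dict]:
    # Produce (id, vector, metadata) records in the shape most vector stores ingest.
    chunks = list(chunk_document(doc_id, text))
    vectors = embed([c.text for c in chunks])
    return [
        {"id": f"{c.doc_id}:{c.seq}", "vector": vec, "metadata": {"text": c.text}}
        for c, vec in zip(chunks, vectors)
    ]
```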

Nice-to-Have

  • Strong Python experience (Pandas, PySpark)
  • Advanced SQL skills
  • Hands-on experience designing ETL/ELT pipelines and data models
  • Experience with cloud data platforms:
    • AWS (Redshift, Glue, S3, Athena)
    • Azure (ADF, Synapse, Databricks)
    • GCP (BigQuery, Dataflow, Pub/Sub)
  • Familiarity with orchestration and transformation tools such as Airflow, dbt, Databricks, Glue, ADF, Dataflow (a minimal Airflow sketch follows this list)
  • Exposure to GenAI data workflows (vector embeddings, document ingestion, RAG pipelines)
  • Experience with Git, CI/CD, and basic DevOps practices
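
As a concrete example of the orchestration tooling listed above, here is a minimal Airflow 2.x DAG that wires extract/transform/load tasks into a daily schedule. The DAG id, schedule, and task bodies are placeholders, not a required design:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder task bodies; in a real pipeline these would call the
# extract/validate/transform/load logic sketched earlier.
def extract() -> None:
    print("pulling raw data from source systems")

def transform() -> None:
    print("building analytics-ready tables")

def load() -> None:
    print("publishing to the warehouse")

with DAG(
    dag_id="daily_orders_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ argument; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```
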
Experience & Education

  • 5-15 years of experience in Data Engineering or hybrid Data Engineering/Data Analytics
  • Bachelor's degree in Computer Science, Engineering, Data, or a related field (or equivalent experience)

We consider candidates across all visa categories. Work-authorized applicants, as well as candidates who may require visa sponsorship now or in the future, will be considered in accordance with applicable laws. We are an Equal Opportunity Employer and do not discriminate on the basis of race, color, religion, sex, gender identity, sexual orientation, national origin, age, disability, veteran status, or any other protected characteristic.
