Senior Cloud Engineer - Apache Spark, GCP, Python, Microservices

Job Description


STRATEGIC STAFFING SOLUTIONS HAS AN OPENING!

This is a contract opportunity with our company that must be worked on a W2 only; there is no C2C eligibility for this position. Visa sponsorship is available! The details are below.

“Beware of scams. S3 never asks for money during its onboarding process.”


Job title: Lead Cloud Engineer – Apache Spark / GCP / Python / Microservices

Location: Charlotte, NC

Hybrid schedule with some on-site work

Contract Length: 12+ months

Pay: $85 per hour on W2


Senior hands-on engineer supporting the Model Risk & Finance platform in a hybrid cloud environment (GCP + OpenShift/Kubernetes).

This is a backend-focused role centered on distributed data processing and platform engineering — not UI development.

Key Responsibilities

  • Build, support, and enhance distributed data platforms and backend systems
  • Develop APIs, workflows, and platform components using Python
  • Work on large-scale Spark/PySpark data processing systems
  • Support and optimize Kubernetes/OpenShift-based environments
  • Contribute to CI/CD pipelines and platform automation
  • Debug, troubleshoot, and optimize distributed systems at scale
  • Support ongoing platform enhancements after the cloud migration

Core Technologies

  • Apache Spark / PySpark (required)
  • Google Cloud Platform (GCP) (strongly preferred)
  • Kubernetes / OpenShift
  • Python (Django, APIs)
  • CI/CD: GitHub Actions, Helm, Harness

Key Initiative

  • Migration from Hadoop to GCP
  • Build and support hybrid cloud platform (PyFarm)
  • Ongoing platform engineering and optimization after migration

Top Skill Priorities

  1. Spark at scale (production experience)
  2. Hands-on GCP experience (not just exposure)
  3. Kubernetes / OpenShift
  4. Python and microservices development
  5. Debugging and performance tuning of distributed systems

Nice to Have

  • AI/LLM integration experience (building capabilities)
  • GPU or platform-level AI exposure
  • Hadoop migration experience

Team Environment

  • Platform and Application Development team
  • Works closely with Data and Support teams
  • US and India team presence

Target Candidate Profile

  • Platform Engineer (Data / ML platform)
  • Cloud Data Engineer (Spark-heavy)
  • Big Data Engineer with Kubernetes and GCP experience
