Senior AI/ML Engineer

TEKsystems • Texas Metropolitan Area
Remote
AI Summary

Design, develop, and deploy secure, scalable, and high-performance ML pipelines using AWS and/or GCP. Collaborate with data scientists and engineers to ensure secure data processing and model deployment. Develop and maintain CI/CD pipelines for ML workflows.

Key Highlights
Design and develop AI/ML components of various solutions
Collaborate with data scientists and engineers to ensure secure data processing and model deployment
Develop and maintain CI/CD pipelines for ML workflows
Technical Skills Required
Python, PySpark, SQL, Jupyter Notebooks, Distributed Computing, TensorFlow, PyTorch, Scikit-learn, AWS SageMaker, AWS Lambda, GCP Vertex AI, BigQuery ML, Kubeflow, MLflow, Docker, Kubernetes, Snowflake, S3, Google Cloud Storage (GCS)
Benefits & Perks
Fully remote work
Up to 50% travel to client sites
Annual bonuses
Profit sharing
Health insurance

Job Description


Think of TEKsystems Global Services (TGS) as the growth solution for enterprises today. We unleash growth through technology, strategy, design, execution and operations with a customer-first mindset for bold business leaders. We deliver cloud, data and customer experience solutions. Our partnerships with leading cloud, design and business intelligence platforms fuel our expertise.


We value deep relationships, dedication to serving others and inclusion. We drive positive outcomes for our people and our business, and we stay true to our commitments and act in harmony with our words. We exist to create significant opportunities for people to achieve fulfillment through career success.


Ready to join us?


Here’s what this opportunity, supported through our TGS Talent Acquisition Team, requires:


Position Overview


We are seeking a highly skilled and motivated Senior AI/ML Engineer with 5 or more years of experience in data engineering and at least 3 years in AI/ML engineering. The ideal candidate will have hands-on expertise in designing, developing, and deploying secure, scalable, and high-performance ML pipelines that fully comply with industry-standard security and risk frameworks such as RMF, NIST, and CMMC. They should be proficient in Amazon Web Services (AWS) and/or Google Cloud Platform (GCP), with a solid foundation in data engineering, machine learning, cloud-native MLOps tools, and data governance. A strong team player, they will be responsible for developing and orchestrating the AI/ML components of solutions delivered by our Data & AI Practice for clients.


This is a fully remote role available throughout the U.S. and entails up to 50% travel to client sites, depending on project needs.


Responsibilities


  • Participate actively in requirements-gathering workshops with customers, translate functional requirements into technical solutions, and distill complex technical concepts into actionable insights for stakeholders.
  • Participate in architectural discussions, independently or under the guidance of the Practice Architect and/or Lead Engineer, to design and develop effective, efficient, reliable, secure, and scalable data engineering solutions aligned with the overall data management strategy.
  • Build end-to-end machine learning pipelines using AWS (e.g., SageMaker, Lambda, S3) or GCP (e.g., Vertex AI, Cloud Functions, BigQuery) for training, evaluation, and model lifecycle management and ensure scalability, reliability, and performance of ML models in production environments.
  • Build, train, and fine-tune models using frameworks like TensorFlow, PyTorch, or Scikit-learn and apply techniques such as hyperparameter tuning, feature engineering, and model evaluation to continuously improve accuracy and efficiency.
  • Design and implement robust data ingestion, transformation, and storage solutions using cloud-native tools (e.g., AWS Glue, GCP Dataflow) while ensuring data quality, governance, and compliance following industry and/or organizational standards.
  • Develop and maintain CI/CD pipelines for ML workflows using tools like AWS CodePipeline or GCP Cloud Build, automating model deployment, monitoring, and rollback strategies to support continuous delivery.
  • Implement IAM roles, VPC configurations, and encryption protocols to safeguard data and models following best practices for cost optimization and cloud security.
  • Collaborate with data scientists, DevSecOps engineers, and cybersecurity SMEs to ensure secure data processing and model deployment, and to operationalize deployed models.
  • Create prototypes and evaluate emerging tools and methodologies to drive innovation within the team.
  • Provide occasional support to sales and pre-sales partners, converting opportunities into revenue through thought leadership in the designated area of expertise (AI/ML).


Required Skills & Qualifications


  • Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or related field
  • 5 or more years of hands-on experience in data engineering (preferably in a cloud environment), with 3 or more years of experience in machine learning engineering roles, preferably in secure or classified environments
  • Strong proficiency in Python, PySpark, SQL, Jupyter notebooks, and distributed computing; R, Java, or Scala a plus
  • Strong understanding of core machine learning, deep learning, and NLP
  • Deep understanding of cloud-native ML services like Amazon SageMaker, AWS Lambda, GCP Vertex AI, and BigQuery ML.
  • Proficiency in supervised, unsupervised, and deep learning techniques
  • Hands-on experience with TensorFlow, PyTorch, Scikit-learn, or similar libraries
  • Knowledge of CI/CD pipelines, model versioning, and automated deployment and experience with tools like Kubeflow, MLflow, Docker, and Kubernetes
  • Production-level experience handling structured, semi-structured, and unstructured data from APIs, RDBMS, and/or streaming sources into data lakes or storage platforms [e.g., Snowflake, S3, Google Cloud Storage (GCS)]
  • Ability to design robust evaluation metrics and monitor model performance post-deployment, with experience in drift detection, retraining strategies, and alerting mechanisms
  • Solid understanding of data privacy, IAM roles, encryption, and compliance standards (e.g., GDPR, HIPAA), and the ability to apply that knowledge to implement secure ML solutions in cloud environments
  • Strong analytical skills to translate business problems into ML solutions as well as troubleshoot complex issues across data, model, and infrastructure layers
  • Excellent verbal and written communication skills
  • Ability to work cross-functionally with product managers, data scientists, and engineering teams
  • Passion for staying current with the latest AI/ML research and cloud technologies, and the ability to evaluate and adopt emerging tools and methodologies


Preferred Skills & Qualifications


  • Familiarity with DoD data strategy, RMF / NIST / CMMC / FedRAMP frameworks
  • Experience with Generative AI, LLMs, transformer architecture, and prompt engineering
  • Knowledge of Agentic AI frameworks
  • Industry-recognized associate- or advanced-level AI/ML certification from AWS, GCP, Snowflake, or Databricks, such as:
  • AWS Machine Learning Engineer – Associate
  • AWS Machine Learning – Specialty
  • GCP - Professional Machine Learning Engineer
  • Databricks Certified Machine Learning Associate
  • Databricks Certified Machine Learning Professional


******************************************


We reserve the right to pay above or below the posted wage based on factors unrelated to sex, race, or any other protected classification.


Additional earnings may be available through incentive programs like annual bonuses, profit sharing, etc.


Please click on the following link to learn more about our full-time internal employment benefits: https://www.teksystems.com/en/careers/benefits.


The expected posting close date is December 8, 2025.

