Join OptiCall Solutions as a Mid-Level MLOps Engineer to scale AI systems, ensure reliable model deployment, and build robust pipelines for speech-to-text processing and performance analytics models.
Job Description
Mid-Level MLOps Engineer
OptiCall Solutions | Remote | Full-time
About OptiCall Solutions
OptiCall Solutions is an innovative AI-powered call center analytics platform that revolutionizes quality control through automated call transcription and intelligent operator performance evaluation. We leverage cutting-edge machine learning to help businesses optimize their customer service operations at scale.
The Role
We're looking for a talented Mid-Level MLOps Engineer to join our growing team and take ownership of our ML infrastructure. You'll play a crucial role in scaling our AI systems, ensuring reliable model deployment, and building robust pipelines that power our analytics platform.
What You'll Do
- Build and maintain ML pipelines for speech-to-text processing and performance analytics models
- Design CI/CD workflows for automated model training, validation, and deployment
- Implement monitoring solutions to track model performance, data drift, and system health
- Optimize infrastructure for handling large-scale audio processing and real-time analytics
- Develop containerized microservices with Docker and orchestrate them with Kubernetes
- Create infrastructure-as-code solutions with Terraform for reproducible deployments
- Collaborate with ML Engineers to streamline the path from research to production
- Set up logging and observability systems (Grafana, Loki, Prometheus)
What We're Looking For
- 2-4 years of hands-on experience in MLOps, DevOps, or ML Engineering roles
- Strong Python programming skills and proficiency in bash/shell scripting
- Practical experience with Docker containerization and Kubernetes orchestration
- Solid understanding of CI/CD concepts and tools (GitLab CI, GitHub Actions)
- Experience with cloud platforms (AWS preferred, but GCP/Azure acceptable)
- Familiarity with ML frameworks like PyTorch, TensorFlow, or Scikit-learn
- Knowledge of monitoring and logging tools (Prometheus, Grafana)
- Understanding of ML lifecycle management and model versioning
Bonus Points
- Experience with speech/audio processing pipelines (ASR, TTS, voice biometrics)
- Knowledge of model serving frameworks (TorchServe, TensorFlow Serving) or experience serving models behind FastAPI
- Understanding of distributed systems and microservices architecture
- Experience with data and model versioning tools (DVC, MLflow)
- Familiarity with GPU infrastructure and optimization
- Previous work in SaaS or analytics platforms
Why Join OptiCall Solutions
- Work on real-world AI problems with immediate business impact
- Flexible remote work - work from anywhere
- Opportunity to shape ML infrastructure from the ground up
- Collaborative team environment with experienced engineers
- Competitive salary package
- Professional development opportunities
- Modern tech stack and best practices
- Flat hierarchy and direct impact on product development
Our Tech Stack
Python, Docker, Kubernetes, Terraform, GitLab CI/CD, AWS, Prometheus, Grafana, Loki, PyTorch/TensorFlow, FastAPI, PostgreSQL
Location: Fully Remote (European time zones preferred)
Employment Type: Full-time
Compensation: Competitive salary based on experience + equity options
How to Apply
Send your CV and a brief summary of your MLOps experience to [your email], or connect with us directly on LinkedIn.
Join us in transforming the call center industry with AI! 🚀