MLOps Engineer (Remote, Spain)

bark.com • Spain
Remote

Job Description


About Bark

Bark is an online services marketplace connecting customers with professionals across over 1,000 categories. Operating in nine countries including the UK, US, Australia, Canada, and New Zealand, we're transforming how people find trusted service providers for everything from home improvement to professional services.


Our platform uses cutting-edge technology to match customers with the right professionals quickly and efficiently. With a global team of over 220 people, we're currently undergoing an exciting transformation: migrating from a lead generation model to a full marketplace platform with subscription-based pricing.


As a profitable, PE-backed scale-up (EMK Capital), Bark offers the best of both worlds: the agility and innovation of a fast-moving business combined with financial stability and resources for growth. We recently launched our new marketplace model in Australia (Q4 2025) and are preparing for rollout to the UK and US markets in 2026. You'll have genuine ownership, responsibility, and the opportunity to shape our commercial strategy during a pivotal transformation phase, with the chance to make your own contribution to our journey.


*Please note you must be based in Spain to be considered for this fully remote role.


About the Role

We are looking for a proactive MLOps Engineer to join our staff data engineer in forming a new squad. This role is for a forward-thinking engineer who wants to seamlessly bridge the gap between high-throughput data engineering and Machine Learning infrastructure.

You will work on our Python and AWS-hosted data streaming platform, owning the full data lifecycle for real-time event tracking to ensure scalability, reliability, and cost-effectiveness. While the basic components are built, you will drive a large rollout of event tracking across different teams, tackling significant data validation, data modelling, and scaling challenges.

Crucially, the events you process will directly fuel our AI feature stores and models. You will collaborate closely with analysts, engineers, and product managers to enable accurate reporting for new product features and business KPIs, while simultaneously laying the foundation for our ML lifecycle. As we expand our AI capabilities, you will introduce MLOps best practices to deploy and serve models, with future opportunities to shape our LLMOps architecture.


Responsibilities

  • Build and operate a scalable data platform ingesting real-time events at high throughput.
  • Collaborate with Data Scientists to transition ML models from experimentation to production.
  • Build and maintain ML infrastructure for model serving (using FastAPI) and track model performance and lifecycle over time.
  • Collaborate with analysts, engineers, and product managers to understand user needs and take ownership of producing new event tracking functionality.
  • Implement automations and robust data quality controls, ensure the integrity of ingested data, and create the necessary monitoring alerts.
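To make the data-quality responsibility above concrete, here is a minimal sketch of the kind of validation an event-ingestion pipeline might apply before events reach downstream consumers. The event fields (`event_id`, `event_type`, `timestamp`, `payload`) are hypothetical, not Bark's actual schema:

```python
from datetime import datetime

# Hypothetical required fields and their expected types for an ingested event.
REQUIRED_FIELDS = {"event_id": str, "event_type": str, "timestamp": str, "payload": dict}

def validate_event(event: dict) -> list[str]:
    """Return a list of data-quality errors for one event (empty list = valid)."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in event:
            errors.append(f"missing field: {field}")
        elif not isinstance(event[field], expected_type):
            errors.append(f"bad type for {field}: expected {expected_type.__name__}")
    # Integrity check: reject timestamps that are not ISO-8601.
    if isinstance(event.get("timestamp"), str):
        try:
            datetime.fromisoformat(event["timestamp"])
        except ValueError:
            errors.append("timestamp is not ISO-8601")
    return errors
```

In a production pipeline, invalid events would typically be routed to a dead-letter queue and surfaced via monitoring alerts rather than silently dropped.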


Required Skills and Experience

  • Experience deploying and maintaining Python services in a major cloud environment (AWS, GCP, Azure). Specific experience with the AWS stack, including Kinesis, Lambdas, Glue, Firehose, and Athena, is highly desirable.
  • Experience with MLOps frameworks and experiment tracking tools such as AWS SageMaker, MLflow, Databricks, or W&B.
  • Basic knowledge of ML inference REST APIs (FastAPI).
  • Strong skills in operating and improving schema registries and data catalogs (Glue, Databricks, etc.).
  • Relevant CI/CD experience (e.g., GitHub Actions, GitLab Pipelines) automating tests and updates to schema registries and Lambda releases.
  • Solid experience with SQL and with data warehousing and data lake environments (e.g., Databricks, BigQuery, Redshift, S3).
  • Familiarity with cloud observability tools (e.g., Datadog, New Relic, or CloudWatch).


Desired skills and experience

  • Hands-on experience with a cloud event-streaming platform such as Kinesis, Pub/Sub, or Kafka.
  • Knowledge and practical experience with data modelling, Protobuf or Avro schemas, and managing schema evolution.
  • Production experience with containerization (Docker), orchestration (Kubernetes, AWS ECS/Fargate, etc.), and IaC (Terraform, Crossplane).
  • Familiarity with LLMOps and frameworks for building LLM agents (e.g., LangGraph).
  • Large-scale data processing with PySpark and Flink.
  • Data mart creation with dbt.
  • Job orchestration with Airflow.
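The schema-evolution point above can be illustrated with a small, hypothetical sketch (the schemas and `resolve` helper are illustrative, not Bark's code). In Avro-style evolution, adding a new field with a default keeps the change backward compatible: records written under the old schema can still be read under the new one:

```python
# Hypothetical Avro-style schemas: v2 adds an optional field with a default,
# which keeps the change backward compatible.
SCHEMA_V1 = {"fields": [{"name": "user_id"}, {"name": "category"}]}
SCHEMA_V2 = {"fields": [{"name": "user_id"}, {"name": "category"},
                        {"name": "country", "default": "unknown"}]}

def resolve(record: dict, reader_schema: dict) -> dict:
    """Read a record written with an older schema using a newer reader schema,
    filling fields absent from the record with their declared defaults."""
    out = {}
    for field in reader_schema["fields"]:
        name = field["name"]
        if name in record:
            out[name] = record[name]
        elif "default" in field:
            out[name] = field["default"]
        else:
            raise ValueError(f"field {name!r} missing and has no default")
    return out
```

A schema registry automates exactly this kind of compatibility check, rejecting new schema versions that would break existing readers or writers.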


Interview Process

  • Screening Call with Talent Partner (30 mins)
  • 1st Stage - Hiring Manager Stage (30 mins)
  • 2nd Stage - Technical Interview (45-60 mins)
  • 3rd Stage - Values interview (30 mins)


Diversity Statement

At Bark, we are a platform for people, revolutionising the way professionals and individuals connect since 2014. Our culture is defined by excitement, ambition, and a commitment to raising the bar. We value diversity, equity, inclusion, and belonging (DEIB) and are dedicated to embedding these principles into everything we do. We are committed to fostering an inclusive environment where everyone can thrive, and our focus is on hiring, retaining, and developing a globally diverse workforce that is passionate about advancing our platform and helping our customers succeed. Be part of our dynamic team, where bold ideas thrive, and create a future worth shouting about.

