Senior Data Engineer (Azure Data Factory, Databricks, Spark)
Join bp's Innovation & Engineering organization as a Senior Data Engineer to design, build, and maintain scalable data pipelines and data infrastructure. Work remotely from Poland and collaborate with data scientists, analysts, and business stakeholders.
Job Description
Job Title: Senior Data Engineer – 100% Remote (Poland)
Contract Type: B2B / Self-Employed
Duration: 12 Months
Location: Remote (Candidates must be local to Poland)
Work Hours: Monday–Friday, 40 hours per week, GMT working hours
Role Overview
We are seeking an experienced Senior Data Engineer to join bp’s Innovation & Engineering organization. This role plays a key part in bp’s digital transformation and energy transition initiatives, designing, building, and maintaining scalable data pipelines and data infrastructure that support analytics, reporting, and enterprise decision-making.
Interview Process
First round: Online Coderbyte assessment
Second round: Advanced algorithmic coding interview (medium-complexity LeetCode problems, including tree-based questions)
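To give a sense of the expected level, the sketch below shows a typical medium-difficulty, tree-based problem in Python: validating that a binary tree satisfies the binary search tree property. The specific problem is an illustrative assumption, not a confirmed interview question.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TreeNode:
    """Minimal binary tree node."""
    val: int
    left: Optional["TreeNode"] = None
    right: Optional["TreeNode"] = None


def is_valid_bst(node: Optional[TreeNode],
                 low: float = float("-inf"),
                 high: float = float("inf")) -> bool:
    """Check that every node's value stays inside the open interval
    (low, high) implied by its ancestors."""
    if node is None:
        return True
    if not (low < node.val < high):
        return False
    return (is_valid_bst(node.left, low, node.val)
            and is_valid_bst(node.right, node.val, high))


# Example: a valid BST with root 2 and children 1 and 3.
root = TreeNode(2, TreeNode(1), TreeNode(3))
assert is_valid_bst(root)
```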
Must-Have Skills
Intermediate-level SQL proficiency
Strong understanding of data modeling and data structures
Ability to solve medium-complexity algorithmic problems
Strong analytical and problem-solving skills
Nice-to-Have Skills
Experience with Apache Spark or Databricks
Exposure to the Apache ecosystem
Strong communication and collaboration skills
Key Responsibilities
Design, develop, and maintain scalable data pipelines using Azure Data Factory (ADF) and Databricks (see the illustrative sketch after this list)
Collaborate with data scientists, analysts, and business stakeholders to gather and understand data requirements
Ensure data quality, data integrity, data governance, and compliance across platforms
Optimize data pipelines and workflows for performance, reliability, and cost efficiency
Integrate structured and unstructured data from IoT platforms, cloud systems, and enterprise sources
Implement data security and compliance measures aligned with bp standards
Participate in Agile development processes and continuous improvement initiatives
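As a rough illustration of the pipeline work described above, the sketch below shows a minimal Databricks-style PySpark job: ingest raw IoT readings, apply basic quality checks, aggregate, and persist the result as a Delta table. All paths, column names, and the table name are hypothetical placeholders rather than bp systems, and a real pipeline would typically be triggered and orchestrated from ADF.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sensor-readings-daily").getOrCreate()

# Ingest raw IoT readings (e.g. landed by an ADF copy activity) from cloud storage.
raw = spark.read.json("/mnt/landing/iot/readings/")

# Basic quality gates: drop malformed rows and deduplicate on the event key.
clean = (
    raw.dropna(subset=["device_id", "event_time", "value"])
       .dropDuplicates(["device_id", "event_time"])
       .withColumn("event_date", F.to_date("event_time"))
)

# Daily aggregate for downstream analytics and reporting.
daily = (
    clean.groupBy("device_id", "event_date")
         .agg(F.avg("value").alias("avg_value"),
              F.count("*").alias("reading_count"))
)

# Persist as a partitioned Delta table that BI and data science teams can query.
(daily.write.format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable("analytics.device_readings_daily"))
```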
Required Qualifications and Experience
Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field
5+ years of experience in data engineering or related roles
Hands-on experience with Databricks for big data processing
Strong experience with Azure Data Factory (ADF)
Proficiency in SQL, Python, and Spark, plus orchestration and streaming tools such as Airflow and Kafka
Experience with cloud platforms, preferably Azure
Experience with data warehousing technologies such as Snowflake or Redshift
Strong understanding of data architecture and big data technologies
Familiarity with DevOps practices and CI/CD pipelines
Excellent communication skills, with experience working in a global, multicultural environment
Preferred Skills and Certifications
Azure Data Engineer Associate or equivalent certification
Experience with machine learning pipelines and data science workflows
Knowledge of GDPR, ISO 27001, and data privacy regulations
Experience in energy, oil and gas, or industrial domains
Keywords (ATS Optimized)
Data Engineer, Data Engineering, Azure Data Factory, ADF, Databricks, Spark, SQL, Python, Big Data, Cloud Data Engineer, Azure, Data Pipelines, ETL, Data Modeling, Data Architecture, CI/CD, DevOps, Remote Data Engineer, Poland, B2B Contract
#DataEngineer
#DataEngineering
#AzureDataFactory
#Databricks
#Spark
#BigData
#SQL
#Python
#CloudEngineering
#AzureJobs
#RemoteJobs
#PolandJobs
#B2BContract
#ETL
#DigitalTransformation
#EnergyTech