Senior Data Engineer

astro sirens llc • Bulgaria
Remote

Job Description

About Us:

WorkGarden (formerly Astro Sirens) is a forward-thinking software consulting company specializing in innovative software and data solutions. We are looking for a Senior Data Engineer to join our team. In this role, you'll work with cutting-edge cloud technologies such as Databricks, Kafka, Spark, and DBT, along with the Python ecosystem, to build robust data pipelines and scalable infrastructure.

Responsibilities:

  • Design, implement, and maintain robust and scalable data pipelines using AWS, Azure, and containerization technologies
  • Develop and maintain ETL/ELT processes to extract, transform, and load data from various sources into data warehouses and data lakes
  • Collaborate with data scientists, analysts, and other engineers to ensure seamless data flow and availability across the organization
  • Optimize data storage and retrieval performance by utilizing cloud services like AWS Redshift, Azure Synapse, or other relevant technologies
  • Work with containerization tools like Docker and Kubernetes to ensure smooth deployment, scalability, and management of data pipelines
  • Monitor, troubleshoot, and optimize data processing pipelines for performance, reliability, and cost-efficiency
  • Automate manual data processing tasks and improve data quality by implementing data validation and monitoring systems
  • Implement and maintain CI/CD pipelines for data workflow automation and deployment
  • Ensure compliance with data governance, security, and privacy regulations across all data systems
  • Participate in code reviews and ensure the use of best practices and documentation for data engineering solutions
  • Stay up-to-date with the latest data engineering trends, cloud services, and technologies to continuously improve system performance and capabilities

Requirements:

  • Excellent communication skills; fluent verbal English is mandatory for explaining complex technical concepts to non-technical stakeholders and collaborating across teams
  • Proven experience as a Data Engineer, with hands-on experience building and managing data pipelines
  • Strong proficiency in cloud technologies, specifically AWS (e.g., S3, Redshift, Glue) and Azure (e.g., Data Lake, Azure Synapse)
  • Experience working with containerization and orchestration tools such as Docker and Kubernetes
  • Proficient in data engineering programming languages, such as Python, Java, or Scala
  • Solid experience with SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra)
  • Familiarity with data processing frameworks like Apache Spark, Apache Kafka, or similar tools
  • Experience with workflow orchestration tools like Apache Airflow, DBT, or similar
  • Knowledge of data warehousing concepts and technologies (e.g., Snowflake, Amazon Redshift, or Google BigQuery)
  • Strong understanding of ETL/ELT processes and best practices
  • Experience with version control systems like Git
  • Strong problem-solving skills and a proactive approach to troubleshooting and optimization

Preferred Qualifications:

  • Experience with data governance and security best practices in cloud environments
  • Familiarity with infrastructure-as-code tools such as Terraform or CloudFormation
  • Experience in working with machine learning and analytics tools for data analysis and reporting
  • Knowledge of data visualization tools (e.g., Power BI, Tableau) is a plus
  • Previous experience working in agile development teams

Benefits

  • Competitive salary and flexible payment method
  • Opportunities for growth and professional development
  • Flexible working hours and full remote work opportunity
  • Work in a collaborative, innovative, and inclusive environment
  • Be a part of a data-driven culture that is at the forefront of innovation
