Design and build robust ETL pipelines, support Microsoft Fabric environment, and collaborate with data engineers, analysts, and business stakeholders.
Job Description
Location: New Bremen, OH (4 days a week onsite)
Salary: 85K-100K
Relocation: Full relocation package including temporary housing, full moving costs covered, etc.
Minimum Qualifications:
- 3+ years' experience (internships count)
- Bachelor’s degree (Computer Science, Management Information Systems or equivalent)
- Proficient in object-oriented and event-driven programming in Python, with working knowledge of popular frameworks (PySpark, Pandas, NumPy, Flask, AsyncIO)
- Experience building ETL pipelines
- Hands-on experience in writing and profiling SQL queries
Plusses:
- Experience with Microsoft Fabric
- Familiar with REST/SOAP API principles and methods
- Good understanding of Cloud technologies like AWS
- DevOps principles – owning the code from development to deployment
Responsibilities
In this role, the data engineer will spend approximately 80% of their day working hands-on with Python and SQL, developing, maintaining, and optimizing data solutions that support analytics and business processes. The primary focus will be on building and supporting robust ETL pipelines that ingest, transform, and move data across platforms while ensuring reliability, scalability, and performance.
A key responsibility will be supporting the organization’s transition into a Microsoft Fabric environment. This includes designing and building data pipelines within Fabric, as well as creating and maintaining Fabric notebooks using PySpark to perform large-scale data transformations and processing. The engineer will adapt existing workflows and help develop new solutions aligned with Fabric best practices.
On a daily basis, this individual will collaborate closely with other data engineers, analysts, and business stakeholders to translate requirements into technical implementations. They will write, profile, and optimize SQL queries, troubleshoot pipeline and data issues, and integrate data from internal and external sources, including APIs when needed. The role also involves participating in code reviews, source control, and CI/CD processes, as well as working within an Agile team environment to support continuous delivery and improvement.