Job Description
Scala Developer – Big Data & Analytics | Remote (U.S.-based) | 12-Month Contract (Extension Likely) | W2 Only!
No Corp-to-Corp options or visa sponsorship are available for this position.
Optomi, in partnership with a leading consulting firm, is seeking a Scala Developer to support a critical data initiative for a large-scale public sector program. This fully remote role is part of a strategic Data Management and Analytics project and offers the opportunity to work on high-impact data solutions. Candidates will join a collaborative delivery team focused on processing high-volume healthcare encounter claims data using modern Big Data technologies.
What the right candidate will enjoy:
- Long-term contract with strong potential for extension!
- High-impact work with large-scale data initiatives in the public sector!
- Access to modern Big Data tools and enterprise-scale architecture!
- Collaborative, cross-functional team environment with both consulting and government stakeholders!
Experience of the right candidate:
- Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 5+ years of experience in software engineering, including strong functional programming expertise.
- Advanced experience working with Apache Spark using Scala for large-scale data processing.
- Proficiency with HDFS and SQL in a Big Data ecosystem.
- Solid understanding of Object-Oriented Programming principles.
- Experience working in distributed computing environments such as Hadoop, including Cloudera-based platforms.
- Familiarity with Drools and Java EE/J2EE frameworks.
Preferred Qualifications:
- Prior experience on public sector or healthcare data projects.
- Understanding of claims data or healthcare transaction processing (e.g., encounter data).
- Experience optimizing Spark jobs for performance and reliability (a brief tuning sketch follows this list).
- Familiarity with rule engines, enterprise service buses, and data governance concepts.
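For illustration only, here is a minimal Scala sketch of the kind of Spark tuning this role touches. The paths, partition count, and column name are hypothetical assumptions rather than project specifics; the sketch simply shows common levers such as shuffle-partition sizing, key-based repartitioning, and persisting a reused intermediate.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

object SparkTuningSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("tuning-sketch")
      // Size shuffle partitions to the cluster; the default of 200 often misfits large jobs.
      .config("spark.sql.shuffle.partitions", "400")
      .getOrCreate()

    val claims = spark.read.parquet("/data/claims") // hypothetical input path

    // Repartition on the grouping key to spread skewed shuffles,
    // and persist a reused intermediate rather than recomputing it.
    val byMember = claims
      .repartition(400, claims("member_id")) // "member_id" is an assumed column
      .persist(StorageLevel.MEMORY_AND_DISK)

    byMember.groupBy("member_id").count()
      .write.mode("overwrite").parquet("/data/claim_counts")

    byMember.unpersist()
    spark.stop()
  }
}
```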
Responsibilities of the right candidate:
- Design, develop, and maintain Scala-based applications to process high-volume healthcare claims data.
- Build and optimize Spark-based data pipelines that align with business and regulatory requirements (see the illustrative sketch after this list).
- Collaborate with cross-functional teams, including data architects, QA, and project stakeholders.
- Participate in validation, testing, and performance tuning of data workflows.
- Ensure code quality, reusability, and adherence to best practices in distributed computing.
- Support integration of systems across the Big Data ecosystem (Hadoop, Cloudera, HDFS, etc.).
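To make the day-to-day work concrete, here is a minimal, hypothetical sketch of a Scala/Spark encounter-claims pipeline: read a raw extract from HDFS, apply a simple validation rule, and write an aggregate back out. The schema (claim_id, member_id, billed_amount), the paths, and the validation rule are illustrative assumptions, not the program's actual requirements.

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

object EncounterPipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("encounter-pipeline-sketch").getOrCreate()

    // Hypothetical encounter-claims extract with claim_id, member_id, billed_amount columns.
    val encounters = spark.read
      .option("header", "true")
      .csv("hdfs:///landing/encounters/*.csv")

    // Example validation: drop records missing a claim ID and flag negative billed amounts.
    val validated = encounters
      .filter(F.col("claim_id").isNotNull)
      .withColumn(
        "amount_flag",
        F.when(F.col("billed_amount").cast("double") < 0, "NEGATIVE").otherwise("OK")
      )

    // Example aggregation: per-member billed totals, written back to HDFS as Parquet.
    validated
      .filter(F.col("amount_flag") === "OK")
      .groupBy("member_id")
      .agg(F.sum(F.col("billed_amount").cast("double")).alias("total_billed"))
      .write.mode("overwrite")
      .parquet("hdfs:///curated/member_totals")

    spark.stop()
  }
}
```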
What we’re looking for:
- Strong engineering fundamentals with a focus on functional and distributed programming.
- Hands-on experience with modern data platforms and Big Data frameworks.
- Ability to translate business requirements into scalable technical solutions.
- Effective communication and teamwork across technical and non-technical stakeholders.
- Self-motivated professional with a mindset for continuous learning and improvement.