Job Description
Rate: 26,000-28,000 PLN net per month
Required knowledge and experience:
- Experience with Scala and Spark,
- Knowledge of the Hadoop stack: YARN, EMR, Sqoop, Hive,
- Kafka (nice to have, not strictly required),
- Impala,
- Jenkins, JIRA, Bitbucket, and Git
Key responsibilities include:
- Building a distributed, highly parallelized big data processing pipeline that processes massive amounts of data (both structured and unstructured) in near real time
- Leveraging Spark to enrich and transform corporate data, enabling search, data visualization, and advanced analytics (see the sketch after this list)
- Working closely with the DevOps, QA, and Product Management teams in a Continuous Delivery environment
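For illustration only, a minimal sketch of the kind of near-real-time Spark enrichment pipeline this role describes. All names (the Kafka topic, S3 paths, and column names) are hypothetical, and the actual architecture may differ; the Kafka source also assumes the spark-sql-kafka connector is on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Illustrative sketch of a near-real-time enrichment pipeline.
// Topic, paths, and column names are assumptions, not the team's actual setup.
object EnrichmentPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-enrichment")
      .getOrCreate()
    import spark.implicits._

    // Static reference data used to enrich the stream (hypothetical path).
    val customers = spark.read.parquet("s3://corp-data/customers")

    // Semi-structured events arriving via Kafka (assumed topic "events").
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()
      .select(
        get_json_object($"value".cast("string"), "$.customerId").as("customerId"),
        get_json_object($"value".cast("string"), "$.payload").as("payload")
      )

    // Stream-static left join: enrich each event with customer attributes,
    // then persist the result for search, visualization, and analytics.
    events.join(customers, Seq("customerId"), "left")
      .writeStream
      .format("parquet")
      .option("path", "s3://corp-data/enriched-events")
      .option("checkpointLocation", "s3://corp-data/checkpoints/enriched-events")
      .start()
      .awaitTermination()
  }
}
```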
Recruitment stages:
- Short interview with a recruiter
- Technical interview with the Tech Lead (1 hour)