Software Engineer - Spark
You will use and contribute to existing frameworks like Spark, Flink, and Storm, and also innovate in this space. You will be excited by the prospect of working collaboratively with other groups internal to Apple and with the open source community.
You will help build a world-class control plane for provisioning and lifecycle management of large-scale data processing clusters.
- Committer/contributor to Apache Spark, Flink, Storm, or Hadoop
- Deep understanding of Apache Spark, including Project Tungsten and the Catalyst optimizer
- Experience developing and maintaining large-scale Spark jobs
- Experience designing and developing data connectors for Spark
- Deep understanding of columnar database design and implementation
- Experience with Scala strongly recommended.
- Deep understanding of core CS including data structures, algorithms and concurrent programming
- Strong background in systems-level Java, including garbage collection, concurrency models, native and async IO, and off-heap memory management
- Passion for developing and testing clear, robust code
- Sound knowledge of UNIX and shell scripting
- Experience with virtualization and containerization
- Phenomenal communication skills in English, both written and verbal
You will have a strong practical understanding of how to develop fault-tolerant, high-performance distributed systems. Applicants will have extensive experience in industry or research developing robust, server-side Java, Scala, or C++ code, and will be able to demonstrate creativity, initiative, and the ability to work to deadlines. Our team needs standout colleagues with the ability to communicate technical concepts effectively to others.
BS or MS in Computer Science or equivalent
To find out more and to apply, please click the APPLY button.