I am working with a revolutionary biotech business that is recruiting a Head of Platform / Platform Architect to lead platform development and become a key figure in the business as it scales.
Role & Responsibilities
- Be responsible for the long-term vision of the infrastructure.
- Join the senior management team and contribute to the overall running of the business.
- Be engaged in the day-to-day operations of the platform and work closely with the product manager to ensure customer and user requirements are actualised within the platform.
- Lead a talented team of data scientists and back-end and front-end engineers to build and maintain a data platform, API, and web application, as well as facilitating the provision of data for professional services work.
- Be engaged in continuous infrastructure analysis, optimisation, and consolidation as new features are added.
- Supervise and be engaged in the implementation of CI/CD pipelines for multiple systems, covering infrastructure provisioning and post-provisioning automation using Terraform.
- Be engaged in data engineering, continuously optimising our data pipelines and data warehousing practices.
- Oversee and be engaged in efficient development operations that balance service reliability and delivery speeds.
- Be a driver of DevOps culture, setting best practices across the company.
Skills & Qualifications
- Proven track record of at least 7 years' professional software delivery, with extensive knowledge of the software development lifecycle and experience building and maintaining complex pipelines in enterprise-level CI/CD software.
- Extensive experience with Linux, Python, Java, and Bash.
- Experience with Google Cloud Platform and overseeing operations of a cloud computing platform.
- Supervising and training junior members of staff, setting best practices, fostering a healthy workplace environment and culture.
- Working with the senior management team to continue the company's growth in line with its vision.
Other experience required:
- Experience with container and container orchestration technologies such as Docker, Kubernetes, Helm, Garden, and Terraform, and experience designing and implementing Kubernetes-native applications.
- Experience maintaining database and data warehouse technologies, with extensive knowledge of data engineering: setting up data pipelines in GCP Dataflow/Dataproc, working with data warehouses such as BigQuery, and familiarity with Apache Beam jobs.
- Highly skilled in Infrastructure as Code practices, including infrastructure testing and immutable infrastructure.
- Recognise the value of testing and documentation in building scalable and stable platforms.
- Serious about automation, building single sources of truth and removing single points of failure.
- An eye for up-and-coming technologies and best practices.
- Experience managing staff user accounts (Google Workspace, Atlassian, etc.) and team permissions.
- Ability to effectively interact with a diverse group of people (biologists, data scientists, developers).
- Ability to be productive in a distributed team through self-discipline and self-motivation, delivering according to a schedule, and to motivate and mentor others to do the same.
- Excellent communication skills in English, both verbal and written, especially in online environments such as Slack, email, and video calls.
Benefits
- Attractive salary
- 100% remote working
- Share options