Principal Data Engineer
28th November 2023
£90,000 - £149,990
Salary: £90,000 - £149,990. A Civil Service Pension with an average employer contribution of 27%.
Job grade: SCS Pay Band 1. This is a Senior Civil Service Pay Band SCS 1 role. Existing Civil Servants will be appointed in line with the Civil Service pay rules in place on the date of their appointment.
Contract type: Fixed Term, Secondment.
Length of employment: 20-month fixed-term contract; this will be a 22-month contract for those transferring from another government department.
Business area: CO - Prime Minister's Office - Prime Minister's Delivery Group.
Type of role: Digital, Information Technology.
Working pattern: Flexible working, Full-time, Job share, Part-time.
Location: Bristol, Glasgow, London, Manchester, York. We embrace flexible working. There will be a requirement for office working in line with new Civil Service guidelines; we are happy to discuss requirements at interview. If London is not your base location, we would expect you to come into the London office 1-2 times a month.

About the job

Job summary
The key focus is on building data pipelines that enable data from across government to be ingested and transformed, so that cutting-edge AI initiatives can leverage these data holdings.

About the team
The Incubator for Artificial Intelligence (i.AI) is an AI-for-public-good programme harnessing the opportunities presented by AI to improve the lives of citizens. It will focus on a number of priority projects in home affairs, health, education and government efficiency. The i.AI team will use ethical and secure methods at all times to help deliver better public services. Building on the work of the No10 Data Science Team (10DS) and other leading technology teams across the Civil Service, i.AI will experiment and prove what is possible in improving the use of AI across government. You can see more about our work on ai.gov.uk.

Job description

Role responsibilities
Lead the strategic design and build of data pipelines for the ingestion and transformation of data from across government to support the wider AI initiatives. You will be using and developing data engineering capability provided within the AWS environment. You will be required to review, document and streamline current datasets and ETL processes to integrate systems and bring multiple data sources together for analysis. You will deliver capability using an appropriate structured process; Agile and DevOps experience would be an advantage. You will be acquiring data sets from multiple sources at varying levels of maturity.
You will curate and catalogue this data in partnership with the data scientists and analysts. Solutions will be developed using the Gov PaaS environment as well as AWS. There is extensive use of infrastructure as code within a structured development environment. You can produce data models and understand where to use different types of data models. You have experience of a wide range of data engineering tools. You understand industry-recognised data-modelling patterns and standards.

Person specification
- Senior-level experience in infrastructure engineering, DevOps or large-scale data platforms.
- Strong coding background in Python or similar, and strong SQL skills.
- Experience in building data pipelines (ETL and/or analytical pipelines).
- ETL pipeline development experience using Lambda, Glue, S3, Athena, SQL, EC2, CloudWatch, EventBridge, Redshift, SageMaker and IAM.
- Expertise in data modelling.
- Experience in cleansing, managing and transforming high-volume data.
- Strong understanding of the importance of data observability and how to implement it.

We welcome a broad range of applicants and encourage you to apply. Strong candidates come from many different backgrounds. Studies show that talented people, especially those from groups underrepresented in their field, are more likely to doubt themselves and feel like an "imposter". Unique perspectives enrich teams, so we urge you to have confidence in your potential contributions. If aspects of this role resonate with you, please apply.
We look forward to your application highlighting your skills and experience.

Behaviours
We'll assess you against these behaviours during the selection process: Working Together, Seeing the Big Picture, Delivering at Pace, Making Effective Decisions, and Communicating and Influencing. We only ask for evidence of these behaviours on your application form: Working Together.

Technical skills
We'll assess you against these technical skills during the selection process: the technical assessment at interview will be based on the listed essential criteria and will be confirmed to candidates at the shortlisting stage.

Benefits
Alongside your salary of £90,000, Cabinet Office contributes £24,300 towards you being a member of the Civil Service Defined Benefit Pension scheme. Find out what benefits a Civil Service Pension provides.
- Learning and development tailored to your role.
- An environment with flexible working options.
- A culture encouraging inclusion and diversity.
- A Civil Service Pension which provides an attractive pension, benefits for dependants and average employer contributions of 27%.
- A minimum of 25 days of paid annual leave, increasing by one day per year up to a maximum of 30.

Selection process details
This vacancy is using Success Profiles, and will assess your Behaviours, Experience and Technical skills. To submit an application, you will need to provide an up-to-date CV. We would also like you to submit a 500-word personal statement. Please ensure your CV and personal statement demonstrate how you meet the following criteria:
- Senior-level experience in infrastructure engineering, DevOps or large-scale data platforms.
- Strong coding background in Python or similar, and strong SQL skills.
- Experience in building data pipelines (ETL and/or analytical pipelines).
- ETL pipeline development experience using Lambda, Glue, S3, Athena, SQL, EC2, CloudWatch, EventBridge, Redshift, SageMaker and IAM.
- Expertise in data modelling.
- Experience in cleansing, managing and transforming high-volume data.
- Strong understanding of the importance of data observability and how to implement it.

Should a large number of applications be received, an initial sift may be undertaken using the lead behaviour, Working Together.

Assessment and interview process
Your application will be reviewed by a panel. If you are shortlisted, you will be invited to complete a technical assessment.

Technical assessment
Candidates will be sent a problem and asked to solve it. You will present your solution to a technical panel for 30 minutes. We are looking to understand your thought process and hands-on experience.

If you pass the technical assessment, the final stage will be an interview where you will be asked questions on the behaviours: Working Together, Seeing the Big Picture, Delivering at Pace, Making Effective Decisions, and Communicating and Influencing. Candidates who do not pass the interview but have demonstrated an acceptable standard may be considered for similar roles at a lower grade. Depending on the number of applications we receive, there may be an additional technical assessment.

Expected timeline (subject to change)
Expected sift date: 11th December 2023
Expected technical assessment: 18th December 2023 - 1st January 2024
Expected interview date(s): 8th January 2024
Apply before 11:55 pm on Sunday 10th December 2023.