Data Architect - AWS


Premium Job From Nigel Frank International

Recruiter

Nigel Frank International

Listed on

5th January 2021

Location

London

Salary/Rate

£70,000 - £80,000

Type

Permanent

Start Date

ASAP

This job has now expired. Please search on the home page to find live IT Jobs.

Here at Jefferson Frank, we're dedicated to AWS recruitment - it's all we do. We're the only global recruitment agency dedicated solely to AWS. Since June 2018, we've placed more than 1,600 AWS professionals across the world in great jobs working with AWS partners, ISVs and end users. No matter where you are in your career, or where you are in the country, we're in your corner making sure you land the right job on the right terms.

At Jefferson Frank we have an incredible opportunity to join one of the world's leading Data & Analytics companies, and a long-standing client of ours, to help continue building out their exponentially growing and talented team as a Data Architect. You will have the opportunity to strengthen the evolution of their internal data platforms used for MI and Analytics, providing new capability leveraging Cloud and Big Data technology. Within this role you will also work closely with the client's Data Science & Engineering teams to develop robust production solutions for their Data & Analytics focused projects. As a result, you will help to develop and shape the next generation of data platforms used, and the successful candidate should have prior experience designing and implementing scalable, reliable and secure big data/cloud data warehouse solutions and data integration/processing pipelines.

Experience of leading teams of Engineers and working closely with senior business stakeholders is essential, as is the ability to communicate effectively and clearly with both technical and non-technical members of the organisation.

Desired skills:

- Cloud Platform Development (AWS)
- Cloud-based Big Data/Data Warehouse solutions (Redshift) - including design, development, setup, configuration and monitoring of solutions running on these platforms
- Knowledge and experience of designing a Data Lake on a Cloud Platform (S3)
- A strong understanding of software development in SQL and Python or Scala
- An understanding of designing and building Data Pipelines with AWS Glue or other ETL tools
- Experience with Hadoop ecosystems (Spark, Hive/Impala) - including design, development, setup, configuration and monitoring of solutions running on these platforms
- Kinesis or Kafka (for both Real Time Data Pipelines and Stream Analytics) - including design, development, setup, configuration and monitoring of solutions running on these platforms
- Experience of working in an Agile team producing frequent deliverables
- Experience of testing and automation processes associated with Big Data & Cloud solution development

Any experience with BI & Visualisation tools such as Tableau, Power BI, QlikView or QuickSight would be beneficial but not essential, as would any experience of the R programming language.

The successful candidate will have proven experience both designing and implementing big data and fast data solutions, and should be capable of documenting and communicating to a wide range of stakeholders with differing levels of technical knowledge. You will also be used to working as part of an Agile delivery team in a fast-paced development environment with frequent delivery.

To find out more please reach out to Elliott Collins by email - [email protected] - or by phone on 07805492816. We can run through the role in detail and discuss your ideals further. If this is not exactly what you're looking for, please feel free to get in touch and we can talk about how we'll find the perfect opportunity for you.
