Senior Data Engineer
Job Description
This is a Data Engineering role. The purpose of the role within the Customer Platform team is to provide an interface and capability for delivering external data requirements, sourcing data that is not readily available inside the Customer Platform team itself. These requirements relate to marketing and support, and the data will be sourced from the Hive data platform.

You will report to the Customer Platform Technical Architect and work closely with data scientists, data analysts, DevOps and the team leads who own the Hive data platform, ensuring that the business receives the solutions it needs.

Role - AWS Data Engineer
Rate - Negotiable (Outside IR35)
Location - Remote (may have to travel once a week to the Hive Hub)
Start Date - ASAP

Roles and Responsibilities
- Experience of working as a developer in a cross-functional team.
- At least 2 years of experience in the software engineering field, preferably within a data discipline.
- Writing production-quality code, including extensive test coverage.
- Designing data systems that will scale to large numbers of users.
- Interfacing with Data Scientists and porting machine learning algorithms to production systems.
- Peer-reviewing other engineers' code to ensure quality.
- Deploying services to staging and production environments.
- Supporting production services, including participation in an on-call rotation with others in the team.
Role & Responsibilities
- Drive best practices across teams and products for data-driven product development and delivery.
- Use GitHub when required to enable code re-use across the team.
- Work in an Agile environment, tracking your tasks using JIRA.
- Work alongside other members of the Data Team on large projects to meet deadlines and requirements set by our stakeholders.
Skills Needed
- Good knowledge of Java, Scala or Python, and of concurrent programming.
- Quick learner with an eagerness to learn new things and experiment with new technologies.
- Willing to learn Data Science algorithms and produce code to implement them at scale.
- Experience with AWS, Spark, Kafka, Kubernetes and Redshift.
- Familiarity with deployment and container technologies, including Jenkins, Docker and Serverless.
- Interest in real-time and distributed systems.
- Excellent general problem-solving abilities.
- Strong analytical mind.
- Proactive, initiating attitude - able to take prompt action to accomplish objectives and goals beyond what is required.