Kafka Developer


Premium Job From SidTech

Recruiter

SidTech

Listed on

21st March 2021

Location

Edinburgh

Salary/Rate

£500 - £550

Type

Contract

Start Date

21st March 2021

This job has now expired; please search on the home page to find live IT jobs.

JOB DESCRIPTION

Job Title: Kafka Developer (Inside IR35)

Location: Edinburgh

Department/Practice: Digital - Data and Analytics

Job Purpose and primary objectives / Key responsibilities (individual role or part of a team):

The associate should have good knowledge of Big Data/Hadoop ecosystems and excellent technical skills (Apache/Confluent Kafka, Big Data technologies, Spark/PySpark). Kafka experience is mandatory.

Key Skills/Knowledge:

- Primary skill: Kafka developer with knowledge and expertise in Kafka brokers, ZooKeeper, KSQL, Kafka Streams (KStreams) and Confluent Control Center.
- Hands-on experience with AvroConverter, JsonConverter and StringConverter, and with Kafka connectors such as MQ, Elasticsearch, JDBC, FileStream and JMS source connectors, including tasks, workers, converters and transforms.
- Experience building custom connectors using the Kafka core concepts and API.
- Working knowledge of the Kafka REST Proxy: creating topics, setting up redundant clusters, deploying monitoring tools and alerts, and applying best practices.
- Creating stubs for producers, consumers and consumer groups to help onboard applications from different languages/platforms.
- Leveraging Hadoop ecosystem knowledge to design and develop solutions using Spark, Scala, Python, Hive, Kafka and other tools in the Hadoop ecosystem.

Experience required:

- As listed under Key Skills/Knowledge above; the primary skill set is mandatory.
- Secondary skill: banking domain knowledge, Big Data.
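As a rough illustration of the connector and converter skills the role mentions, a Kafka Connect JDBC source connector is typically registered with a JSON config like the sketch below. The connector name, connection URL, table column and topic prefix are placeholder assumptions, not details from this listing:

```json
{
  "name": "orders-jdbc-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "tasks.max": "2",
    "connection.url": "jdbc:postgresql://db-host:5432/orders",
    "connection.user": "connect",
    "connection.password": "********",
    "mode": "incrementing",
    "incrementing.column.name": "order_id",
    "topic.prefix": "jdbc-",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://schema-registry:8081"
  }
}
```

Swapping the `value.converter` between `AvroConverter`, `JsonConverter` and `StringConverter` is exactly the kind of converter work the description calls out.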
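The responsibilities also mention creating producer/consumer stubs to help teams onboard. A toy sketch of what such a stub's interface might look like is below; all class and topic names are hypothetical, and an in-memory queue stands in for a real broker (a real stub would wrap a client library such as confluent-kafka):

```python
from collections import defaultdict


class InMemoryBroker:
    """Stand-in for a Kafka cluster: each topic is a list of (key, value) records."""
    def __init__(self):
        self.topics = defaultdict(list)


class ProducerStub:
    """Minimal producer surface an onboarding team would code against."""
    def __init__(self, broker):
        self.broker = broker

    def produce(self, topic, key, value):
        # A real stub would delegate to a Kafka client's produce/send call.
        self.broker.topics[topic].append((key, value))


class ConsumerStub:
    """Minimal consumer: tracks its own offset per topic, like a committed offset."""
    def __init__(self, broker, group_id):
        self.broker = broker
        self.group_id = group_id
        self.offsets = defaultdict(int)

    def poll(self, topic):
        records = self.broker.topics[topic]
        offset = self.offsets[topic]
        if offset >= len(records):
            return None  # nothing new to consume
        self.offsets[topic] = offset + 1
        return records[offset]


broker = InMemoryBroker()
producer = ProducerStub(broker)
producer.produce("payments", "acct-1", {"amount": 100})

consumer = ConsumerStub(broker, group_id="onboarding-demo")
print(consumer.poll("payments"))  # prints ('acct-1', {'amount': 100})
```

The point of such a stub is that application teams in any language only need to target the small `produce`/`poll` surface while onboarding, before wiring in the real cluster.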
