
Building microservices with Netflix OSS, Apache Kafka and Spring Boot – Part 2


November 7, 2017

After Part 1 of ‘Building microservices with Netflix OSS, Apache Kafka and Spring Boot’, here is what comes next:

Message Broker (Kafka & ZooKeeper)

Although we are not going to use the distributed features of Kafka for this test, it is still a distributed system and is built to use ZooKeeper to track the status of its cluster nodes, topics, partitions, etc. So before using Kafka it is necessary to have ZooKeeper installed. The following commands are for installing ZooKeeper and Kafka on Ubuntu 16.04.

Install zookeeper
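On Ubuntu 16.04, one straightforward way to install ZooKeeper is through the `zookeeperd` package:

```shell
sudo apt-get update
sudo apt-get install zookeeperd
```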

Once installed, ZooKeeper is automatically started as a daemon and by default listens on port 2181.

Ask ZooKeeper if it is OK
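You can check the server with ZooKeeper's four-letter `ruok` command, for example over a telnet session:

```shell
telnet localhost 2181
```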

Type in ruok and press Enter, and you should get the answer imok, followed by Connection closed by foreign host.

Download the latest Kafka

Go to https://kafka.apache.org/downloads and look for the latest binary release link (currently it is kafka_2.12-0.11.0.1.tgz). Following the link, you will be taken to a page suggesting a mirror site for your download.
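For example, the 0.11.0.1 release can be fetched and unpacked like this (the exact mirror URL will differ depending on what the download page suggests for you):

```shell
wget https://archive.apache.org/dist/kafka/0.11.0.1/kafka_2.12-0.11.0.1.tgz
tar -xzf kafka_2.12-0.11.0.1.tgz
cd kafka_2.12-0.11.0.1
```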

Configure the Kafka Server

You need to update the server.properties file. By default deleting topics is disabled, so it is good to enable it by adding delete.topic.enable=true at the end of the file.
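The relevant line to append to `config/server.properties`:

```properties
# allow topics to be deleted with kafka-topics.sh --delete
delete.topic.enable=true
```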

Run the Kafka Server as a background process
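One way to start the broker in the background from the Kafka directory (the log file location is just an example):

```shell
nohup bin/kafka-server-start.sh config/server.properties > /tmp/kafka.log 2>&1 &
```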

Verify Kafka is running
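A couple of quick checks: confirm the broker is listening on its default port 9092, and ask it to list topics through ZooKeeper:

```shell
# the broker listens on 9092 by default
netstat -tlnp | grep 9092

# smoke test: list existing topics (empty output on a fresh install)
bin/kafka-topics.sh --zookeeper localhost:2181 --list
```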

User Service

Now that we have Kafka running, we can continue with building the user microservice. As mentioned in Part 1, it will:

  1. Register itself with the service registry (Eureka)
  2. Take its configuration from the config server (Spring Cloud Config)
  3. Have two endpoints:
     • /member – a POST request registers a new user
     • /member – a GET request returns all registered users
  4. On every new registration, send a “USER_REGISTERED” message to the message broker (Kafka)
  5. Store the registered users in an in-memory H2 database for later reference

Let’s first create a new Spring Boot project (ms-user) with Spring Initializr.

The following dependencies will be needed: Eureka Discovery, JPA, H2, Kafka, and Config Client.

/pom.xml
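A sketch of the dependency section these starters translate to; the exact artifact ids depend on your Spring Cloud/Boot versions, so treat these as an assumption based on what was current in 2017:

```xml
<dependencies>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-eureka</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
  </dependency>
  <dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
  </dependency>
</dependencies>
```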

As with the config server, enable the discovery client by adding the @EnableEurekaClient annotation to the main Application class, and add the microservice's name and port to the application configuration file. What is new here is enabling cloud config discovery: it makes the microservice look up the config server through the service registry using only the config server's id, so no hardcoded URLs or ports are needed.

/bootstrap.yml
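A minimal sketch of what this file could contain; the config server's service id (`ms-config-server`) and the port are assumptions:

```yaml
spring:
  application:
    name: ms-user
  cloud:
    config:
      discovery:
        enabled: true        # find the config server via Eureka
        service-id: ms-config-server
server:
  port: 8081
```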

The configurations for H2, the datasource, and Kafka will be read from the config server, so they go in the ms-config-properties folder under ms-user.

/ms-user.yml
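A sketch of the externalized configuration, assuming a custom `kafka.bootstrap-servers` key that the sender configuration reads (the property names and values here are illustrative, not the original file):

```yaml
spring:
  datasource:
    url: jdbc:h2:mem:ms-user;DB_CLOSE_DELAY=-1
    driver-class-name: org.h2.Driver
    username: sa
    password:
  jpa:
    hibernate:
      ddl-auto: create-drop

kafka:
  bootstrap-servers: localhost:9092
```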

First we will create a simple Spring web project structure with a User entity, UserRepository, UserService, and UserController. We will not discuss them at length, as this structure is common and frequently used in Spring projects.

The User entity will be used to transfer the data. It has a simple structure: just a username and a password. As the username we will use the email address to which a confirmation message will later be sent.

/User.java
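A minimal sketch of the entity (field names beyond the username/password mentioned above, and the generated id, are assumptions):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class User {

    @Id
    @GeneratedValue
    private Long id;

    // the user's email address doubles as the username
    private String username;
    private String password;

    public User() {
    }

    public User(String username, String password) {
        this.username = username;
        this.password = password;
    }

    public Long getId() { return id; }
    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
    public String getPassword() { return password; }
    public void setPassword(String password) { this.password = password; }
}
```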

We will use Spring Data to handle the CRUD operations on the User entity, so UserRepository will be simple too.

/UserRepository.java
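With Spring Data, the repository can be as small as an empty interface extending CrudRepository:

```java
import org.springframework.data.repository.CrudRepository;

// Spring Data generates the CRUD implementation at runtime
public interface UserRepository extends CrudRepository<User, Long> {
}
```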

In the UserService we will have the methods for registering a user and getting all users.

/UserService.java
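A sketch of the service interface matching the two operations just described (the method names are assumptions):

```java
import java.util.List;

public interface UserService {

    // persists the user and announces the registration
    User registerUser(User user);

    // returns every user registered so far
    List<User> getAllUsers();
}
```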

Finally, in UserController, create the GET /member and POST /member REST endpoints.
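A sketch of the controller, assuming the UserService interface outlined above:

```java
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class UserController {

    @Autowired
    private UserService userService;

    // POST /member registers a new user
    @PostMapping("/member")
    public User register(@RequestBody User user) {
        return userService.registerUser(user);
    }

    // GET /member returns all registered users
    @GetMapping("/member")
    public List<User> getAll() {
        return userService.getAllUsers();
    }
}
```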

They just call the UserService and return the result from it.

Let’s take a closer look at the sender configuration. To be able to produce messages for the Kafka topics we need a KafkaTemplate, the class responsible for executing high-level operations. The KafkaTemplate needs a ProducerFactory, which sets the strategy for producing Producer instances. The ProducerFactory, for its part, needs a Map of configuration properties, the most important of which are BOOTSTRAP_SERVERS_CONFIG, KEY_SERIALIZER_CLASS_CONFIG, and VALUE_SERIALIZER_CLASS_CONFIG.

/SenderConfig.java
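A sketch of this configuration class, wiring the three properties named above; the `kafka.bootstrap-servers` property key is an assumption about how the value arrives from the config server:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.support.serializer.JsonSerializer;

@Configuration
public class SenderConfig {

    // resolved through the cloud config server (ms-user.yml)
    @Value("${kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // serialize the User payload to JSON
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return props;
    }

    @Bean
    public ProducerFactory<String, User> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<String, User> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }

    @Bean
    public Sender sender() {
        return new Sender();
    }
}
```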

Here we configure the template to send messages to the Kafka server on localhost:9092 (bootstrapServers, taken from the cloud config server). The message payload is produced from the User object with the help of JsonSerializer. We finish with the implementation of the Sender bean, which uses the KafkaTemplate configured above.

/Sender.java
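A minimal sketch of the Sender bean (the method signature is an assumption):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;

public class Sender {

    @Autowired
    private KafkaTemplate<String, User> kafkaTemplate;

    // publish the given user to the given Kafka topic
    public void send(String topic, User payload) {
        kafkaTemplate.send(topic, payload);
    }
}
```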

The business logic that sends a message to Kafka when a new user is saved in the database goes in the UserService implementation.

/UserServiceImpl.java
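A sketch of the implementation: save the user, then publish to the “USER_REGISTERED” topic mentioned earlier. The repository and sender wiring mirrors the classes above:

```java
import java.util.ArrayList;
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class UserServiceImpl implements UserService {

    private static final String USER_REGISTERED_TOPIC = "USER_REGISTERED";

    @Autowired
    private UserRepository userRepository;

    @Autowired
    private Sender sender;

    @Override
    public User registerUser(User user) {
        User saved = userRepository.save(user);
        // tell the other services that a new user has registered
        sender.send(USER_REGISTERED_TOPIC, saved);
        return saved;
    }

    @Override
    public List<User> getAllUsers() {
        // CrudRepository.findAll() returns an Iterable, so copy it into a List
        List<User> users = new ArrayList<>();
        userRepository.findAll().forEach(users::add);
        return users;
    }
}
```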

You can check Part 3 of this blog, in which we will build the email service and the gateway.

Don’t forget to share your opinion in the comments section below.

Iskren Ivanov

Java Expert at Dreamix


