Kafka in action: 7 steps to real-time streaming from RDBMS to Hadoop

August 23, 2016 | Rajesh Nadipalli

For enterprises looking for ways to ingest data into their Hadoop data lakes more quickly, Kafka is a great option. What is Kafka? Kafka is a distributed, scalable, and reliable messaging system that integrates applications and data streams using a publish-subscribe model. It is a key component in the Hadoop technology stack for supporting real-time data analytics and the monetization of Internet of Things (IoT) data.

This article is for the technical folks. Read on and I’ll diagram how Kafka can stream data from a relational database management system (RDBMS) to Hive, which can enable a real-time analytics use case. For reference, the component versions used in this article are Hive 1.2.1, Flume 1.6 and Kafka 0.9.

If you're looking for an overview of what Kafka is and what it does, check out my earlier blog published on Datafloq.

Where Kafka fits: The overall solution architecture

The following diagram shows the overall solution architecture where transactions committed in RDBMS are passed to the target Hive tables using a combination of Kafka and Flume, as well as the Hive transactions feature.

[Diagram: solution architecture showing RDBMS transactions flowing through Kafka and Flume into transactional Hive tables]

7 steps to real-time streaming to Hadoop

Now let’s dig into the solution details and I’ll show you how you can start streaming data to Hadoop in just a few steps.

1. Extract data from the relational database management system (RDBMS)

All relational databases have a log file that records the latest transactions. The first step of our streaming solution is to obtain these transactions in a format that can be passed to Hadoop. Walking through the exact mechanics of this extraction would take a separate blog post, so please reach out to us if you'd like more information about that process.

2. Set up the Kafka producer

Processes that publish messages to a Kafka topic are called “producers.” “Topics” are feeds of messages in categories that Kafka maintains. The transactions from the RDBMS will be published to a Kafka topic as messages. For this example, let’s consider a sales team’s database whose transactions are published to a topic named SalesDBTransactions. The following commands create that topic and confirm it exists:

$ cd /usr/hdp/2.4.0.0-169/kafka
 
$ bin/kafka-topics.sh --create --zookeeper sandbox.hortonworks.com:2181 --replication-factor 1 --partitions 1 --topic SalesDBTransactions
Created topic "SalesDBTransactions".
 
$ bin/kafka-topics.sh --list --zookeeper sandbox.hortonworks.com:2181
SalesDBTransactions
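
Optionally, you can also describe the topic to confirm its partition count and replication factor; this is just a sanity check and not part of the original setup:

$ bin/kafka-topics.sh --describe --zookeeper sandbox.hortonworks.com:2181 --topic SalesDBTransactions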

3. Set up Hive

Next, we will create a Hive table that is ready to receive the sales team’s database transactions. For this example, we will create a customers table:

[bedrock@sandbox ~]$ beeline -u jdbc:hive2:// -n hive -p hive
0: jdbc:hive2://> use raj;
create table customers (id string, name string, email string, street_address string, company string)
partitioned by (time string)
clustered by (id) into 5 buckets stored as orc
location '/user/bedrock/salescust'
TBLPROPERTIES ('transactional'='true');
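
You can confirm the table definition, and later the partitions that Flume creates, with standard Hive commands, for example:

0: jdbc:hive2://> describe formatted customers;
0: jdbc:hive2://> show partitions customers;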
 
To enable Hive to handle transactions, the following setting is required in the Hive configuration:
hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
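
Depending on your Hive version and distribution, a few related properties typically need to be set alongside the transaction manager for ACID tables and streaming ingest to work. The following is a minimal sketch of those hive-site.xml settings; verify the exact list and values against your distribution’s documentation:

hive.support.concurrency = true
hive.enforce.bucketing = true
hive.exec.dynamic.partition.mode = nonstrict
hive.compactor.initiator.on = true
hive.compactor.worker.threads = 1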
 
4. Set up a Flume agent to stream from Kafka to Hive

Now let’s look at how to create a Flume agent that will source from Kafka topics and send data to the Hive table.

Follow these steps to set up the environment before starting the Flume agent:

$ pwd
/home/bedrock/streamingdemo
$ mkdir flume/checkpoint
$ mkdir flume/data
$ chmod 777 -R flume
$ export HIVE_HOME=/usr/hdp/current/hive-server2
$ export HCAT_HOME=/usr/hdp/current/hive-webhcat
$ pwd
/home/bedrock/streamingdemo/flume
$ mkdir logs

Next create a log4j properties file as follows:

[bedrock@sandbox conf]$ vi log4j.properties

flume.root.logger=INFO,LOGFILE
flume.log.dir=/home/bedrock/streamingdemo/flume/logs
flume.log.file=flume.log


Then use the following configuration file for the Flume agent:

$ vi flumetohive.conf

flumeagent1.sources = source_from_kafka
flumeagent1.channels = mem_channel
flumeagent1.sinks = hive_sink

# Define / Configure source
flumeagent1.sources.source_from_kafka.type = org.apache.flume.source.kafka.KafkaSource
flumeagent1.sources.source_from_kafka.zookeeperConnect = sandbox.hortonworks.com:2181
flumeagent1.sources.source_from_kafka.topic = SalesDBTransactions
flumeagent1.sources.source_from_kafka.groupId = flume
flumeagent1.sources.source_from_kafka.channels = mem_channel
flumeagent1.sources.source_from_kafka.interceptors = i1
flumeagent1.sources.source_from_kafka.interceptors.i1.type = timestamp
flumeagent1.sources.source_from_kafka.kafka.consumer.timeout.ms = 1000
 
# Hive Sink
flumeagent1.sinks.hive_sink.type = hive
flumeagent1.sinks.hive_sink.hive.metastore = thrift://sandbox.hortonworks.com:9083
flumeagent1.sinks.hive_sink.hive.database = raj
flumeagent1.sinks.hive_sink.hive.table = customers
flumeagent1.sinks.hive_sink.hive.txnsPerBatchAsk = 2
flumeagent1.sinks.hive_sink.hive.partition = %y-%m-%d-%H-%M
flumeagent1.sinks.hive_sink.batchSize = 10
flumeagent1.sinks.hive_sink.serializer = DELIMITED
flumeagent1.sinks.hive_sink.serializer.delimiter = ,
flumeagent1.sinks.hive_sink.serializer.fieldnames = id,name,email,street_address,company

# Use a channel which buffers events in memory
flumeagent1.channels.mem_channel.type = memory
flumeagent1.channels.mem_channel.capacity = 10000
flumeagent1.channels.mem_channel.transactionCapacity = 100

# Bind the source and sink to the channel
flumeagent1.sources.source_from_kafka.channels = mem_channel
flumeagent1.sinks.hive_sink.channel = mem_channel

5. Start the Flume agent

Use the following command to start the Flume agent:

$ /usr/hdp/apache-flume-1.6.0/bin/flume-ng agent -n flumeagent1 -f ~/streamingdemo/flume/conf/flumetohive.conf
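
Once the agent starts, a quick way to confirm that the Kafka source and Hive sink came up cleanly is to tail the Flume log file configured in the log4j properties above (the path below assumes those settings):

$ tail -f /home/bedrock/streamingdemo/flume/logs/flume.log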

6. Start the Kafka stream

As an example, below is a simulation of the transaction messages, which in an actual system would be generated by the source database. For instance, they could come from Oracle Streams replaying the SQL transactions committed to the database, or from Oracle GoldenGate.

$ cd /usr/hdp/2.4.0.0-169/kafka
$ bin/kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic SalesDBTransactions

1,"Nero Morris","porttitor.interdum@Sedcongue.edu","P.O. Box 871, 5313 Quis Ave","Sodales Company"
2,"Cody Bond","ante.lectus.convallis@antebibendumullamcorper.ca","232-513 Molestie Road","Aenean Eget Magna Incorporated"
3,"Holmes Cannon","a@metusAliquam.edu","P.O. Box 726, 7682 Bibendum Rd.","Velit Cras LLP"
4,"Alexander Lewis","risus@urna.edu","Ap #375-9675 Lacus Av.","Ut Aliquam Iaculis Inc."
5,"Gavin Ortiz","sit.amet@aliquameu.net","Ap #453-1440 Urna. St.","Libero Nec Ltd"
6,"Ralph Fleming","sociis.natoque.penatibus@quismassaMauris.edu","363-6976 Lacus. St.","Quisque Fringilla PC"
7,"Merrill Norton","at.sem@elementum.net","P.O. Box 452, 6951 Egestas. St.","Nec Metus Institute"
8,"Nathaniel Carrillo","eget@massa.co.uk","Ap #438-604 Tellus St.","Blandit Viverra Corporation"
9,"Warren Valenzuela","tempus.scelerisque.lorem@ornare.co.uk","Ap #590-320 Nulla Av.","Ligula Aliquam Erat Incorporated"
10,"Donovan Hill","facilisi@augue.org","979-6729 Donec Road","Turpis In Condimentum Associates"
11,"Kamal Matthews","augue.ut@necleoMorbi.org","Ap #530-8214 Convallis, St.","Tristique Senectus Et Foundation"
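
If you want to double-check that the messages reached the topic before moving on to Hive, a console consumer can replay them; again, this is a sanity check rather than part of the pipeline:

$ bin/kafka-console-consumer.sh --zookeeper sandbox.hortonworks.com:2181 --topic SalesDBTransactions --from-beginning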



7. Receive Hive data

With all of the above in place, when you send data from Kafka you will see the stream of data arriving in the Hive table within seconds.
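
For example, a quick query from beeline against the database and table created in step 3 should show the incoming rows (the query below is just an illustration):

[bedrock@sandbox ~]$ beeline -u jdbc:hive2:// -n hive -p hive
0: jdbc:hive2://> use raj;
0: jdbc:hive2://> select id, name, company from customers limit 10;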


Opening the door to new use cases

I hope you now have a better idea of how real-time data from a relational source can be streamed to Hive with Kafka and consumed by big data applications. Compared to traditional message brokers, Kafka scales more easily to accommodate massive streams of data, including IoT data, for near real-time analytics. That ability gives enterprises a competitive advantage by opening up a wider range of use cases across the Hadoop ecosystem.

 

 

 

About the Author

Rajesh Nadipalli

Director of Product Support and Professional Services
