Flume Installation and Configuration



Here is another exercise from my course.


Apache Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple extensible data model that allows for online analytic application.

Download and Extract

hadoop@gandhari:/opt/hadoop-2.6.4$ wget https://repository.cloudera.com/artifactory/public/org/apache/flume/flume-ng-dist/1.6.0-cdh5.5.1/flume-ng-dist-1.6.0-cdh5.5.1-bin.tar.gz

hadoop@gandhari:/opt/hadoop-2.6.4$ gunzip flume-ng-dist-1.6.0-cdh5.5.1-bin.tar.gz

hadoop@gandhari:/opt/hadoop-2.6.4$ tar -xvf flume-ng-dist-1.6.0-cdh5.5.1-bin.tar

hadoop@gandhari:~$ ln -s apache-flume-1.6.0-cdh5.5.1-bin/ flume

hadoop@gandhari:~$ vi .bashrc

export FLUME_HOME=/opt/hadoop/flume
export FLUME_CONF_DIR=/opt/hadoop/flume/conf
export FLUME_CLASSPATH=/etc/hadoop/conf

hadoop@gandhari:~$ source .bashrc
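Before moving on, it is worth sanity-checking the environment. The PATH line below is an extra convenience that is not part of the .bashrc entries above, and the guard around flume-ng keeps the check from failing on a machine where Flume is not yet extracted:

```shell
# Sanity-check the Flume environment set up in .bashrc.
export FLUME_HOME=/opt/hadoop/flume
export PATH="$PATH:$FLUME_HOME/bin"   # convenience addition, not in .bashrc above

echo "FLUME_HOME=$FLUME_HOME"
if command -v flume-ng >/dev/null 2>&1; then
  flume-ng version          # prints the Flume build/version info
else
  echo "flume-ng not found on PATH yet"
fi
```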

Flume Setup

hadoop@gandhari:~$ cd flume

hadoop@gandhari:~/flume$ mkdir logs

hadoop@gandhari:~/flume$ cd conf/

hadoop@gandhari:~/flume/conf$ cp flume-conf.properties.template flume.conf

hadoop@gandhari:~/flume/conf$ vi flume.conf

agent.sources = avroSrc
agent.channels = memoryChannel
agent.sinks = loggerSink

# For each one of the sources, the type is defined.
# (This source is named avroSrc, but it is actually an exec source
# that tails a log file; an exec source takes a command, not a port.)
agent.sources.avroSrc.type = exec
agent.sources.avroSrc.command = tail -f /opt/hadoop/logs/test.log

# The channel can be defined as follows.
agent.sources.avroSrc.channels = memoryChannel

# Each sink's type must be defined
agent.sinks.loggerSink.type = logger

#Specify the channel the sink should use
agent.sinks.loggerSink.channel = memoryChannel

# Each channel's type is defined.
agent.channels.memoryChannel.type = memory

# Other config values specific to each type of channel(sink or source)
# can be defined as well
# In this case, it specifies the capacity of the memory channel
agent.channels.memoryChannel.capacity = 100
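The source above is named avroSrc but is configured with the exec type. If you actually wanted an Avro source (an RPC endpoint to which other Flume agents or Avro clients can send events), the stanza would look roughly like the following instead; the port number here is an arbitrary illustration:

```
agent.sources.avroSrc.type = avro
agent.sources.avroSrc.bind = 0.0.0.0
agent.sources.avroSrc.port = 3631
agent.sources.avroSrc.channels = memoryChannel
```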


hadoop@gandhari:~/flume/bin$ ./flume-ng agent --name agent --conf ../conf --conf-file ../conf/flume.conf -Dflume.root.logger=DEBUG,console

(Do not redirect the agent's console output into /opt/hadoop/logs/test.log: the exec source is tailing that file, so the agent would keep re-ingesting its own log output in a loop.)
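Once the agent is running, the exec source only emits events when new lines are appended to the tailed file. A quick way to generate some traffic is a small loop like the one below; it writes to a local ./test.log so it can be tried anywhere, but on the cluster you would substitute /opt/hadoop/logs/test.log:

```shell
# Append a few timestamped sample lines for the exec source to pick up.
LOGFILE=./test.log   # on the cluster this would be /opt/hadoop/logs/test.log
for i in 1 2 3; do
  echo "$(date '+%Y-%m-%d %H:%M:%S') sample event $i" >> "$LOGFILE"
done
wc -l "$LOGFILE"
```

Each appended line should then show up on the agent's console via the logger sink.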


