The clean script executes a Hadoop file system remove command with a recursive switch:
[hadoop@hc1nn flume]$ cat flume_clean_hdfs.sh
#!/bin/bash
hdfs dfs -rm -r /flume/messages
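Note that the remove command reports an error if the directory does not yet exist (for example, on a first run). A slightly safer variant of the clean script might test for the path first; the following is a sketch, and the `clean_flume_dir` function name is illustrative, not from the text:

```shell
#!/bin/bash
# Sketch of a safer clean script: remove the Flume target directory
# only if it actually exists in HDFS. The function name is an assumption.
clean_flume_dir() {
    local dir="$1"
    # hdfs dfs -test -d returns 0 when the directory exists
    if hdfs dfs -test -d "$dir" ; then
        hdfs dfs -rm -r "$dir"
    else
        echo "nothing to clean: $dir does not exist"
    fi
}

clean_flume_dir /flume/messages
```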
The execution script, flume_exec_hdfs.sh, runs the Flume agent and is nine lines long:
[hadoop@hc1nn flume]$ cat flume_exec_hdfs.sh
1 #!/bin/bash
2
3 # run the flume agent
4
5 flume-ng agent \
6 --conf /etc/flume-ng/conf \
7 --conf-file agent1.cfg \
8 -Dflume.root.logger=DEBUG,console \
9 --name agent1
This execution script runs the Flume agent from a Linux Bash shell and is easily repeatable: you run a single
script rather than retyping the options each time you want to move log file content. Line 5 runs the agent itself,
while lines 6 and 7 specify the configuration directory and the agent configuration file. Line 8 sets the log4j
logging configuration via a -D command-line option, sending DEBUG-level messages to the console. Finally,
line 9 specifies the Flume agent name, agent1.
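The flume.root.logger property set on line 8 overrides the default value defined in the log4j.properties file under the --conf directory. In a stock Flume install that default typically looks something like the following (the exact contents vary by release, so treat this as an illustrative excerpt):

```properties
# Typical excerpt from /etc/flume-ng/conf/log4j.properties
# (contents vary by Flume release)
flume.root.logger=INFO,LOGFILE
flume.log.dir=./logs
flume.log.file=flume.log

log4j.rootLogger=${flume.root.logger}
```

Because log4j reads flume.root.logger as a level followed by a list of appenders, the -D override on line 8 switches both the level (DEBUG) and the destination (console) in one option.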
The Flume agent configuration file (agent1.cfg, in this case) must contain the agent's source, sink, and channel.
Consider the contents of this example file:
[hadoop@hc1nn flume]$ cat agent1.cfg
1 # ----------------------------------------------------------------------
2 # define agent src, channel and sink
3 # ----------------------------------------------------------------------
4
5 agent1.sources = source1
6 agent1.channels = channel1
7 agent1.sinks = sink1
8
9 # ----------------------------------------------------------------------
10 # define agent channel
11 # ----------------------------------------------------------------------
12
13 agent1.channels.channel1.type = FILE
14 agent1.channels.channel1.capacity = 2000000
15 agent1.channels.channel1.checkpointInterval = 60000
16 agent1.channels.channel1.maxFileSize = 10737418240
17