[hadoop@hc1r1m2 logs]$ kill -9 17617
[hadoop@hc1r1m2 logs]$
[1]+ Killed storm supervisor
Next, I kill the Nimbus and user interface processes on the master server hc1nn. Again, I use the jps
command to show the running processes. Note that the Storm user interface appears under the name "core" rather
than a meaningful one; remember, though, that Storm is an incubator project, so problems like this should be
resolved in future releases. When I kill the associated process numbers with the Linux kill command, I can see
that Nimbus and the user interface have stopped:
[hadoop@hc1nn starter]$ jps
24718 core
27661 Jps
24667 nimbus
[hadoop@hc1nn starter]$ kill -9 24718 24667
[hadoop@hc1nn starter]$
[1]- Killed storm nimbus (wd: /usr/local/storm/conf)
(wd now: /usr/local/storm/examples/storm-starter/src/jvm/storm/starter)
[2]+ Killed storm ui (wd: /usr/local/storm/conf)
(wd now: /usr/local/storm/examples/storm-starter/src/jvm/storm/starter)
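Rather than noting each process ID by hand, you can script this shutdown. The following is a minimal sketch, not a command from my cluster: it assumes jps is on the hadoop user's PATH and that the Storm daemons appear in the jps output under the names nimbus, supervisor, and core (the user interface), as they did above.
#!/bin/bash
# stop-storm.sh -- hypothetical helper that stops any local Storm daemons.
# Assumes jps reports them as "nimbus", "supervisor", and "core" (the UI),
# matching the jps output shown above.
for daemon in nimbus supervisor core
do
  # jps prints "<pid> <name>"; extract the pid(s) if this daemon is running
  pid=$(jps | awk -v d="$daemon" '$2 == d { print $1 }')
  if [ -n "$pid" ]; then
    echo "Killing storm $daemon (pid $pid)"
    kill -9 $pid
  fi
done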
If this very short introduction to Storm leaves you curious to learn more, take a look at the other example
topologies and examine their code. Try running some of them and see what they do. You can also read the
Apache Storm website, but be aware that because Storm is an incubator project, the documentation is a little thin.
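For example, you can submit another of the bundled topologies to the cluster with the storm jar command. The lines below are a sketch rather than output from my cluster: the jar path is hypothetical, and they assume the storm-starter examples have been built into a single jar. WordCountTopology is one of the classes in the storm.starter package; the storm-starter examples generally submit themselves to the cluster when given a topology name as an argument and run in local mode otherwise.
storm jar storm-starter-with-dependencies.jar storm.starter.WordCountTopology wordcount
storm kill wordcount    # later, remove the topology from the cluster by name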
Summary
This chapter has highlighted some, but not all, of the many tools and alternatives for moving data; the Sqoop2
tool, for instance, was only recently released. Remember that although most of the examples in this chapter have
moved data into Hadoop, these same tools can be used to send data out of Hadoop as well. Also, each of the tools
examined, especially Sqoop and Flume, can process multiple types of data, and you can embed your Sqoop
data-processing scripts in Oozie workflows for management and scheduling. This chapter has examined only a small
portion of the functionality offered by Sqoop, Flume, and Storm for processing data. You could also examine a tool
called Apache Chukwa (chukwa.apache.org), which has features similar to Flume's. Note also that Chapter 10 examines
tools like Pentaho and Talend, with which you can move data using visual building blocks.
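To give one concrete example of the "out of Hadoop" direction, a Sqoop export reverses the imports demonstrated earlier in the chapter, reading delimited files from HDFS and writing rows to a relational table. This is a sketch only; the server, database, table, and HDFS path names below are hypothetical:
sqoop export \
  --connect jdbc:mysql://hc1nn/sqoop \
  --username hadoop -P \
  --table rawdata \
  --export-dir /user/hadoop/rawdata \
  --input-fields-terminated-by ','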
The next chapter surveys monitoring systems like Hue, which provide a visual view of Hadoop cluster processing.
Hue offers a single, integrated web-based interface through which scripting and monitoring functionality can be
accessed. Examples here and in earlier chapters have used Sqoop, Hive, Pig, and Oozie; next, you'll access these
same tools from within Hue.
 