1998 DataNode
1878 NameNode
If you find that the jps command is not available, check that it exists as $JAVA_HOME/bin/jps and ensure that you
installed the Java JDK in the previous step. If it is still missing, try installing the Java OpenJDK development
package as root:
[root@hc1nn ~]$ yum install java-1.6.0-openjdk-devel
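A quick way to confirm whether jps is reachable before reinstalling anything is to test for the binary under $JAVA_HOME. A minimal sketch (assumes JAVA_HOME is already exported, as set up earlier in the chapter):

```shell
# Check for the jps binary under JAVA_HOME; the messages here are
# illustrative, not Hadoop output.
if [ -x "$JAVA_HOME/bin/jps" ]; then
  echo "jps found at $JAVA_HOME/bin/jps"
else
  echo "jps missing - install the Java OpenJDK development package"
fi
```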
This output shows that the servers are running. If you need to stop them, use the stop-all.sh command, as
follows:
[hadoop@hc1nn ~]$ stop-all.sh
stopping jobtracker
localhost: stopping tasktracker
stopping namenode
localhost: stopping datanode
localhost: stopping secondarynamenode
You have now completed a single-node Hadoop installation. Next, you repeat the steps for the Hadoop V1
installation on all of the nodes that you plan to use in your Hadoop cluster. When that is done, you can move to the
next section, “Setting up the Cluster,” where you'll combine all of the single-node machines into a Hadoop cluster
that's run from the Name Node machine.
Setting up the Cluster
Now you are ready to set up the Hadoop cluster. Make sure that all servers are stopped on all nodes by using the
stop-all.sh script.
First, you must tell the name node where all of its slaves are. To do so, you add the following lines to the masters
and slaves files. (You do this only on the Name Node server [hc1nn], which is the master. It then knows that it is the
master and can identify its slave data nodes.) You add the following line to the file $HADOOP_PREFIX/conf/masters
to identify it as the master:
hc1nn
Then, you add the following lines to the file $HADOOP_PREFIX/conf/slaves to identify those servers as slaves:
hc1nn
hc1r1m1
hc1r1m2
hc1r1m3
These are all of the machines in my cluster. Your machine names may be different, so you would insert your own
machine names. Note also that I am using the Name Node machine (hc1nn) as a master and a slave. In a production
cluster you would have name nodes and data nodes on separate servers.
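Because every node must carry the same configuration, it can help to push the conf files from the name node to each slave with scp. A dry-run sketch that only prints the commands it would run (the hostnames are the ones used above, and the hadoop user and paths are assumptions about your setup; adapt them to your cluster):

```shell
# Print (do not execute) the scp commands that would copy the Hadoop
# configuration from the name node to each slave in the cluster.
SLAVES="hc1r1m1 hc1r1m2 hc1r1m3"
for host in $SLAVES; do
  echo "scp \$HADOOP_PREFIX/conf/core-site.xml hadoop@$host:\$HADOOP_PREFIX/conf/"
done
```

Remove the echo (and quoting around the variables) once you have checked the printed commands against your own hostnames.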
On all nodes, you change the value of fs.default.name in the file $HADOOP_PREFIX/conf/core-site.xml to be:
hdfs://hc1nn:54310
This configures all nodes for the core Hadoop component to access the HDFS using the same address.
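In core-site.xml, the property lives inside the configuration element. A minimal sketch of the relevant fragment (any other properties already in your file are omitted here):

```xml
<!-- $HADOOP_PREFIX/conf/core-site.xml : point all nodes at the name node's HDFS -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hc1nn:54310</value>
  </property>
</configuration>
```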