First, change the user to the root account with the Linux su (switch user) command:
[hadoop@hc1nn ~]$ su -
Next, issue the command sequence:
[root@hc1nn ~]# cd /etc/init.d/
[root@hc1nn init.d]# ls hadoop*mapreduce*
hadoop-0.20-mapreduce-jobtracker hadoop-mapreduce-historyserver
[root@hc1nn init.d]# ls hadoop*yarn*
hadoop-yarn-proxyserver hadoop-yarn-resourcemanager
[root@hc1nn init.d]# ls hadoop*hdfs*
hadoop-hdfs-namenode
The Linux cd (change directory) command moves the current path to /etc/init.d/, and the ls command lists
the Map Reduce, YARN, and HDFS Hadoop services. (The * character is a wildcard that matches any sequence of
characters in the file name.) For each of the services displayed, execute the following command:
service <service name> stop
For instance, this command stops the YARN proxy server, which in this case was not actually running:
[root@hc1nn init.d]# service hadoop-yarn-proxyserver stop
Stopping Hadoop proxyserver: [ OK ]
no proxyserver to stop
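Rather than typing each stop command by hand, you could stop all of the services on a node with a small shell
loop like the one below. This is only a sketch using the service names listed above for the name node host; note
that it lists the Map Reduce and YARN services before the HDFS name node so that they are stopped in the correct
order, as explained next:

[root@hc1nn init.d]# for svc in hadoop-0.20-mapreduce-jobtracker \
>                               hadoop-mapreduce-historyserver \
>                               hadoop-yarn-proxyserver \
>                               hadoop-yarn-resourcemanager \
>                               hadoop-hdfs-namenode
> do
>   service $svc stop    # stop each Hadoop service in turn
> done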
Remember to stop the non-HDFS services before the HDFS services; that is, stop Map Reduce and YARN before
HDFS. Once you have done this on all the servers in this small cluster, Hadoop V2 CDH4 will be stopped. You are
still logged in as root, however, so use the Linux exit command to return control to the Linux hadoop account
session:
[root@hc1nn init.d]# exit
logout
[hadoop@hc1nn ~]$
Changing the Environment Scripts
In Chapter 2, you worked with two versions of Hadoop, each with its own environment configuration. The
environment file used to hold these configurations was the Linux hadoop user's $HOME/.bashrc file. During the
writing of this book, I needed to switch between Hadoop versions frequently, so I created two separate versions
of the bashrc file on each server in the cluster, as follows:
[hadoop@hc1nn ~]$ pwd
/home/hadoop
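For example, a simple way to manage this is to keep both variants alongside the live .bashrc file and copy
whichever one you need into place. The file names below (.bashrc_hadoopv1 and .bashrc_hadoopv2) are only
illustrative; use whatever naming suits you:

[hadoop@hc1nn ~]$ cp .bashrc_hadoopv2 .bashrc    # select the Hadoop V2 (CDH4) environment
[hadoop@hc1nn ~]$ source .bashrc                 # reload the settings in the current session

To switch back to the other Hadoop version, copy .bashrc_hadoopv1 over .bashrc instead and source it again.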
 