Then, I create the file system directories needed for staging. I use chown to set their owner and group to the yarn user
and the Linux chmod command to set their permissions:
[root@hc1nn conf]# mkdir -p /var/lib/hadoop-mapreduce/jobhistory/intermediate/donedir
[root@hc1nn conf]# mkdir -p /var/lib/hadoop-mapreduce/jobhistory/donedir
[root@hc1nn conf]# chown -R yarn:yarn /var/lib/hadoop-mapreduce/jobhistory/intermediate/donedir
[root@hc1nn conf]# chown -R yarn:yarn /var/lib/hadoop-mapreduce/jobhistory/donedir
[root@hc1nn conf]# chmod 1777 /var/lib/hadoop-mapreduce/jobhistory/intermediate/donedir
[root@hc1nn conf]# chmod 750 /var/lib/hadoop-mapreduce/jobhistory/donedir
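The mode 1777 on the intermediate directory adds the sticky bit, so any user's jobs can write staging files there while only the owner can remove them; 750 keeps the done directory restricted to the yarn user and group. As a quick sanity check (my own addition, not part of the original steps), the resulting ownership and modes can be listed with ls:
[root@hc1nn conf]# ls -ld /var/lib/hadoop-mapreduce/jobhistory/intermediate/donedir
[root@hc1nn conf]# ls -ld /var/lib/hadoop-mapreduce/jobhistory/donedir
The output should show yarn:yarn ownership with modes drwxrwxrwt and drwxr-x---, respectively.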
After carrying out these configuration file changes on all cluster nodes, I restart the servers using the root
account. On the name node hc2nn, I enter:
service hadoop-hdfs-namenode start
service hadoop-yarn-resourcemanager start
On the data nodes, I enter:
service hadoop-hdfs-datanode start
service hadoop-yarn-nodemanager start
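Before turning to the web interfaces, it is worth confirming that the daemons actually came up. A hedged check (assuming the same CDH/Bigtop service names and the hdfs superuser account) uses the service status targets and the HDFS admin report, which also lists the live data nodes:
service hadoop-hdfs-namenode status
service hadoop-yarn-resourcemanager status
sudo -u hdfs hdfs dfsadmin -report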
To confirm that Hadoop is up on the new cluster, I access the web interfaces for the name node and Resource
Manager. I find the name node web interface at http://hc2nn:50070/, then I click Live Datanodes to show the list of
active data nodes; Figure 8-27 shows the results.
Figure 8-27. User interface for Bigtop name nodes
To access the Resource Manager user interface, you go to http://hc2nn:8088/cluster. Click Scheduler in the
left column to view the fair scheduler configuration on the new cluster, as shown in Figure 8-28.
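The same cluster and scheduler information is also exposed over HTTP, which is handy for scripted checks. As a sketch (the /ws/v1/cluster REST paths are standard in YARN, though the exact JSON fields returned vary by version):
curl http://hc2nn:8088/ws/v1/cluster/info
curl http://hc2nn:8088/ws/v1/cluster/scheduler
The first call returns general Resource Manager state; the second returns the configured scheduler, which should report the fair scheduler and its queues.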