# Create a soft link for easy access and updates without disrupting PATH
$ ln -s hadoop-2.6.0 hadoop
$ cd hadoop
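As an optional sanity check, not part of the steps above, you can run the bundled launcher script from the symlinked directory to confirm the extraction is intact; the version it reports will match the tarball you downloaded:

# Optional: confirm the distribution is usable by printing its version
$ ./bin/hadoop version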
Let's assume the directory where the Hadoop tarball is extracted is $HADOOP_HOME. You need to configure Hadoop to get it working. We will perform the minimal configuration that gets Hadoop working in pseudo-cluster mode, where your single machine acts both as the master node (running the JobTracker and NameNode) and as a slave node (running the TaskTracker and DataNode). Remember, this is not a production-ready configuration.
Edit $HADOOP_HOME/etc/hadoop/core-site.xml and add the following lines:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
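If you later want to confirm which filesystem URI Hadoop actually picked up from this file, the stock getconf tool prints the effective value; this check is an optional addition, not part of the original walkthrough:

# Optional: print the default filesystem URI read from core-site.xml
$ $HADOOP_HOME/bin/hdfs getconf -confKey fs.defaultFS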
Edit $HADOOP_HOME/etc/hadoop/hdfs-site.xml and add the replication parameter for the data blocks:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
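Setting dfs.replication to 1 keeps a single copy of each block, which is all a one-machine pseudo-cluster can hold. If you want to inspect the replication factor of a file once HDFS is running, hdfs dfs -stat can report it; the path below is only an illustrative example:

# Optional: show the replication factor of an HDFS file (example path)
$ $HADOOP_HOME/bin/hdfs dfs -stat %r /user/hadoop/sample.txt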
Now, it is almost done. However, you need to tell Hadoop where Java lives. Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh and add the export statement for JAVA_HOME like this:
# The only required environment variable is JAVA_HOME. All others are
# optional. When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.
export JAVA_HOME=/path/to/your/jdk
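With these three files edited, the usual next steps (not shown in the excerpt above) are to format the NameNode once and then start the HDFS daemons; the commands below are a sketch assuming the standard 2.6.0 layout under $HADOOP_HOME:

# Format the NameNode metadata directory (only needed the first time)
$ $HADOOP_HOME/bin/hdfs namenode -format
# Start the NameNode and DataNode daemons of the pseudo-cluster
$ $HADOOP_HOME/sbin/start-dfs.sh
# List the running Java processes to confirm NameNode and DataNode are up
$ jps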