export HADOOP_HOME="/usr/local/hadoop"
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
source ~/.bashrc (loads the new environment variables into the current session)
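A quick sanity check, runnable in the same shell, to confirm the exports took effect. The guard around `hadoop version` is there so the check degrades gracefully on a machine where Hadoop is not on the PATH yet:

```shell
# Confirm the exports above are visible in the current shell
echo "HADOOP_HOME=$HADOOP_HOME"
echo "JAVA_HOME=$JAVA_HOME"

# 'hadoop version' only works once PATH is correct; guard it so the
# check does not abort on a box where Hadoop is not installed yet
hadoop_bin=$(command -v hadoop || echo missing)
echo "hadoop binary: $hadoop_bin"
if [ "$hadoop_bin" != missing ]; then
  hadoop version
fi
```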
8. sudo rm -r /usr/local/hadoop_tmp
sudo mkdir -p /usr/local/hadoop_tmp/hdfs/namenode
sudo mkdir -p /usr/local/hadoop_tmp/hdfs/datanode
sudo chown user:user -R /usr/local/hadoop_tmp # replace user:user with your own username
sudo chmod 700 /usr/local/hadoop_tmp/hdfs/datanode
9. Start the daemons: start-dfs.sh and start-yarn.sh
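Once both scripts return, a quick way to confirm the daemons are up is `jps`, which ships with the JDK. The expected daemon list assumes the single-node layout at this stage:

```shell
# List the running Java daemons; on this single-node stage expect
# NameNode, SecondaryNameNode, DataNode, ResourceManager and NodeManager
if command -v jps >/dev/null 2>&1; then
  jps
else
  echo "jps not found - check that the JDK bin directory is on PATH"
fi
```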
10. Now go to your VMware installation and find the directory where your virtual machine is stored.
11. Shut down your virtual machine and head to that directory.
12. Copy the Ubuntu folder (the virtual machine folder) three times and rename the copies to master, slave1 and slave2.
13. Start VMware and select the option to open a virtual machine.
14. Open all three virtual machines in VMware and rename them to master, slave1 and slave2 respectively.
15. Start all three virtual machines in VMware.
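For the ping and ssh tests below to resolve the names master, slave1 and slave2, each VM's /etc/hosts needs matching entries. The addresses below are placeholders, not values from this guide; substitute each VM's real IP (e.g. from `hostname -I`):

```
192.168.1.10  master
192.168.1.11  slave1
192.168.1.12  slave2
```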
ping slave1
ping slave2
ping master
ssh master
exit
ssh slave1
exit
ssh slave2
exit
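If any of the ssh commands above prompts for a password, passwordless SSH is not configured yet. A minimal sketch, assuming the login account is named user as in the chown steps (run it on each node that needs to reach the others):

```shell
# Generate a key pair if one does not exist yet (-N "" means no passphrase)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Copy the public key to every node, including this one
ssh-copy-id user@master
ssh-copy-id user@slave1
ssh-copy-id user@slave2
```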
<property>
<name>fs.default.name</name> <!-- deprecated alias; fs.defaultFS is the current name for this property -->
<value>hdfs://master:9000</value>
</property>
<property>
<name>mapreduce.framework.name</name> <!-- the runtime framework for executing MapReduce jobs -->
<value>yarn</value> <!-- tells MapReduce that it will run as a YARN application -->
</property>
<property>
<name>mapreduce.jobhistory.address</name> <!-- address of the JobHistory server, which keeps the details of jobs after they are dropped from memory -->
<value>master:10020</value> <!-- not strictly necessary, but some software, such as Pig, asks for it -->
</property>
26. Delete and recreate the namenode directory on the master and the datanode directory on the slaves
1) on master
sudo rm -r /usr/local/hadoop_tmp
sudo mkdir -p /usr/local/hadoop_tmp/hdfs/namenode
sudo chown user:user -R /usr/local/hadoop_tmp/hdfs
sudo chmod 700 /usr/local/hadoop_tmp/hdfs/namenode
2) on slaves
sudo rm -r /usr/local/hadoop_tmp
sudo mkdir -p /usr/local/hadoop_tmp/hdfs/datanode
sudo chown user:user -R /usr/local/hadoop_tmp # run these same commands on both slave1 and slave2
sudo chmod 700 /usr/local/hadoop_tmp/hdfs/datanode
27. Edit the masters and workers files
1) sudo vim /usr/local/hadoop/etc/hadoop/masters
add the line master and save
2) sudo vim /usr/local/hadoop/etc/hadoop/workers
add the lines slave1 and slave2 (one per line) and save
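The end state of those two files can be sketched as below; the temp directory is purely for illustration, the real files live in /usr/local/hadoop/etc/hadoop/:

```shell
# Illustration only: build the two files in a temp dir and show their contents
dir=$(mktemp -d)
printf 'master\n' > "$dir/masters"
printf 'slave1\nslave2\n' > "$dir/workers"
cat "$dir/masters"    # -> master
cat "$dir/workers"    # -> slave1 and slave2, one per line
```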
28. Configuration is done; now format your namenode.
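A minimal sketch of the format-and-start sequence, run from the master (the jps expectations assume the master/slave layout configured above):

```shell
# One-time: initialize the namenode directory (erases any old HDFS metadata)
hdfs namenode -format

# Bring the cluster up from the master
start-dfs.sh    # NameNode on master, DataNodes on the workers
start-yarn.sh   # ResourceManager on master, NodeManagers on the workers

# Verify with jps: master should show NameNode, SecondaryNameNode and
# ResourceManager; each slave should show DataNode and NodeManager
```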