Tuesday 5 September 2017

Installation of Hadoop 2.6.0 on Ubuntu 16.04.3 (Single-Node Cluster)

Step-by-Step Tutorial to Install Hadoop on Ubuntu (with detailed Screenshots and Explanations)

[Note: Here, Hadoop 2.6.0 is installed on Ubuntu 16.04.3, but this document can be referred to for installing any version of Hadoop on any version of Ubuntu (14.04 or above).]


Step 1:
soumitra@Soumitra-PC:~$ cd ~

Step 2:
# Update the source list
soumitra@Soumitra-PC:~$ sudo apt-get update



Step 3:
# The OpenJDK project is the default version of Java
# provided by a supported Ubuntu repository.

soumitra@Soumitra-PC:~$ sudo apt-get install default-jdk
 


Step 4:
soumitra@Soumitra-PC:~$ java -version
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-8u131-b11-2ubuntu1.16.04.3-b11)
OpenJDK 64-Bit Server VM (build 25.131-b11, mixed mode)


Step 5:
#Adding a dedicated Hadoop group and a dedicated Hadoop user in that group
soumitra@Soumitra-PC:~$ sudo addgroup hadoop
Adding group `hadoop' (GID 1001) ...
Done.

soumitra@Soumitra-PC:~$ sudo adduser --ingroup hadoop hduser
Adding user `hduser' ...
Adding new user `hduser' (1001) with group `hadoop' ...
Creating home directory `/home/hduser' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for hduser
Enter the new value, or press ENTER for the default
Full Name []:
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n] Y

#We can check if we created the hadoop group and hduser user:
soumitra@Soumitra-PC:~$ groups hduser
hduser : hadoop
 


Step 6:
#Installing SSH

#ssh has two main components:
  1. ssh : The command we use to connect to remote machines - the client.    
  2. sshd : The daemon that is running on the server and allows clients to connect to the server.    

#The ssh client is pre-enabled on Linux, but in order to start the sshd daemon, we need to install the ssh package first.

soumitra@Soumitra-PC:~$ sudo apt-get install ssh 



#This will install ssh on our machine. If we get something similar to the following, we can assume it is set up properly:
soumitra@Soumitra-PC:~$ which ssh
/usr/bin/ssh
soumitra@Soumitra-PC:~$ which sshd
/usr/sbin/sshd
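
#Optionally (not part of the original steps), we can also confirm that the sshd daemon is running; on Ubuntu 16.04 the service is simply called ssh:

soumitra@Soumitra-PC:~$ sudo systemctl status ssh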
 


Step 7:
#Create and Setup SSH Certificates
#Hadoop requires SSH access to manage its nodes, i.e. remote machines plus our local machine.
#For our single-node setup of Hadoop, we therefore need to configure SSH access to localhost.
#So, we need to have SSH up and running on our machine and configure it to allow SSH public-key authentication.
#Hadoop uses SSH (to access its nodes), which would normally require the user to enter a password. However, this requirement can be eliminated by creating and setting up SSH certificates using the following commands. If asked for a filename, just leave it blank and press the Enter key to continue.

soumitra@Soumitra-PC:~$ su hduser
Password:  

hduser@Soumitra-PC:~$ ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:/M18Dv+ku5js8npZvYi45Fr4F84SzoqXBUO5xAfo+/8 hduser@Soumitra-PC
The key's randomart image is:
+---[RSA 2048]----+
|      o.o        |
|     . = .       |
|    . o o        |
|     . =         |
|      . S       .|
|     .  .+ +    o|
|      ..=o* * .oo|
|      .+== *.B++ |
|     ..o+==EB*B+.|
+----[SHA256]-----+




Note: Before you run the next two commands, do a 'cd' into the /home/<username> directory first and then run them. In my case the username is 'soumitra'; for you it will be different. To check your terminal's username, open a new terminal and look at the text before the @ symbol in the prompt - that is your username. Please refer to the screenshot below:

hduser@Soumitra-PC:~$ cd /home/soumitra

hduser@Soumitra-PC:/home/soumitra$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

#The second command adds the newly created key to the list of authorized keys so that Hadoop can use ssh without prompting for a password. We can check if ssh works:

hduser@Soumitra-PC:/home/soumitra$ ssh localhost

The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:e8SM2INFNu8NhXKzdX9bOyKIKbMoUSK4dXKonloN8JY.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04.03 LTS (GNU/Linux 4.10.0-28-generic x86_64)
...
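
#If ssh localhost still prompts for a password at this point, a common fix (not part of the original tutorial) is to tighten the permissions on hduser's .ssh directory, since sshd ignores keys that are group- or world-writable:

hduser@Soumitra-PC:~$ chmod 700 ~/.ssh
hduser@Soumitra-PC:~$ chmod 600 ~/.ssh/authorized_keys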


Step 8:
#Install Hadoop
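
#If the Hadoop 2.6.0 tarball has not been downloaded yet, it can be fetched first, e.g. from the Apache archive (any mirror that still carries hadoop-2.6.0 will do):

hduser@Soumitra-PC:~$ wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz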


hduser@Soumitra-PC:~$ tar xvzf hadoop-2.6.0.tar.gz   

 

We want to move the Hadoop installation to the /usr/local/hadoop directory. So, we should create the directory first:

hduser@Soumitra-PC:~$ sudo mkdir -p /usr/local/hadoop
[sudo] password for hduser:
hduser is not in the sudoers file. This incident will be reported.

This can be resolved by switching to a user that already has sudo rights (here, soumitra) and then adding hduser to the sudo group:
hduser@Soumitra-PC:~/hadoop-2.6.0$ su soumitra
Password:

soumitra@Soumitra-PC:/home/hduser$ sudo adduser hduser sudo
[sudo] password for soumitra:
Adding user `hduser' to group `sudo' ...
Adding user hduser to group sudo
Done.

Now that hduser has sudo privileges, we can move the Hadoop installation to the /usr/local/hadoop directory without any problem:
soumitra@Soumitra-PC:/home/hduser$ sudo su hduser

hduser@Soumitra-PC:~$ sudo mkdir -p /usr/local/hadoop
[sudo] password for hduser:

Very important: Before going into the next step, don't forget to do a cd into the directory hadoop-2.6.0. Refer to the screenshot below:

hduser@Soumitra-PC:~/hadoop-2.6.0$ sudo mv * /usr/local/hadoop
hduser@Soumitra-PC:~/hadoop-2.6.0$ sudo chown -R hduser:hadoop /usr/local/hadoop
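
#As an optional check, we can verify the move and the ownership change:

hduser@Soumitra-PC:~/hadoop-2.6.0$ ls -l /usr/local/hadoop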
 


Step 9:
#Setup Configuration Files
#The following files will have to be modified to complete the Hadoop setup:
  1. ~/.bashrc        
  2. /usr/local/hadoop/etc/hadoop/hadoop-env.sh        
  3. /usr/local/hadoop/etc/hadoop/core-site.xml        
  4. /usr/local/hadoop/etc/hadoop/mapred-site.xml (copied from mapred-site.xml.template)
  5. /usr/local/hadoop/etc/hadoop/hdfs-site.xml
            
1. ~/.bashrc:
#Before editing the .bashrc file in hduser's home directory, we need to find the path where Java has been installed, in order to set the JAVA_HOME environment variable using the following command:

hduser@Soumitra-PC:~$ update-alternatives --config java
There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java-7-openjdk-i386/jre/bin/java
Nothing to configure.

#Now we can append the following to the end of ~/.bashrc:
hduser@Soumitra-PC:~$ vi ~/.bashrc

#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-i386
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END

hduser@Soumitra-PC:~$ source ~/.bashrc

#Very Very Important:
Please note that JAVA_HOME (the 2nd line in the above extension of ~/.bashrc) should be set to the path just before '/bin/' in the output when you run the command readlink -f /usr/bin/javac below. On some machines it may show i386 in place of amd64, so you should modify the path accordingly. Also, check the Java version number: in this case it is 8, but it can be 7 or any other version as well.

hduser@ubuntu-VirtualBox:~$ javac -version
javac 1.8.0_111

hduser@ubuntu-VirtualBox:~$ which javac
/usr/bin/javac

hduser@ubuntu-VirtualBox:~$ readlink -f /usr/bin/javac
/usr/lib/jvm/java-7-openjdk-i386/bin/javac
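
A convenient way to avoid hard-coding the path (a small shortcut, not part of the original tutorial) is to derive JAVA_HOME from the readlink output itself, stripping the trailing /bin/javac:

hduser@ubuntu-VirtualBox:~$ export JAVA_HOME=$(readlink -f /usr/bin/javac | sed "s:/bin/javac::")
hduser@ubuntu-VirtualBox:~$ echo $JAVA_HOME
/usr/lib/jvm/java-7-openjdk-i386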

2. /usr/local/hadoop/etc/hadoop/hadoop-env.sh

#We need to set JAVA_HOME by modifying hadoop-env.sh file.

#The same JAVA_HOME (2nd line in the above ~/.bashrc) needs to be copied here.
hduser@Soumitra-PC:~$ vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-i386

#Adding the above statement in the hadoop-env.sh file ensures that the value of the JAVA_HOME variable will be available to Hadoop whenever it is started up.

3. /usr/local/hadoop/etc/hadoop/core-site.xml:

#The /usr/local/hadoop/etc/hadoop/core-site.xml file contains configuration properties that Hadoop uses when starting up.
#This file can be used to override the default settings that Hadoop starts with.
hduser@Soumitra-PC:~$ sudo mkdir -p /app/hadoop/tmp

hduser@Soumitra-PC:~$ sudo chown hduser:hadoop /app/hadoop/tmp

#Open the file and enter the following in between the <configuration></configuration> tag:
hduser@Soumitra-PC:~$ vi /usr/local/hadoop/etc/hadoop/core-site.xml

<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system.  A URI whose
scheme and authority determine the FileSystem implementation.  The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class.  The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
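
#As an optional sanity check (assuming the PATH and JAVA_HOME set in ~/.bashrc above are active in the current shell), Hadoop's getconf tool can print the value it will actually use, which should now be hdfs://localhost:54310:

hduser@Soumitra-PC:~$ hdfs getconf -confKey fs.default.name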
 
4. /usr/local/hadoop/etc/hadoop/mapred-site.xml

#By default, the /usr/local/hadoop/etc/hadoop/ folder contains
#/usr/local/hadoop/etc/hadoop/mapred-site.xml.template
#file, which has to be copied to (or renamed as) mapred-site.xml:

hduser@Soumitra-PC:~$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml

#The /usr/local/hadoop/etc/hadoop/mapred-site.xml file is used to specify which framework is being used for MapReduce.
#We need to enter the following content in between the <configuration></configuration> tag:

hduser@Soumitra-PC:~$ vi /usr/local/hadoop/etc/hadoop/mapred-site.xml

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs
at.  If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
</configuration>

5. /usr/local/hadoop/etc/hadoop/hdfs-site.xml

#The /usr/local/hadoop/etc/hadoop/hdfs-site.xml file needs to be configured for each host in the cluster that is being used. It specifies the directories which will be used as the namenode and the datanode on that host.
#Before editing this file, we need to create two directories which will contain the namenode and the datanode for this Hadoop installation.
#This can be done using the following commands:

hduser@Soumitra-PC:~$ sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode

hduser@Soumitra-PC:~$ sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode

hduser@Soumitra-PC:~$ sudo chown -R hduser:hadoop /usr/local/hadoop_store
#Open the file and enter the following content in between the <configuration></configuration> tag:
hduser@Soumitra-PC:~$ vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>
</configuration>

Step 10:
#Format the New Hadoop Filesystem
#Now, the Hadoop file system needs to be formatted so that we can start using it. The format command should be issued with write permission, since it creates the current directory under the /usr/local/hadoop_store/hdfs/namenode folder:

hduser@Soumitra-PC:~$ hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
17/09/03 09:22:20 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = Soumitra-PC/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
...
...
...
17/09/03 09:22:21 INFO util.ExitUtil: Exiting with status 0
17/09/03 09:22:21 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Soumitra-PC/127.0.1.1
************************************************************/
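
#As the DEPRECATED message above indicates, the same formatting can also be done with the hdfs command instead of the hadoop script:

hduser@Soumitra-PC:~$ hdfs namenode -format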




Step 11:
#Starting Hadoop
Now it's time to start the newly installed single-node cluster.
We can use start-all.sh, or start-dfs.sh and start-yarn.sh separately.

hduser@Soumitra-PC:~$ cd /usr/local/hadoop/sbin

hduser@Soumitra-PC:/usr/local/hadoop/sbin$ ls
distribute-exclude.sh    start-all.cmd        stop-balancer.sh
hadoop-daemon.sh         start-all.sh         stop-dfs.cmd
hadoop-daemons.sh        start-balancer.sh    stop-dfs.sh
hdfs-config.cmd          start-dfs.cmd        stop-secure-dns.sh
hdfs-config.sh           start-dfs.sh         stop-yarn.cmd
httpfs.sh                start-secure-dns.sh  stop-yarn.sh
kms.sh                   start-yarn.cmd       yarn-daemon.sh
mr-jobhistory-daemon.sh  start-yarn.sh        yarn-daemons.sh
refresh-namenodes.sh     stop-all.cmd
slaves.sh                stop-all.sh

#Start NameNode daemon and DataNode daemon:

hduser@Soumitra-PC:/usr/local/hadoop/sbin$ start-dfs.sh
16/11/10 14:51:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-Soumitra-PC.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-Soumitra-PC.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is SHA256:e9SM2INFNu8NhXKzdX9bOyKIKbMoUSK4dXKonloN7JY.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-Soumitra-PC.out
16/11/10 14:52:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

#Start ResourceManager daemon and NodeManager daemon:

hduser@Soumitra-PC:/usr/local/hadoop/sbin$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-Soumitra-PC.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-Soumitra-PC.out

#We can check if it's really up and running:

hduser@Soumitra-PC:/usr/local/hadoop/sbin$ jps
14306 DataNode
14660 ResourceManager
14505 SecondaryNameNode
14205 NameNode
14765 NodeManager
15166 Jps



#The output means that we now have a functional instance of Hadoop running on our machine.
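
#If any of these daemons is missing from the jps output, the corresponding log under /usr/local/hadoop/logs (the exact file names are printed by start-dfs.sh and start-yarn.sh above) usually explains why, for example:

hduser@Soumitra-PC:/usr/local/hadoop/sbin$ tail -n 50 /usr/local/hadoop/logs/hadoop-hduser-namenode-Soumitra-PC.out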

#Another way to check is using netstat:

hduser@Soumitra-PC:/usr/local/hadoop/sbin$ netstat -plten | grep java
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp   0  0 127.0.0.1:54310 0.0.0.0:*  LISTEN  1001  682747  14205/java
tcp   0  0 0.0.0.0:50090   0.0.0.0:*  LISTEN  1001  684425  14505/java
tcp   0  0 0.0.0.0:50070   0.0.0.0:*  LISTEN  1001  681708  14205/java
tcp   0  0 0.0.0.0:50010   0.0.0.0:*  LISTEN  1001  682751  14306/java
tcp   0  0 0.0.0.0:50075   0.0.0.0:*  LISTEN  1001  682989  14306/java
tcp   0  0 0.0.0.0:50020   0.0.0.0:*  LISTEN  1001  681774  14306/java
tcp6  0  0 :::8040         :::*       LISTEN  1001  686741  14765/java
tcp6  0  0 :::8042         :::*       LISTEN  1001  687454  14765/java
tcp6  0  0 :::35094        :::*       LISTEN  1001  687439  14765/java
tcp6  0  0 :::8088         :::*       LISTEN  1001  687453  14660/java
tcp6  0  0 :::8030         :::*       LISTEN  1001  684963  14660/java
tcp6  0  0 :::8031         :::*       LISTEN  1001  684959  14660/java
tcp6  0  0 :::8032         :::*       LISTEN  1001  687435  14660/java
tcp6  0  0 :::8033         :::*       LISTEN  1001  687460  14660/java

Step 12:
#Stopping Hadoop
#In order to stop all the daemons running on our machine, we can run stop-all.sh or (stop-dfs.sh and stop-yarn.sh):

hduser@Soumitra-PC:/usr/local/hadoop/sbin$ ls
distribute-exclude.sh    start-all.cmd        stop-balancer.sh
hadoop-daemon.sh         start-all.sh         stop-dfs.cmd
hadoop-daemons.sh        start-balancer.sh    stop-dfs.sh
hdfs-config.cmd          start-dfs.cmd        stop-secure-dns.sh
hdfs-config.sh           start-dfs.sh         stop-yarn.cmd
httpfs.sh                start-secure-dns.sh  stop-yarn.sh
kms.sh                   start-yarn.cmd       yarn-daemon.sh
mr-jobhistory-daemon.sh  start-yarn.sh        yarn-daemons.sh
refresh-namenodes.sh     stop-all.cmd
slaves.sh                stop-all.sh

hduser@Soumitra-PC:/usr/local/hadoop/sbin$ stop-dfs.sh
16/11/10 15:23:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping namenodes on [localhost]
localhost: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
16/11/10 15:23:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

hduser@Soumitra-PC:/usr/local/hadoop/sbin$ stop-yarn.sh
stopping yarn daemons
stopping resourcemanager
localhost: stopping nodemanager
no proxyserver to stop
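
#To confirm that all the Hadoop daemons have been shut down, jps should now list only the Jps process itself:

hduser@Soumitra-PC:/usr/local/hadoop/sbin$ jps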
[Screenshot: information about the various nodes]
 


Step 13:
#Hadoop Web Interfaces
#Let's start Hadoop again and have a look at its web UIs:
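
#As the netstat output in Step 11 shows, the NameNode web UI listens on port 50070 and the ResourceManager web UI on port 8088 (the Hadoop 2.x defaults), so once the daemons are running they can be opened in a browser at:

http://localhost:50070   (HDFS NameNode overview)
http://localhost:8088    (YARN ResourceManager / applications)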


[Note: For further information on how to access the web UIs of the Hadoop components, please refer to this link.]


References

[1] http://www.bogotobogo.com/Hadoop/BigData_hadoop_Install_on_ubuntu_16_04_single_node_cluster.php



Document Prepared by Mr. Soumitra Ghosh

Assistant Professor, Information Technology,
C.V.Raman College of Engineering, Bhubaneswar


Contact: soumitraghosh@cvrce.edu.in
