Document prepared by Mr. Soumitra Ghosh
Assistant Professor, Information Technology,
C.V.Raman College of Engineering, Bhubaneswar
Contact: soumitraghosh@cvrce.edu.in
After logging in as the dedicated Hadoop user created during installation (in my case, hduser), go to the Hadoop installation folder (in my case, /usr/local/hadoop). Inside the hadoop directory there is an 'sbin' folder containing several scripts, such as start-all.sh, stop-all.sh, start-dfs.sh, stop-dfs.sh, hadoop-daemons.sh and yarn-daemons.sh. Executing these scripts lets us start and stop the Hadoop daemons in various ways.
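To avoid having to cd into the sbin directory every time, the scripts can be put on the PATH. A minimal sketch, assuming the installation path /usr/local/hadoop used throughout this document (add these lines to the hduser account's ~/.bashrc to make them permanent):

```shell
# Assumes Hadoop lives at /usr/local/hadoop, as in this walkthrough.
export HADOOP_HOME=/usr/local/hadoop
# sbin holds the start/stop scripts; bin holds the hadoop and hdfs commands.
export PATH="$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin"
```

With this in place, start-dfs.sh, stop-all.sh, etc. can be run from any directory.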
hduser@Soumitra-PC:~$ cd /usr/local/hadoop/sbin
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ ls
distribute-exclude.sh start-all.cmd stop-balancer.sh
hadoop-daemon.sh start-all.sh stop-dfs.cmd
hadoop-daemons.sh start-balancer.sh stop-dfs.sh
hdfs-config.cmd start-dfs.cmd stop-secure-dns.sh
hdfs-config.sh start-dfs.sh stop-yarn.cmd
httpfs.sh start-secure-dns.sh stop-yarn.sh
kms.sh start-yarn.cmd yarn-daemon.sh
mr-jobhistory-daemon.sh start-yarn.sh yarn-daemons.sh
refresh-namenodes.sh stop-all.cmd
slaves.sh stop-all.sh
1. Starting and stopping all the daemons at the same time:
#You can start and stop all the daemons at once, using the start-all.sh and stop-all.sh scripts:
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-Soumitra-PC.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-Soumitra-PC.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-Soumitra-PC.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-Soumitra-PC.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-Soumitra-PC.out
#We run jps to check whether all the components are running.
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ jps
10072 DataNode
11160 Jps
10441 ResourceManager
10281 SecondaryNameNode
9950 NameNode
10559 NodeManager
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [localhost]
localhost: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
localhost: stopping nodemanager
no proxyserver to stop
#We run jps to check whether all the components have stopped.
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ jps
11711 Jps
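Rather than reading the jps output by eye each time, the check can be scripted. A small sketch (the daemon_running helper is my own name, not a Hadoop tool) that greps jps-style output for a daemon class name:

```shell
#!/bin/sh
# daemon_running: read "pid ClassName" lines on stdin and succeed if the
# given daemon class name appears as a whole word. Illustrative helper only.
daemon_running() {
    grep -qw "$1"
}

# Typical use against a live cluster:
#   jps | daemon_running NameNode && echo "NameNode is up"
#   jps | daemon_running NameNode || echo "NameNode is down"
```

The whole-word match (-w) matters: otherwise the pattern NameNode would also match the SecondaryNameNode line.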
2. Starting a group of daemons at the same time:
#Starting Namenode, Datanode and SecondaryNamenode at the same time.
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-Soumitra-PC.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-Soumitra-PC.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is SHA256:e9SM2INFNu8NhXKzdX9bOyKIKbMoUSK4dXKonloN7JY.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-Soumitra-PC.out
#Starting ResourceManager daemon and NodeManager daemon:
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-Soumitra-PC.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-Soumitra-PC.out
#We can check if it's really up and running:
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ jps
14306 DataNode
14660 ResourceManager
14505 SecondaryNameNode
14205 NameNode
14765 NodeManager
15166 Jps
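After start-dfs.sh and start-yarn.sh, all five daemons from the transcript above should be present. This can be checked with a simple count; a sketch (the daemon list is taken from this walkthrough, and the helper name is mine):

```shell
#!/bin/sh
# count_daemons: count expected Hadoop daemons in jps-style input.
# The five names match the daemons started by start-dfs.sh + start-yarn.sh.
count_daemons() {
    grep -cEw 'NameNode|DataNode|SecondaryNameNode|ResourceManager|NodeManager'
}

# Typical use: [ "$(jps | count_daemons)" -eq 5 ] && echo "all daemons up"
```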
3. Starting and stopping each daemon individually
#Starting each daemon separately
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ hadoop-daemons.sh start namenode
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-Soumitra-PC.out
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ jps
12384 NameNode
12453 Jps
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ hadoop-daemons.sh start datanode
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-Soumitra-PC.out
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ jps
12384 NameNode
12621 Jps
12543 DataNode
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ hadoop-daemons.sh start secondarynamenode
localhost: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-Soumitra-PC.out
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ jps
12752 Jps
12384 NameNode
12709 SecondaryNameNode
12543 DataNode
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ yarn-daemons.sh start resourcemanager
localhost: starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-Soumitra-PC.out
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ jps
12384 NameNode
12852 ResourceManager
12709 SecondaryNameNode
13078 Jps
12543 DataNode
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ yarn-daemons.sh start nodemanager
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-Soumitra-PC.out
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ jps
12384 NameNode
13298 Jps
12852 ResourceManager
12709 SecondaryNameNode
13179 NodeManager
12543 DataNode
#Stopping each daemon separately
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ yarn-daemons.sh stop nodemanager
localhost: stopping nodemanager
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ jps
12384 NameNode
12852 ResourceManager
12709 SecondaryNameNode
13514 Jps
12543 DataNode
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ yarn-daemons.sh stop resourcemanager
localhost: stopping resourcemanager
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ jps
12384 NameNode
12709 SecondaryNameNode
12543 DataNode
13615 Jps
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ hadoop-daemons.sh stop secondarynamenode
localhost: stopping secondarynamenode
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ jps
12384 NameNode
13705 Jps
12543 DataNode
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ hadoop-daemons.sh stop datanode
localhost: stopping datanode
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ jps
12384 NameNode
13792 Jps
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ hadoop-daemons.sh stop namenode
localhost: stopping namenode
hduser@Soumitra-PC:/usr/local/hadoop/sbin$ jps
13885 Jps
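The individual start/stop commands above can be wrapped so that the stop order is always the reverse of the start order, as in the transcript. A sketch (the script layout and function names are mine; note also that hadoop-daemon.sh, singular, acts only on the local machine, while hadoop-daemons.sh runs on every host listed in the slaves file):

```shell
#!/bin/sh
# Illustrative wrapper, not part of Hadoop. Assumes the sbin scripts are on PATH.
start_each() {
    # HDFS daemons first, then YARN, matching the order in this walkthrough
    for d in namenode datanode secondarynamenode; do
        hadoop-daemons.sh start "$d"
    done
    for d in resourcemanager nodemanager; do
        yarn-daemons.sh start "$d"
    done
}

stop_each() {
    # Reverse of the start order
    for d in nodemanager resourcemanager; do
        yarn-daemons.sh stop "$d"
    done
    for d in secondarynamenode datanode namenode; do
        hadoop-daemons.sh stop "$d"
    done
}
```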