Monday, 11 September 2017

Differences between the ways to start/stop Hadoop daemon processes


There are three ways to start/stop the Hadoop daemon processes:
  1. start-all.sh and stop-all.sh
  2. start-dfs.sh, stop-dfs.sh and start-yarn.sh, stop-yarn.sh
  3. hadoop-daemon.sh start/stop namenode/datanode and yarn-daemon.sh start/stop resourcemanager

1. start-all.sh and stop-all.sh :

start-all.sh & stop-all.sh ==> used to start and stop all Hadoop daemons at once.

Note:
Running them on the master machine starts/stops the daemons on all the nodes of the cluster.
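As a sketch, assuming Hadoop is installed under $HADOOP_HOME and you are logged in on the master node, the whole cluster can be brought up and down like this:

```shell
# Run on the master node; assumes $HADOOP_HOME points at the Hadoop install.
$HADOOP_HOME/sbin/start-all.sh   # starts NameNode, DataNodes, ResourceManager, NodeManagers
jps                              # lists the running Java daemons, to verify they came up
$HADOOP_HOME/sbin/stop-all.sh    # stops all of them again
```

Note that in Hadoop 2.x these two scripts are deprecated in favour of the per-service scripts described next.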

2. start-dfs.sh, stop-dfs.sh and start-yarn.sh, stop-yarn.sh :

These also start/stop all the daemons on all the nodes from the master machine, but handle the HDFS and YARN daemons separately.
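A sketch of the per-service variant, again assuming $HADOOP_HOME is set and the commands are run on the master node:

```shell
# Run on the master node; assumes $HADOOP_HOME points at the Hadoop install.
$HADOOP_HOME/sbin/start-dfs.sh    # HDFS daemons: NameNode, SecondaryNameNode, DataNodes
$HADOOP_HOME/sbin/start-yarn.sh   # YARN daemons: ResourceManager, NodeManagers

# ...and the matching stop scripts:
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/stop-dfs.sh
```

This split is useful when you want to restart only one layer, e.g. bounce YARN without touching HDFS.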

3. hadoop-daemon.sh start namenode/datanode and yarn-daemon.sh start/stop resourcemanager :

These start/stop individual daemons on an individual machine manually. You need to log in to that particular node and run the command there.
Use case:
Suppose you have added a new node to your cluster and need to start only the DataNode daemon on this machine:
sbin/hadoop-daemon.sh start datanode
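If the new node should also run YARN containers, its NodeManager can be started the same way; a quick sketch, run on the new node itself and assuming Hadoop is installed under $HADOOP_HOME:

```shell
# Run on the newly added node; assumes $HADOOP_HOME points at the Hadoop install.
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode   # HDFS worker daemon
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager  # YARN worker daemon
jps   # DataNode and NodeManager should now appear among the running JVMs
```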

