Differences between the ways to start/stop Hadoop daemon processes:
There are three ways to run the Hadoop daemon processes:
- start-all.sh and stop-all.sh
- start-dfs.sh, stop-dfs.sh and start-yarn.sh, stop-yarn.sh
- hadoop-daemon.sh start/stop namenode/datanode and yarn-daemon.sh start/stop resourcemanager/nodemanager
1. start-all.sh and stop-all.sh :
start-all.sh & stop-all.sh ==> used to start and stop all the Hadoop daemons at once.
Note:
Running these scripts on the master machine will start/stop the daemons on all the nodes of the cluster. (In Hadoop 2.x these scripts are deprecated in favour of start-dfs.sh and start-yarn.sh, though they still work.)
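As a sketch, assuming $HADOOP_HOME points at your Hadoop installation directory on the master node:

```shell
# Run on the master node; assumes $HADOOP_HOME is set to the Hadoop install dir.
# Starts the NameNode, DataNodes, ResourceManager, and NodeManagers
# on every node listed in the cluster configuration.
$HADOOP_HOME/sbin/start-all.sh

# ...and later, stop everything at once:
$HADOOP_HOME/sbin/stop-all.sh
```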
2. start-dfs.sh, stop-dfs.sh and start-yarn.sh, stop-yarn.sh :
These also start/stop all the daemons on all the nodes from the master machine, but the HDFS and YARN daemons are handled separately.
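For example, again from the master node and assuming $HADOOP_HOME points at the installation:

```shell
# Start only the HDFS daemons (NameNode, DataNodes, SecondaryNameNode):
$HADOOP_HOME/sbin/start-dfs.sh

# Start only the YARN daemons (ResourceManager, NodeManagers):
$HADOOP_HOME/sbin/start-yarn.sh

# Stop them in the reverse order:
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/stop-dfs.sh
```

Splitting HDFS and YARN this way is useful when you want to restart one layer (say, YARN after a scheduler config change) without disturbing the other.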
3. hadoop-daemon.sh start/stop namenode/datanode and yarn-daemon.sh start/stop resourcemanager/nodemanager :
Used to start or stop individual daemons on a single machine manually. You need to log in to the particular node and run these commands there.
Use case:
Suppose you have added a new node to your cluster and need to start the DataNode daemon only on this machine:
sbin/hadoop-daemon.sh start datanode
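If the new node should also run YARN containers, the NodeManager can be started the same way, and jps (shipped with the JDK) shows which daemons are up. A minimal sketch, run on the new node itself, assuming $HADOOP_HOME is set:

```shell
# On the newly added node, start the per-node daemons individually:
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode   # HDFS storage daemon
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager  # YARN compute daemon

# Verify which daemons are running (jps lists local JVM processes);
# DataNode and NodeManager should appear in the output:
jps
```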