
HDFS and YARN Start and Stop Commands for a Hadoop Cluster


Suppose we have only 3 Linux virtual machines, with hostnames hadoop01, hadoop02, and hadoop03. Across these 3 machines, the Hadoop cluster is deployed as follows:

  1. hadoop01: 1 namenode, 1 datanode, 1 journalnode, 1 zkfc, 1 resourcemanager, 1 nodemanager;
  2. hadoop02: 1 namenode, 1 datanode, 1 journalnode, 1 zkfc, 1 resourcemanager, 1 nodemanager;
  3. hadoop03: 1 datanode, 1 journalnode, 1 nodemanager.
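
For reference, in Hadoop 2.x this kind of layout is normally reflected in the slaves file (renamed workers in Hadoop 3.x), which start-dfs.sh and start-yarn.sh read to decide which hosts to reach over ssh; the two NameNode hosts come from the HA settings in hdfs-site.xml, not from this file. A minimal sketch, assuming the configuration lives under /root/apps/hadoop/etc/hadoop as in this install:

    # /root/apps/hadoop/etc/hadoop/slaves -- one worker host per line;
    # datanodes and nodemanagers are started on each host listed here.
    hadoop01
    hadoop02
    hadoop03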

Below we go through the commands for starting and stopping HDFS and YARN.

1. Starting the HDFS cluster (using Hadoop's batch start script)

    /root/apps/hadoop/sbin/start-dfs.sh

    [root@hadoop01 ~]# /root/apps/hadoop/sbin/start-dfs.sh
    Starting namenodes on [hadoop01 hadoop02]
    hadoop01: starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop01.out
    hadoop02: starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop02.out
    hadoop03: starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop03.out
    hadoop02: starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop02.out
    hadoop01: starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop01.out
    Starting journal nodes [hadoop01 hadoop02 hadoop03]
    hadoop03: starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop03.out
    hadoop02: starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop02.out
    hadoop01: starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop01.out
    Starting ZK Failover Controllers on NN hosts [hadoop01 hadoop02]
    hadoop01: starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop01.out
    hadoop02: starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop02.out
    [root@hadoop01 ~]#

From the startup log above, we can see that the start-dfs.sh script batch-starts the namenode, datanode, journalnode, and zkfc processes on the various nodes over ssh.
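
Because the batch script relies on ssh, passwordless ssh from hadoop01 to every node (including hadoop01 itself) must already be configured, or each start will prompt for a password. A quick sanity check, as a sketch assuming the three hostnames resolve as above:

    # Each line should print the remote hostname without asking for a password;
    # BatchMode=yes makes ssh fail fast instead of prompting.
    for host in hadoop01 hadoop02 hadoop03; do
        ssh -o BatchMode=yes "$host" hostname
    done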

2. Stopping the HDFS cluster (using Hadoop's batch stop script)

    /root/apps/hadoop/sbin/stop-dfs.sh

    [root@hadoop01 ~]# /root/apps/hadoop/sbin/stop-dfs.sh
    Stopping namenodes on [hadoop01 hadoop02]
    hadoop02: stopping namenode
    hadoop01: stopping namenode
    hadoop02: stopping datanode
    hadoop03: stopping datanode
    hadoop01: stopping datanode
    Stopping journal nodes [hadoop01 hadoop02 hadoop03]
    hadoop03: stopping journalnode
    hadoop02: stopping journalnode
    hadoop01: stopping journalnode
    Stopping ZK Failover Controllers on NN hosts [hadoop01 hadoop02]
    hadoop01: stopping zkfc
    hadoop02: stopping zkfc
    [root@hadoop01 ~]#

3. Starting individual processes

    [root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start namenode
    starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop01.out
    [root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start namenode
    starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop02.out

    [root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start datanode
    starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop01.out
    [root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start datanode
    starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop02.out
    [root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh start datanode
    starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop03.out

    [root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start journalnode
    starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop01.out
    [root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start journalnode
    starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop02.out
    [root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh start journalnode
    starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop03.out

    [root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start zkfc
    starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop01.out
    [root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start zkfc
    starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop02.out
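
Typing these commands node by node gets tedious; the same per-daemon starts can be wrapped in a small ssh loop. A sketch, assuming passwordless ssh and an identical install path on every node (this is essentially what start-dfs.sh automates):

    HADOOP_SBIN=/root/apps/hadoop/sbin

    # journalnode and datanode run on all three nodes
    for host in hadoop01 hadoop02 hadoop03; do
        ssh "$host" "$HADOOP_SBIN/hadoop-daemon.sh start journalnode"
        ssh "$host" "$HADOOP_SBIN/hadoop-daemon.sh start datanode"
    done

    # namenode and zkfc run only on the two NameNode hosts
    for host in hadoop01 hadoop02; do
        ssh "$host" "$HADOOP_SBIN/hadoop-daemon.sh start namenode"
        ssh "$host" "$HADOOP_SBIN/hadoop-daemon.sh start zkfc"
    done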

After starting, check the processes on each of the 3 virtual machines (a one-pass loop for this check follows the listings):

    [root@hadoop01 ~]# jps
    6695 DataNode
    2002 QuorumPeerMain
    6879 DFSZKFailoverController
    7035 Jps
    6800 JournalNode
    6580 NameNode
    [root@hadoop01 ~]#

    [root@hadoop02 ~]# jps
    6360 JournalNode
    6436 DFSZKFailoverController
    2130 QuorumPeerMain
    6541 Jps
    6255 DataNode
    6155 NameNode
    [root@hadoop02 ~]#

    [root@hadoop03 apps]# jps
    5331 Jps
    5103 DataNode
    5204 JournalNode
    2258 QuorumPeerMain
    [root@hadoop03 apps]#
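
The same check can be run from hadoop01 in one pass, as a sketch assuming passwordless ssh and that jps is on the PATH of a non-interactive shell on each node:

    for host in hadoop01 hadoop02 hadoop03; do
        echo "== $host =="
        ssh "$host" jps
    done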

4. Stopping individual processes

    [root@hadoop01 ~]# jps
    6695 DataNode
    2002 QuorumPeerMain
    8486 Jps
    6879 DFSZKFailoverController
    6800 JournalNode
    6580 NameNode
    [root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop zkfc
    stopping zkfc
    [root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop journalnode
    stopping journalnode
    [root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop datanode
    stopping datanode
    [root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop namenode
    stopping namenode
    [root@hadoop01 ~]# jps
    2002 QuorumPeerMain
    8572 Jps
    [root@hadoop01 ~]#
    [root@hadoop02 ~]# jps
    6360 JournalNode
    6436 DFSZKFailoverController
    2130 QuorumPeerMain
    7378 Jps
    6255 DataNode
    6155 NameNode
    [root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop zkfc
    stopping zkfc
    [root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop journalnode
    stopping journalnode
    [root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop datanode
    stopping datanode
    [root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop namenode
    stopping namenode
    [root@hadoop02 ~]# jps
    7455 Jps
    2130 QuorumPeerMain
    [root@hadoop02 ~]#
    [root@hadoop03 apps]# jps
    5103 DataNode
    5204 JournalNode
    5774 Jps
    2258 QuorumPeerMain
    [root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop journalnode
    stopping journalnode
    [root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop datanode
    stopping datanode
    [root@hadoop03 apps]# jps
    5818 Jps
    2258 QuorumPeerMain
    [root@hadoop03 apps]#
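
The manual stop order used above (zkfc first, then journalnode, datanode, and namenode last) can be scripted the same way. A sketch under the same assumptions as before; on hosts where a given daemon was never started, hadoop-daemon.sh simply reports there is nothing to stop and moves on:

    HADOOP_SBIN=/root/apps/hadoop/sbin

    for host in hadoop01 hadoop02 hadoop03; do
        for daemon in zkfc journalnode datanode namenode; do
            # hadoop03 has no namenode or zkfc; those iterations are harmless no-ops.
            ssh "$host" "$HADOOP_SBIN/hadoop-daemon.sh stop $daemon"
        done
    done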

5. Starting the YARN cluster (using Hadoop's batch start script)

    /root/apps/hadoop/sbin/start-yarn.sh

    [root@hadoop01 ~]# /root/apps/hadoop/sbin/start-yarn.sh
    starting yarn daemons
    starting resourcemanager, logging to /root/apps/hadoop/logs/yarn-root-resourcemanager-hadoop01.out
    hadoop03: starting nodemanager, logging to /root/apps/hadoop/logs/yarn-root-nodemanager-hadoop03.out
    hadoop02: starting nodemanager, logging to /root/apps/hadoop/logs/yarn-root-nodemanager-hadoop02.out
    hadoop01: starting nodemanager, logging to /root/apps/hadoop/logs/yarn-root-nodemanager-hadoop01.out
    [root@hadoop01 ~]#

From the startup log above, we can see that start-yarn.sh starts only one ResourceManager process, on the local machine, while the nodemanagers on all 3 machines are started over ssh. The ResourceManager on hadoop02 therefore has to be started manually.

6. Starting the ResourceManager process on hadoop02

    /root/apps/hadoop/sbin/yarn-daemon.sh start resourcemanager
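
Once both ResourceManagers are up, their HA roles can be checked with yarn rmadmin. A sketch, assuming the RM IDs are rm1 and rm2 under yarn.resourcemanager.ha.rm-ids in yarn-site.xml (adjust to your actual IDs):

    yarn rmadmin -getServiceState rm1    # expected: active or standby
    yarn rmadmin -getServiceState rm2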

7. Stopping YARN

    /root/apps/hadoop/sbin/stop-yarn.sh

    [root@hadoop01 ~]# /root/apps/hadoop/sbin/stop-yarn.sh
    stopping yarn daemons
    stopping resourcemanager
    hadoop01: stopping nodemanager
    hadoop03: stopping nodemanager
    hadoop02: stopping nodemanager
    no proxyserver to stop
    [root@hadoop01 ~]#

From the stop log above, we can see that the stop-yarn.sh script stops only the local ResourceManager process, so the ResourceManager on hadoop02 has to be stopped separately.

8. Stopping the ResourceManager on hadoop02

    /root/apps/hadoop/sbin/yarn-daemon.sh stop resourcemanager
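
To confirm that no YARN daemons are left running anywhere, a quick check over ssh (same passwordless-ssh assumption as earlier):

    for host in hadoop01 hadoop02 hadoop03; do
        echo "== $host =="
        ssh "$host" jps | grep -E 'ResourceManager|NodeManager' || echo "no yarn daemons"
    done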

Note: individual HDFS-related processes are started and stopped with the "hadoop-daemon.sh" script, while individual YARN processes are started and stopped with the "yarn-daemon.sh" script.
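
As a quick reference, both scripts follow the same usage pattern (in Hadoop 3.x they are superseded by "hdfs --daemon" and "yarn --daemon", mentioned here only as an aside):

    # HDFS daemons: namenode | datanode | journalnode | zkfc
    /root/apps/hadoop/sbin/hadoop-daemon.sh (start|stop) <daemon>

    # YARN daemons: resourcemanager | nodemanager
    /root/apps/hadoop/sbin/yarn-daemon.sh (start|stop) <daemon>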

