Vanilla Hadoop clusters ship without a unified management tool, so as more and more components get deployed, managing the cluster becomes tedious: just starting and stopping it takes a long series of commands. So I wrote a shell script that starts and stops the whole cluster with a single command.
Notes:
- This script is meant for playing around on a cluster you set up yourself; use it with great caution on a real production cluster (then again, production environments probably rarely run vanilla Hadoop directly)!
- The machine hosting the script must have passwordless SSH login to every machine the script touches.
- Components currently covered: Zookeeper, HDFS, YARN (HA), JobHistoryServer, HBase, HiveMetaStore, HiveServer2.
- Read the notes at the top of the script before running it.
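The passwordless-SSH requirement can be sanity-checked before running the script. Below is a minimal sketch; the host list and user are placeholders mirroring the script's defaults, so adjust them to your cluster:

```shell
#!/bin/bash
# Hypothetical host list mirroring the script's variables; edit to match your cluster.
HOSTS=("master1" "master2" "worker1")
CLUSTER_USER=root

for h in "${HOSTS[@]}"; do
    # BatchMode=yes makes ssh fail instead of prompting for a password,
    # so a failure here means key-based login is not set up for that host.
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$CLUSTER_USER@$h" true 2>/dev/null; then
        echo "OK:   $h"
    else
        echo "FAIL: $h (passwordless SSH not configured)"
    fi
done
```

Any host reported as FAIL needs its key set up (e.g. via `ssh-copy-id`) before the cluster script will work unattended.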
Script contents
```shell
#!/bin/bash
# Hosts for each component; adjust to match your cluster.
export ZK_HOST=("master1" "master2" "worker1")
export HDFS_HOST=master1
export YARN_HOST=master1
export YARN_BAK_HOST=master2
export JOB_HOST=master2
export HBASE_HOST=master1
export HMETA_HOST=master1
export HSERVER_HOST=master2
export CLUSTER_USER=root

if [ $# -ne 1 ]; then
    echo -e "\n\tUsage: $0 {start|stop}\n"
    exit 1
fi

case "$1" in
start)
    echo "-------------------------- Starting Zookeeper ------------------------"
    for zk_host in "${ZK_HOST[@]}"; do
        echo -e "\nStart Zk_Server On Host [$zk_host]..."
        ssh $CLUSTER_USER@$zk_host "source /etc/profile;zkServer.sh start"
    done

    echo "---------------------------- Starting HDFS ---------------------------"
    ssh $CLUSTER_USER@$HDFS_HOST "source /etc/profile;start-dfs.sh"

    echo "---------------------------- Starting YARN ---------------------------"
    ssh $CLUSTER_USER@$YARN_HOST "source /etc/profile;start-yarn.sh"
    # Start the standby ResourceManager for YARN HA.
    ssh $CLUSTER_USER@$YARN_BAK_HOST "source /etc/profile;yarn-daemon.sh start resourcemanager"

    echo "---------------------- Starting JobHistoryServer ---------------------"
    ssh $CLUSTER_USER@$JOB_HOST "source /etc/profile;mr-jobhistory-daemon.sh start historyserver"

    echo "---------------------------- Starting HBase --------------------------"
    ssh $CLUSTER_USER@$HBASE_HOST "source /etc/profile;start-hbase.sh"

    echo "----------------------- Starting HiveMetaStore -----------------------"
    echo "Start HiveMetaStore On Host [$HMETA_HOST]..."
    ssh $CLUSTER_USER@$HMETA_HOST "source /etc/profile;nohup hive --service metastore >> /var/hivelog.log 2>&1 &"

    echo "------------------------ Starting HiveServer2 ------------------------"
    echo "Start HiveServer2 On Host [$HSERVER_HOST]..."
    ssh $CLUSTER_USER@$HSERVER_HOST "source /etc/profile;nohup hiveserver2 >> /var/hivelog.log 2>&1 &"

    echo -e "\n------------------------- Cluster started --------------------------\n"
    ;;
stop)
    # Stop in the reverse order of startup.
    echo "----------------------- Stopping HiveMetaStore -----------------------"
    echo "Stop HiveMetaStore On Host [$HMETA_HOST]..."
    ssh $CLUSTER_USER@$HMETA_HOST "pkill -f hive.metastore.HiveMetaStore"

    echo "------------------------ Stopping HiveServer2 ------------------------"
    echo "Stop HiveServer2 On Host [$HSERVER_HOST]..."
    ssh $CLUSTER_USER@$HSERVER_HOST "pkill -f hive.service.server.HiveServer2"

    echo "---------------------------- Stopping HBase --------------------------"
    ssh $CLUSTER_USER@$HBASE_HOST "source /etc/profile;stop-hbase.sh"

    echo "---------------------- Stopping JobHistoryServer ---------------------"
    ssh $CLUSTER_USER@$JOB_HOST "source /etc/profile;mr-jobhistory-daemon.sh stop historyserver"

    echo "---------------------------- Stopping YARN ---------------------------"
    ssh $CLUSTER_USER@$YARN_HOST "source /etc/profile;stop-yarn.sh"
    ssh $CLUSTER_USER@$YARN_BAK_HOST "source /etc/profile;yarn-daemon.sh stop resourcemanager"

    echo "---------------------------- Stopping HDFS ---------------------------"
    ssh $CLUSTER_USER@$HDFS_HOST "source /etc/profile;stop-dfs.sh"

    echo "------------------------- Stopping Zookeeper -------------------------"
    for zk_host in "${ZK_HOST[@]}"; do
        echo -e "\nStop Zk_Server On Host [$zk_host]..."
        ssh $CLUSTER_USER@$zk_host "source /etc/profile;zkServer.sh stop"
    done

    echo -e "\n------------------------- Cluster stopped -------------------------\n"
    ;;
*)
    echo -e "\n\tUsage: $0 {start|stop}\n"
    exit 1
    ;;
esac

exit 0
```
Usage
Copy the script contents into a file named cluster.sh,
upload it to any node in the cluster (the management node is recommended),
and make it executable with # chmod +x cluster.sh.
- Start the cluster
# ./cluster.sh start
- Stop the cluster
# ./cluster.sh stop
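After starting, it is worth confirming that the expected daemons actually came up on each host. One way is to run jps over SSH; the sketch below assumes a JDK (and thus jps) on the remote PATH after sourcing /etc/profile, and `check_host` plus the daemon names in the example are illustrative, not part of the script:

```shell
#!/bin/bash
# Sketch: list Java daemons on a host via jps and keep only the ones expected there.
check_host() {
    local host=$1 expected=$2
    ssh -o ConnectTimeout=5 "root@$host" "source /etc/profile; jps" 2>/dev/null \
        | grep -E "$expected"
}

# Example (hypothetical layout matching the script's defaults):
# check_host master1 'NameNode|ResourceManager|HMaster|QuorumPeerMain'
```

An empty result for a daemon you expected means that component failed to start; check its log on that host.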