Lab requirements:
· Install GlusterFS on 4 machines to form a cluster
· The client stores a docker registry on the GlusterFS file system
· Do not pool the 4 nodes' disk space into one big volume; instead, every node must hold a full copy of the data, so the data stays safe
Environment plan
server
node1: 192.168.0.165  hostname: glusterfs1
node2: 192.168.0.157  hostname: glusterfs2
node3: 192.168.0.166  hostname: glusterfs3
node4: 192.168.0.150  hostname: glusterfs4
client
192.168.0.164  hostname: master3
Pre-lab preparation
· Disable the firewall and SELinux on all hosts (see the command sketch after this list)
· Edit the hosts file so that all hosts can resolve one another:
192.168.0.165 glusterfs1
192.168.0.157 glusterfs2
192.168.0.166 glusterfs3
192.168.0.150 glusterfs4
192.168.0.164 master3
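A minimal sketch of the preparation commands, assuming CentOS 7 with systemd (on CentOS 6 use service iptables stop and chkconfig iptables off instead); run these on every host:
# systemctl stop firewalld && systemctl disable firewalld
# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
The setenforce call switches SELinux to permissive for the running system; the sed edit makes the change stick across reboots.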
Installation
Server side
1. Install the GlusterFS packages on nodes glusterfs{1-4}
# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
# yum install -y glusterfs glusterfs-server glusterfs-fuse
If yum fails with a liburcu error, install userspace-rcu-0.7.9-1.el7.x86_64 first.
# service glusterd start
# chkconfig glusterd on
2. On the glusterfs1 node, configure the GlusterFS cluster by adding each node to it
[root@glusterfs1 ~]# gluster peer probe glusterfs1
peer probe: success: on localhost not needed
[root@glusterfs1 ~]# gluster peer probe glusterfs2
peer probe: success
[root@glusterfs1 ~]# gluster peer probe glusterfs3
peer probe: success
[root@glusterfs1 ~]# gluster peer probe glusterfs4
peer probe: success
3. Check the peer status
[root@glusterfs1 ~]# gluster peer status
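If the probes succeeded, the output on glusterfs1 should list the other three nodes as connected peers, roughly in this shape (the UUIDs differ on every install):
Number of Peers: 3

Hostname: glusterfs2
Uuid: <uuid>
State: Peer in Cluster (Connected)

Hostname: glusterfs3
Uuid: <uuid>
State: Peer in Cluster (Connected)

Hostname: glusterfs4
Uuid: <uuid>
State: Peer in Cluster (Connected)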
4. Create the data storage directory on glusterfs{1-4}
# mkdir -p /usr/local/share/models
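A one-liner sketch to create the brick directory on all four nodes from glusterfs1, assuming passwordless ssh between the nodes (otherwise just run the mkdir on each node by hand):
# for h in glusterfs1 glusterfs2 glusterfs3 glusterfs4; do ssh "$h" mkdir -p /usr/local/share/models; done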
5. Create the GlusterFS volume on glusterfs1
Note:
With replica 4, each of the 4 nodes stores a full copy of the data: every file exists 4 times, once per node.
Without replica 4, the disk space of the 4 nodes is pooled into one big distributed volume and every file is stored only once. The trailing force is needed because the bricks sit on the root partition, which gluster refuses by default.
[root@glusterfs1 ~]# gluster volume create models replica 4 glusterfs1:/usr/local/share/models glusterfs2:/usr/local/share/models glusterfs3:/usr/local/share/models glusterfs4:/usr/local/share/models force
volume create: models: success: please start the volume to access data
6. Start the volume
[root@glusterfs1 ~]# gluster volume start models
7. View the volume
[root@glusterfs1 ~]# gluster volume info

Volume Name: models
Type: Replicate
Volume ID: b81587ff-5dd6-49b9-b46b-afe5df38d8c7
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: glusterfs1:/usr/local/share/models
Brick2: glusterfs2:/usr/local/share/models
Brick3: glusterfs3:/usr/local/share/models
Brick4: glusterfs4:/usr/local/share/models
Options Reconfigured:
performance.readdir-ahead: on
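Before moving on to the client, replication can be sanity-checked directly on a server node; a minimal sketch (the /mnt/test mount point is an arbitrary choice for this check):
[root@glusterfs1 ~]# mkdir -p /mnt/test
[root@glusterfs1 ~]# mount -t glusterfs glusterfs1:models /mnt/test
[root@glusterfs1 ~]# echo hello > /mnt/test/hello.txt
[root@glusterfs1 ~]# ls /usr/local/share/models/
With replica 4, hello.txt should appear in /usr/local/share/models/ on all four nodes.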
Client side
1. Install the GlusterFS client and mount the GlusterFS file system
[root@master3 ~]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
[root@master3 ~]# yum install -y glusterfs glusterfs-fuse
[root@master3 ~]# mkdir -p /mnt/models
[root@master3 ~]# mount -t glusterfs -o ro glusterfs1:models /mnt/models/
# -o ro mounts the volume read-only
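To make the mount survive a reboot, an entry along these lines in /etc/fstab on master3 should work (a sketch; _netdev defers the mount until the network is up):
glusterfs1:models /mnt/models glusterfs defaults,ro,_netdev 0 0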
2. Check the result
[root@master3 ~]# df -h
Filesystem         Size  Used Avail Use% Mounted on
/dev/vda3          289G  5.6G  284G   2% /
devtmpfs           3.9G     0  3.9G   0% /dev
tmpfs              3.9G   80K  3.9G   1% /dev/shm
tmpfs              3.9G  169M  3.7G   5% /run
tmpfs              3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vda1         1014M  128M  887M  13% /boot
glusterfs1:models  189G  3.5G  186G   2% /mnt/models
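To tie this back to the docker registry requirement at the top of the lab, a hedged sketch: the registry needs write access, so remount without -o ro, then point the stock registry:2 image (which keeps its data under /var/lib/registry) at the mount:
[root@master3 ~]# umount /mnt/models
[root@master3 ~]# mount -t glusterfs glusterfs1:models /mnt/models/
[root@master3 ~]# docker run -d -p 5000:5000 --name registry -v /mnt/models:/var/lib/registry registry:2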
Other commands
Delete a GlusterFS volume
# gluster volume stop models      # stop it first
# gluster volume delete models    # then delete it
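Note that deleting the volume does not remove the data in the brick directories. To reuse a brick path for a new volume, clear its gluster metadata first; a sketch, run on every node:
# setfattr -x trusted.glusterfs.volume-id /usr/local/share/models
# setfattr -x trusted.gfid /usr/local/share/models
# rm -rf /usr/local/share/models/.glusterfs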
Detach a GlusterFS node
# gluster peer detach glusterfs4
A node that still hosts bricks of a volume cannot be detached; remove its bricks first (see "Migrate GlusterFS data" below).
ACL access control
# gluster volume set models auth.allow 10.60.1.*,10.70.1.*
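The option then shows up under Options Reconfigured in gluster volume info. To fall back to the default of accepting all clients, reset it (a sketch):
# gluster volume reset models auth.allow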
Add GlusterFS nodes (the sc2-log* names below are sample hosts, not part of this lab's cluster)
# gluster peer probe sc2-log5
# gluster peer probe sc2-log6
# gluster volume add-brick models sc2-log5:/data/gluster sc2-log6:/data/gluster
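After adding bricks to a distributed volume, rebalance it so existing files spread onto the new bricks (a sketch). For a pure replica volume like the one built in this lab, bricks must instead be added in multiples of the replica count, or the count raised with add-brick replica N:
# gluster volume rebalance models start
# gluster volume rebalance models status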
Migrate GlusterFS data (run commit only after status reports the migration as completed)
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models start
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models status
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit
Repair GlusterFS data (when node 1 is down)
# gluster volume replace-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit force
# gluster volume heal models full
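To watch the self-heal progress, heal info lists the entries still pending heal:
# gluster volume heal models info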