This article shows how to build an HA NFS share backed by a Ceph RBD image, with Corosync and Pacemaker providing high availability. It walks through the environment setup, the Corosync and Pacemaker configuration, and a failover test.
1. Architecture
Two NFS servers sit in front of the same Ceph cluster. Either node can map the RBD image, mount it, and export it over NFS; Pacemaker (running on top of Corosync) ensures that only one node holds the RBD mapping, the filesystem, the NFS export, and the floating VIP at any time, and moves the whole stack to the surviving node on failure. Clients always mount through the VIP.
2. Environment Preparation
2.1 IP plan
Two NFS server hosts with RBD support: 10.20.18.97 and 10.20.18.111
VIP: 10.20.18.123, on the same subnet as the node IPs
2.2 Software installation
# yum install pacemaker corosync cluster-glue resource-agents
# rpm -ivh crmsh-2.1-1.6.x86_64.rpm --nodeps
2.3 SSH mutual trust (skipped in the original; a sketch follows)
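Since the original article omits this step, here is a minimal sketch, assuming password-less root SSH between the two hosts is acceptable in your environment:

# ssh-keygen -t rsa                  # run on each node, accept the defaults
# ssh-copy-id root@SZB-L0005469      # run on SZB-L0005908
# ssh-copy-id root@SZB-L0005908      # run on SZB-L0005469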
2.4 NTP configuration (skipped in the original; a sketch follows)
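Also omitted in the original. A minimal sketch, assuming the public NTP pool is reachable (substitute your own NTP server if you have one):

# yum install ntp
# ntpdate 0.pool.ntp.org             # one-off sync before starting the daemon
# service ntpd start && chkconfig ntpd on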
2.5 Configure /etc/hosts (on both nodes)
# vi /etc/hosts
10.20.18.97  SZB-L0005908
10.20.18.111 SZB-L0005469
3. Corosync Configuration (on both nodes)
3.1 Configure corosync
# mv /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
# vi /etc/corosync/corosync.conf

# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
    version: 2
    secauth: off
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: 10.20.18.111
        mcastaddr: 226.94.1.1
        mcastport: 5405
        ttl: 1
    }
}

logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    to_syslog: yes
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}

amf {
    mode: disabled
}

service {
    ver: 0
    name: pacemaker
}

aisexec {
    user: root
    group: root
}
bindnetaddr: the node's IP address (an address on the network corosync should bind to; adjust it on each node).
mcastaddr: any valid multicast address will do.
3.2 Start corosync
# service corosync start
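Optionally, confirm the ring is healthy before going further (a standard corosync check, not part of the original article):

# corosync-cfgtool -s      # "ring 0 active with no faults" means the ring is up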
3.3 Cluster properties (with only two nodes, quorum has to be ignored)
# crm configure property stonith-enabled=false
# crm configure property no-quorum-policy=ignore
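To confirm the properties took effect (optional check, not in the original):

# crm configure show | grep -E 'stonith-enabled|no-quorum-policy'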
3.4 Check node status (both nodes should show as Online)
# crm_mon -1
Last updated: Fri May 22 15:56:37 2015
Last change: Fri May 22 13:09:33 2015 via crmd on SZB-L0005469
Stack: classic openais (with plugin)
Current DC: SZB-L0005908 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
0 Resources configured

Online: [ SZB-L0005469 SZB-L0005908 ]
4. Pacemaker Resource Configuration
Note: Pacemaker manages the cluster resources. To serve NFS from RBD, this walkthrough puts the rbd map, the filesystem mount, the NFS export, and the VIP under Pacemaker's control, so the whole RBD-to-NFS chain is brought up (and failed over) automatically.
4.1 Format the RBD image
(The image used in this walkthrough is share/share2.) This only needs to be done once, on a single node.
# rados mkpool share
# rbd create share/share2 --size 1024
# rbd map share/share2
# rbd showmapped
# mkfs.xfs /dev/rbd1
# rbd unmap share/share2
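The /dev/rbd1 device name comes from the rbd showmapped output and may differ on your system; format whatever device showmapped reports. To double-check the pool and image (optional):

# rbd ls share
# rbd info share/share2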
4.2 Pacemaker resource configuration
4.2.1 Prepare the rbd.in script
(Copy the src/ocf/rbd.in script from the Ceph source tree into the directory below; do this on all nodes.)
# mkdir /usr/lib/ocf/resource.d/ceph
# cd /usr/lib/ocf/resource.d/ceph/
# chmod +x rbd.in
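To check that Pacemaker can see the new resource agent (optional; crm ra info is a standard crmsh command):

# crm ra info ocf:ceph:rbd.in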
Note: the configuration below is done on a single node only.
4.2.2 Configure rbd map
(You can run crm configure edit and paste the following content directly.)
primitive p_rbd_map_1 ocf:ceph:rbd.in \
    params user=admin pool=share name=share2 cephconf="/etc/ceph/ceph.conf" \
    op monitor interval=10s timeout=20s
4.2.3 Mount the filesystem
primitive p_fs_rbd_1 Filesystem \
    params directory="/mnt/share2" fstype=xfs device="/dev/rbd/share/share2" fast_stop=no \
    op monitor interval=20s timeout=40s \
    op start interval=0 timeout=60s \
    op stop interval=0 timeout=60s
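The Filesystem agent expects the mount point to exist. Assuming /mnt/share2 has not been created yet, create it on both nodes:

# mkdir -p /mnt/share2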
4.2.4 nfs-export
primitive p_export_rbd_1 exportfs \
    params directory="/mnt/share2" clientspec="10.20.0.0/24" options="rw,async,no_subtree_check,no_root_squash" fsid=1 \
    op monitor interval=10s timeout=20s \
    op start interval=0 timeout=40s
4.2.5 VIP configuration
primitive p_vip_1 IPaddr \
    params ip=10.20.18.123 cidr_netmask=24 \
    op monitor interval=5
4.2.6 NFS service configuration
primitive p_rpcbind lsb:rpcbind \
    op monitor interval=10s timeout=30s
primitive p_nfs_server lsb:nfs \
    op monitor interval=10s timeout=30s
4.3 Resource group configuration
group g_nfs p_rpcbind p_nfs_server
group g_rbd_share_1 p_rbd_map_1 p_fs_rbd_1 p_export_rbd_1 p_vip_1
clone clo_nfs g_nfs \
    meta globally-unique="false" target-role="Started"
4.4 Resource location constraint
location l_g_rbd_share_1 g_rbd_share_1 inf: SZB-L0005469
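The location constraint only expresses a preference for SZB-L0005469. If you ever need to move the share to the other node by hand (for maintenance, say), crmsh can do it with a temporary constraint; this is an optional extra, not part of the original article:

# crm resource migrate g_rbd_share_1 SZB-L0005908
# crm resource unmigrate g_rbd_share_1     # remove the temporary constraint when done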
4.5 Review the full configuration (optional)
# crm configure edit
node SZB-L0005469
node SZB-L0005908
primitive p_export_rbd_1 exportfs \
    params directory="/mnt/share2" clientspec="10.20.0.0/24" options="rw,async,no_subtree_check,no_root_squash" fsid=1 \
    op monitor interval=10s timeout=20s \
    op start interval=0 timeout=40s
primitive p_fs_rbd_1 Filesystem \
    params directory="/mnt/share2" fstype=xfs device="/dev/rbd/share/share2" fast_stop=no \
    op monitor interval=20s timeout=40s \
    op start interval=0 timeout=60s \
    op stop interval=0 timeout=60s
primitive p_nfs_server lsb:nfs \
    op monitor interval=10s timeout=30s
primitive p_rbd_map_1 ocf:ceph:rbd.in \
    params user=admin pool=share name=share2 cephconf="/etc/ceph/ceph.conf" \
    op monitor interval=10s timeout=20s
primitive p_rpcbind lsb:rpcbind \
    op monitor interval=10s timeout=30s
primitive p_vip_1 IPaddr \
    params ip=10.20.18.123 cidr_netmask=24 \
    op monitor interval=5
group g_nfs p_rpcbind p_nfs_server
group g_rbd_share_1 p_rbd_map_1 p_fs_rbd_1 p_export_rbd_1 p_vip_1
clone clo_nfs g_nfs \
    meta globally-unique=false target-role=Started
location l_g_rbd_share_1 g_rbd_share_1 inf: SZB-L0005469
property cib-bootstrap-options: \
    dc-version=1.1.10-14.el6-368c726 \
    cluster-infrastructure="classic openais (with plugin)" \
    symmetric-cluster=true \
    stonith-enabled=false \
    no-quorum-policy=ignore \
    expected-quorum-votes=2
rsc_defaults rsc_defaults-options: \
    resource-stickiness=0 \
    migration-threshold=1
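After editing, the configuration can be sanity-checked before relying on it (optional crmsh step, not in the original):

# crm configure verify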
4.6 Restart the corosync service (on both nodes)
# service corosync restart
# crm_mon -1
Last updated: Fri May 22 16:55:14 2015
Last change: Fri May 22 16:52:04 2015 via crmd on SZB-L0005469
Stack: classic openais (with plugin)
Current DC: SZB-L0005908 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
8 Resources configured

Online: [ SZB-L0005469 SZB-L0005908 ]

 Resource Group: g_rbd_share_1
     p_rbd_map_1    (ocf::ceph:rbd.in):          Started SZB-L0005469
     p_fs_rbd_1     (ocf::heartbeat:Filesystem): Started SZB-L0005469
     p_export_rbd_1 (ocf::heartbeat:exportfs):   Started SZB-L0005469
     p_vip_1        (ocf::heartbeat:IPaddr):     Started SZB-L0005469
 Clone Set: clo_nfs [g_nfs]
     Started: [ SZB-L0005469 SZB-L0005908 ]
5. Testing
5.1 Check the export list (via the VIP)
# showmount -e 10.20.18.123
Export list for 10.20.18.123:
/mnt/share2 10.20.0.0/24
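To go one step further than showmount, mount the export from a client inside 10.20.0.0/24 (the /mnt/nfs_test mount point here is just an illustration, not from the original):

# mkdir -p /mnt/nfs_test
# mount -t nfs 10.20.18.123:/mnt/share2 /mnt/nfs_test
# touch /mnt/nfs_test/hello && ls /mnt/nfs_test
# umount /mnt/nfs_test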
5.2 Failover test
# service corosync stop      (run on SZB-L0005469)
# crm_mon -1                 (run on SZB-L0005908)
Last updated: Fri May 22 17:14:31 2015
Last change: Fri May 22 16:52:04 2015 via crmd on SZB-L0005469
Stack: classic openais (with plugin)
Current DC: SZB-L0005908 - partition WITHOUT quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
8 Resources configured

Online: [ SZB-L0005908 ]
OFFLINE: [ SZB-L0005469 ]

 Resource Group: g_rbd_share_1
     p_rbd_map_1    (ocf::ceph:rbd.in):          Started SZB-L0005908
     p_fs_rbd_1     (ocf::heartbeat:Filesystem): Started SZB-L0005908
     p_export_rbd_1 (ocf::heartbeat:exportfs):   Started SZB-L0005908
     p_vip_1        (ocf::heartbeat:IPaddr):     Started SZB-L0005908
 Clone Set: clo_nfs [g_nfs]
     Started: [ SZB-L0005908 ]
     Stopped: [ SZB-L0005469 ]
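To bring the failed node back after the test (a hedged sketch; with resource-stickiness=0 and the location constraint, the g_rbd_share_1 group should move back to SZB-L0005469 once it rejoins):

# service corosync start     (run on SZB-L0005469)
# crm_mon -1                 (confirm both nodes are Online and the group has settled)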