The topology is as follows:
 __________        __________
|          |      |          |
|   HA1    |______|   HA2    |
|__________|      |__________|
This is to make sure we haven't added extra keys that you weren't expecting.
Do the same on HA2; I won't demonstrate it in detail!
3. Configure the yum repository; I used the mirror provided by 163:
http://mirrors.163.com/
The site carries instructions for configuring the Fedora yum repositories, so I won't elaborate here.
If you don't have DNS to resolve these domain names, you also have to add the mappings to /etc/hosts by hand. Mine are as follows:
66.35.62.166 mirrors.fedoraproject.org
213.129.242.84 mirrors.rpmfusion.org
123.58.173.106 mirrors.163.com
These domain-to-IP mappings are easy to find out; once the entries are in place, the hostnames resolve, which you can confirm with ping.
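A quick check (the addresses are the ones listed above and may of course change over time):
# ping -c 1 mirrors.163.com    // should resolve to 123.58.173.106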
4. Install the cluster software
This must be done on both nodes:
# yum install corosync pacemaker -y    // this pulls from a network mirror, so it can be slow; be patient!
After installation comes configuration. Note that the port and multicast address you choose must not conflict with any existing cluster, so I set a couple of simple values:
#export ais_port=4000
#export ais_mcast=226.94.1.1
Next, configure corosync:
#cd /etc/corosync/
#cp corosync.conf.example corosync.conf
# vim !$    // change the configuration to the following
# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
        version: 2
        secauth: on
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.1.0    # network address of the cluster's subnet
                mcastaddr: 226.94.1.1       # multicast address
                mcastport: 4000             # port number
                ttl: 1
        }
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: no
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}

#### The following sections are newly added
service {
        # ver selects how pacemaker is launched: use 1 on Fedora,
        # while 0 also works on Red Hat
        ver: 1
        name: pacemaker
}

aisexec {
        user: root
        group: root
}
The comments above mark the values that were modified or added.
Once configured, copy the file to the other node:
#scp -p /etc/corosync/corosync.conf node2:/etc/corosync/
Because secauth is enabled, add the authentication key before starting. On a brand-new machine this can take a while, since key generation waits for entropy; be patient!
# corosync-keygen
Copy the key together with the configuration to node2:
# scp -p authkeys corosync.conf node2:/etc/corosync/
Having made sure there are no errors, corosync can now be started on HA1, followed by a series of checks:
# service corosync start
Starting corosync (via systemctl): [ OK ]
OK, the corosync service started successfully!
Next, verify that the cluster started correctly and can establish membership with other nodes.
Check whether the corosync engine started properly:
[root@node1 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/messages
Sep 18 23:09:44 node1 smartd[786]: Opened configuration file /etc/smartd.conf
Sep 19 13:41:03 node1 smartd[801]: Opened configuration file /etc/smartd.conf
Sep 19 20:44:55 node1 smartd[680]: Opened configuration file /etc/smartd.conf
[root@node1 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Sep 18 17:12:06 corosync [MAIN ] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.
Sep 18 17:12:06 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Sep 18 17:12:06 corosync [MAIN ] Corosync Cluster Engine exiting with status 8 at main.c:1702.
Sep 18 17:16:11 corosync [MAIN ] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.
Check that the initial membership notifications went out correctly:
[root@node1 ~]# grep TOTEM /var/log/cluster/corosync.log
Check whether any errors occurred during startup:
[root@node2 ~]# grep ERROR: /var/log/cluster/corosync.log | grep -v unpack_resources
Check whether pacemaker started properly:
[root@node1 ~]# grep pcmk_startup /var/log/cluster/corosync.log
Sep 19 13:48:48 corosync [pcmk ] info: pcmk_startup: CRM: Initialized
Sep 19 13:48:48 corosync [pcmk ] Logging: Initialized pcmk_startup
Sep 19 13:48:48 corosync [pcmk ] info: pcmk_startup: Maximum core file size is: 4294967295
Sep 19 13:48:48 corosync [pcmk ] info: pcmk_startup: Service: 9
Sep 19 13:48:48 corosync [pcmk ] info: pcmk_startup: Local hostname: node1.luo
With the checks done, the other node can be started. It is best to start all the remaining cluster nodes from the same node:
[root@node1 ~]# ssh node2 -- '/etc/init.d/corosync start'
Starting corosync (via systemctl): [ OK ]
It started successfully!
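The ring status can be confirmed at this point as well; corosync-cfgtool ships with corosync:
# corosync-cfgtool -s    // should show the local ring 0 address and "no faults"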
Next up is starting pacemaker!
[root@node1 corosync]# /etc/init.d/pacemaker start
Starting pacemaker (via systemctl): [ OK ]
OK, it started successfully too.
# ps axf    // inspect the processes
1724 ? R 5:59 /usr/lib/heartbeat/stonithd
1725 ? R 5:59 /usr/lib/heartbeat/cib
1726 ? S 0:00 /usr/lib/heartbeat/lrmd
1727 ? R 5:59 /usr/lib/heartbeat/attrd
1728 ? S 0:00 /usr/lib/heartbeat/pengine
1729 ? R 5:59 /usr/lib/heartbeat/crmd
You can see that the pacemaker daemons are now running.
One crucial setting at this point: turn off the firewall. If you leave it enabled, it will cause you a lot of grief below; I didn't turn it off at first and only discovered the problem from the logs. So disable the firewall while you set things up, but in a real deployment you should turn it back on and open the ports the cluster needs.
# setup    // select "Firewall configuration", then choose Disabled
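The same can be done non-interactively (a sketch, assuming the stock iptables service on Fedora 15):
# systemctl stop iptables.service
# systemctl disable iptables.service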
Next, inspect the cluster with crm's built-in commands:
# crm_mon    // or: crm status
Online: [ node2.luowei.com node1.luowei.com ]
As you can see, both cluster nodes are up.
With everything in place, next comes the dual-primary cluster configuration!
5. Install the Apache service and the cluster filesystem GFS2
To make verification easy, I install an Apache service for testing:
#yum install httpd -y
Add a test page on HA1:
# echo "<h1>node1.luowei.com</h1>" > /var/www/html/index.html
Add a test page on HA2:
# echo "<h1>node2.luowei.com</h1>" > /var/www/html/index.html
Then make sure the following block is enabled in /etc/httpd/conf/httpd.conf on both nodes (remove the comment markers if it is commented out); the cluster's monitor operation checks Apache through this server-status page:
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
Make sure the httpd service does not start automatically at boot (the cluster will manage it):
# chkconfig httpd off
# crm configure property stonith-enabled=false    // disable STONITH devices
# crm configure property no-quorum-policy=ignore    // with only two nodes, ignore loss of quorum
Add the resources for httpd:
# crm configure primitive WebSite ocf:heartbeat:apache params configfile=/etc/httpd/conf/httpd.conf op monitor interval=1min
# crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 params ip=192.168.1.110 cidr_netmask=32 op monitor interval=30s    // add a virtual IP
[root@node1 ~]# crm status
============
Last updated: Mon Sep 19 23:44:05 2011
Stack: openais
Current DC: node2.luowei.com - partition with quorum
Version: 1.1.5-1.fc15-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Online: [ node2.luowei.com node1.luowei.com ]
ClusterIP (ocf::heartbeat:IPaddr2): Started node1.luowei.com
WebSite (ocf::heartbeat:apache): Started node2.luowei.com
You can see the two resources are not on the same node, so the following setting is needed:
# crm configure colocation website-with-ip INFINITY: WebSite ClusterIP    // a colocation constraint that keeps the two resources together
Then crm status shows that the resources have all moved to the same node, as below:
Online: [ node2.luowei.com node1.luowei.com ]
ClusterIP (ocf::heartbeat:IPaddr2): Started node1.luowei.com
WebSite (ocf::heartbeat:apache): Started node1.luowei.com
The start/stop order of the resources must also be controlled:
# crm configure order apache-after-ip mandatory: ClusterIP WebSite    // the IP resource must start before the Apache service
Specify a preferred location:
#crm configure location prefer-pcmk-l WebSite 50: node1.luowei.com
Review the resulting configuration:
[root@node1 ~]# crm configure show
node node1.luowei.com
node node2.luowei.com
primitive ClusterIP ocf:heartbeat:IPaddr2 \
params ip="192.168.1.110" cidr_netmask="32" \
op monitor interval="30s"
primitive WebSite ocf:heartbeat:apache \
params configfile="/etc/httpd/conf/httpd.conf" \
op monitor interval="1min"
location prefer-pcmk-l WebSite 50: node1.luowei.com
colocation website-with-ip inf: WebSite ClusterIP
order apache-after-ip inf: ClusterIP WebSite
property $id="cib-bootstrap-options" \
dc-version="1.1.5-1.fc15-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
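Before moving on, the configuration can also be sanity-checked; crm_verify ships with pacemaker and validates the CIB (-L checks the live cluster):
# crm_verify -L    // silent output means the configuration is valid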
As the output above shows, the resources are up, so we can continue. Enter http://192.168.1.110 in a browser and the web service is reachable!
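You can also test from the command line; the response should be the test page of whichever node currently holds the resources:
# curl http://192.168.1.110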
6. Install the DRBD packages
DRBD synchronizes data between the nodes, providing replication for backup.
1. # yum install drbd-pacemaker drbd-udev -y
2. After installing DRBD, first set aside a dedicated disk partition on each node to hold the data. Here I use a new disk (/dev/sdb) for the experiment and partition it as shown below:
# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xcaf34d49.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): p
Disk /dev/sdb: 524 MB, 524288000 bytes
255 heads, 63 sectors/track, 63 cylinders, total 1024000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xcaf34d49
Device Boot Start End Blocks Id System
The partition table is empty, so create a new partition /dev/sdb1 and write the table before quitting; a keystroke sketch follows below.
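A minimal sketch of the fdisk keystrokes that create the partition (assuming the whole disk becomes a single primary partition, accepting the defaults with Enter):
Command (m for help): n        // new partition
Select (default p): p          // primary
Partition number (1-4): 1
First sector: <Enter>          // accept the default
Last sector: <Enter>           // accept the default, use the whole disk
Command (m for help): w        // write the partition table and exit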
# partprobe /dev/sdb    // make the kernel re-read the partition table
# pvcreate /dev/sdb1    // create an LVM physical volume
# vgcreate VolGroupb /dev/sdb1    // create a volume group on it
# lvcreate -n drbd-demo -L 1G VolGroupb    // carve out a 1 GB logical volume
[root@node1 ~]# lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
lv_root VolGroup -wi-ao 17.56g
lv_swap VolGroup -wi-ao 1.94g
drbd-demo VolGroupb -wi-a- 1.00g
OK! The logical volume on HA1 is done. Go through the same process on HA2; I won't show it again!
3. With the preparation complete, it is time to configure DRBD!
# vim /etc/drbd.conf
include "drbd.d/global_common.conf";
include "drbd.d/*.res";

global {
        usage-count yes;
}
common {
        protocol C;
}
resource wwwdata {
        meta-disk internal;
        device /dev/drbd1;
        syncer {
                verify-alg sha1;
        }
        net {
                allow-two-primaries;    # required later for the dual-primary setup
        }
        on node1.luowei.com {
                disk /dev/mapper/VolGroupb-drbd--demo;
                address 192.168.1.78:7789;     # the HA1 node
        }
        on node2.luowei.com {
                disk /dev/mapper/VolGroupb-drbd--demo;
                address 192.168.1.151:7789;    # the HA2 node
        }
}
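A quick way to check the file for syntax errors; drbdadm dump parses the configuration and prints the parsed result back (or complains):
# drbdadm dump wwwdata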
4. Next, initialize and load DRBD:
# drbdadm create-md wwwdata
New drbd meta data block successfully created.
Initialization succeeded!
5. Next, load the DRBD module into the kernel, bring the resource up, and check that everything is normal:
[root@node1 ~]# modprobe drbd
[root@node1 ~]# drbdadm up wwwdata
[root@node1 ~]# cat /proc/drbd
version: 8.3.9 (api:88/proto:86-95)
srcversion: CF228D42875CF3A43F2945A
1: cs:WFConnection ro:Secondary/Unknown ds:Inconsistent/DUnknown C r----s
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1048508
You can see the node now reports Secondary. Do the same module loading and checking on the second node; the steps are omitted in the original but sketched below for reference.
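For completeness, the omitted commands on HA2 mirror those run on HA1 (a sketch, using the node names from this article):
[root@node2 ~]# drbdadm create-md wwwdata
[root@node2 ~]# modprobe drbd
[root@node2 ~]# drbdadm up wwwdata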
6. Then check from either node: both sides are now Secondary, so everything is normal.
[root@node1 ~]# drbd-overview
1:wwwdata Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
7. Now make HA1 the primary node:
[root@node1 ~]# drbdadm -- --overwrite-data-of-peer primary wwwdata
Then the following command lets you watch in real time as data is copied from the primary to the standby node:
[root@node1 ~]# watch -n 1 'drbd-overview'
1:wwwdata SyncSource Primary/Secondary UpToDate/Inconsistent C r-----
[==>.................] sync'ed: 0.8% (1042492/1048508)K
1:wwwdata Connected Primary/Secondary UpToDate/UpToDate C r-----
The data synchronization is complete. HA1 is now in the Primary state and allows writes, so you can create a filesystem on it and put some data in.
8. Add data to DRBD:
[root@node1 ~]# mkfs.ext4 /dev/drbd1    // create the filesystem
[root@node1 ~]# mount /dev/drbd1 /mnt/    // mount the partition
[root@node1 ~]# echo "<h2>drbd test page</h2>" >/mnt/index.html
[root@node1 ~]# umount /mnt/    // unmount the partition
9. Configure DRBD in the cluster:
[root@node1 ~]# crm
crm(live)# cib new drbd
crm(drbd)# configure
crm(drbd)configure# primitive WebData ocf:linbit:drbd params drbd_resource=wwwdata op monitor interval=60s
crm(drbd)configure# ms WebDataClone WebData meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
crm(drbd)configure# commit
Since these changes were made in the shadow CIB created with "cib new drbd", they reach the live cluster once the shadow is committed (cib commit drbd).
[root@node1 ~]# crm status
============
Last updated: Tue Sep 20 22:08:10 2011
Stack: openais
Current DC: node1.luowei.com - partition with quorum
Version: 1.1.5-1.fc15-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
3 Resources configured.
============
Online: [ node2.luowei.com node1.luowei.com ]
ClusterIP (ocf::heartbeat:IPaddr2): Started node1.luowei.com
WebSite (ocf::heartbeat:apache): Started node1.luowei.com
Master/Slave Set: WebDataClone [WebData]
Masters: [ node2.luowei.com ]
Slaves: [ node1.luowei.com ]
From the output above, the resources started normally, but notice that the DRBD master ended up on HA2. To bring everything together on one node, further resource constraints are needed:
[root@node1 ~]# crm
crm(live)# configure
crm(live)configure# primitive WebFS ocf:heartbeat:Filesystem params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype="ext4"
crm(live)configure# colocation fs_ondrbd inf: WebFS WebDataClone:Master
crm(live)configure# order WebFS-after-WebData inf: WebDataClone:promote WebFS:start
crm(live)configure# colocation WebSite-with-WebFS inf: WebSite WebFS
crm(live)configure# order WebSite-after-WebFS inf: WebFS WebSite
crm(live)configure# commit
Check again; it now shows the following:
[root@node1 ~]# crm status
============
Last updated: Tue Sep 20 22:38:16 2011
Stack: openais
Current DC: node1.luowei.com - partition with quorum
Version: 1.1.5-1.fc15-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
4 Resources configured.
============
Online: [ node2.luowei.com node1.luowei.com ]
ClusterIP (ocf::heartbeat:IPaddr2): Started node2.luowei.com
Master/Slave Set: WebDataClone [WebData]
Masters: [ node2.luowei.com ]
Slaves: [ node1.luowei.com ]
WebFS (ocf::heartbeat:Filesystem): Started node2.luowei.com
As you can see, the resources are all on the same node.
7. Next, building on the above, configure the dual-primary (Active/Active) cluster:
1. Install the cluster filesystem packages
# yum install gfs2-utils gfs2-cluster gfs-pcmk    // install these on both nodes
2. Add the DLM service
[root@node1 ~]# crm
crm(live)# configure
crm(live)configure# primitive dlm ocf:pacemaker:controld op monitor interval=120s
crm(live)configure# clone dlm-clone dlm meta interleave=true
crm(live)configure# commit
3. Create the gfs-control cluster resource:
[root@node1 ~]# crm
crm(live)# configure
crm(live)configure# primitive gfs-control ocf:pacemaker:controld params daemon=gfs_controld.pcmk args="-g 0" op monitor interval=120s
crm(live)configure# clone gfs-clone gfs-control meta interleave=true
crm(live)configure# colocation gfs-with-dlm INFINITY: gfs-clone dlm-clone
crm(live)configure# order start-gfs-after-dlm mandatory: dlm-clone gfs-clone
crm(live)configure# commit
Then review our configuration, which now looks like this:
#crm configure show
node node1.luowei.com
node node2.luowei.com
primitive ClusterIP ocf:heartbeat:IPaddr2 \
params ip="192.168.1.110" cidr_netmask="32" \
op monitor interval="30s"
primitive WebData ocf:linbit:drbd \
params drbd_resource="wwwdata" \
op monitor interval="60s"
primitive WebFS ocf:heartbeat:Filesystem \
params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype="ext4"
primitive WebSite ocf:heartbeat:apache \
params configfile="/etc/httpd/conf/httpd.conf" \
op monitor interval="1min"
primitive dlm ocf:pacemaker:controld \
op monitor interval="120s"
primitive gfs-control ocf:pacemaker:controld \
params daemon="gfs_controld.pcmk" args="-g 0" \
op monitor interval="120s"
ms WebDataClone WebData \
meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
clone dlm-clone dlm \
meta interleave="true"
clone gfs-clone gfs-control \
meta interleave="true"
location prefer-pcmk-l WebSite 50: node1.luowei.com
colocation WebSite-with-WebFS inf: WebSite WebFS
colocation fs_ondrbd inf: WebFS WebDataClone:Master
colocation gfs-with-dlm inf: gfs-clone dlm-clone
colocation website-with-ip inf: WebSite ClusterIP
order WebFS-after-WebData inf: WebDataClone:promote WebFS:start
order WebSite-after-WebFS inf: WebFS WebSite
order apache-after-ip inf: ClusterIP WebSite
order start-gfs-after-dlm inf: dlm-clone gfs-clone
property $id="cib-bootstrap-options" \
dc-version="1.1.5-1.fc15-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
Check the cluster's status output:
[root@node1 ~]# crm_mon
============
Last updated: Tue Sep 20 23:18:22 2011
Stack: openais
Current DC: node1.luowei.com - partition with quorum
Version: 1.1.5-1.fc15-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
6 Resources configured.
============
Online: [ node2.luowei.com node1.luowei.com ]
ClusterIP (ocf::heartbeat:IPaddr2): Started node1.luowei.com
Master/Slave Set: WebDataClone [WebData]
Masters: [ node2.luowei.com ]
Slaves: [ node1.luowei.com ]
WebSite (ocf::heartbeat:apache): Started node1.luowei.com
Clone Set: dlm-clone
Started: [ node2.luowei.com node1.luowei.com ]
Clone Set: gfs-clone
Started: [ node2.luowei.com node1.luowei.com ]
WebFS (ocf::heartbeat:Filesystem): Started node1.luowei.com
4. Create the GFS2 filesystem
Before reformatting the device, stop the WebFS resource (Apache stops with it, since it depends on WebFS):
[root@node1 ~]# crm_resource --resource WebFS --set-parameter target-role --meta --parameter-value Stopped
At this point crm status shows that both the apache and WebFS resources have stopped.
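A shorter equivalent with the crm shell, which sets the same target-role meta attribute:
# crm resource stop WebFS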
5. Create the GFS2 filesystem and migrate the data to it
Run the following command on one of the nodes (the transcript below is from node2). Here -p lock_dlm selects the DLM lock manager, -j 2 creates one journal per node, and -t pcmk:web is clustername:fsname, where the cluster name must match the pacemaker cluster's:
[root@node2 ~]# mkfs.gfs2 -p lock_dlm -j 2 -t pcmk:web /dev/drbd1
This will destroy any data on /dev/drbd1.
It appears to contain: Linux rev 1.0 ext4 filesystem data, UUID=19976683-c802-
479c-854d-e786617be523 (extents) (large files) (huge files)
Are you sure you want to proceed? [y/n] y
6. Then migrate the data to this new filesystem and reconfigure the cluster for GFS2. Since a WebFS primitive already exists in the CIB, delete the old definition first (crm configure delete WebFS); crm will not accept a duplicate resource ID:
[root@node1 ~]# crm
crm(live)# configure
crm(live)configure# primitive WebFS ocf:heartbeat:Filesystem params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype="gfs2"
crm(live)configure# colocation WebSite-with-WebFS inf: WebSite WebFS
crm(live)configure# colocation fs_on_drbd inf: WebFS WebDataClone:Master
crm(live)configure# order WebFS-after-WebData inf: WebDataClone:promote WebFS:start
crm(live)configure# order WebSite-after-WebFS inf: WebFS WebSite
crm(live)configure# colocation WebFS-with-gfs-control INFINITY: WebFS gfs-clone
crm(live)configure# order start-WebFS-after-gfs-control mandatory: gfs-clone WebFS
crm(live)configure# commit
7. Reconfigure pacemaker for Active/Active
[root@node1 ~]# crm
crm(live)# configure clone WebIP ClusterIP meta globally-unique="true" clone-max="2" clone-node-max="2"
Then update the ClusterIP parameters (ClusterIP already exists, so change its definition, e.g. via crm configure edit ClusterIP) so the cloned IP can distribute requests by source address:
crm(live)# configure primitive ClusterIP ocf:heartbeat:IPaddr2 params ip="192.168.1.110" cidr_netmask="32" clusterip_hash="sourceip" op monitor interval="30s"    // set the ClusterIP parameters
crm(live)# configure clone WebFSClone WebFS
crm(live)# configure clone WebSiteClone WebSite
At the same time, change master-max to 2 in the CIB so that both nodes can be promoted to DRBD primary.
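One way to make that change (a sketch, reusing the crm_resource meta-attribute syntax from step 4 above; WebDataClone is the master/slave resource defined earlier):
# crm_resource --resource WebDataClone --set-parameter master-max --meta --parameter-value 2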
The resource configuration is complete: the cluster has gone from master/slave to a master/master (dual-primary) architecture!
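As a final check, once the cluster has promoted both nodes, DRBD should report both sides as primary:
# drbd-overview    // expect Primary/Primary and UpToDate/UpToDate for the wwwdata resource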
Author: Gezidan
Copyright of this article is shared by the author and 博客园 (cnblogs). Reposting is welcome, but this statement must be retained without the author's consent, and a link to the original must be given in a prominent place on the page; otherwise the right to pursue legal liability is reserved.
This article was reposted from
http://roqi410.blog.51cto.com/2186161/669877