Building a Dual-Master (Active/Active) Cluster on Fedora 15
The topology is as follows:
             
            -----------     ----------
            |   HA1    |____|  HA2   |
            |__________|    |________|
            HA1:
            IP:192.168.1.78/24
            HA2:
            IP:192.168.1.151/24
            VIP:192.168.1.110

I. Configure network settings
            HA1:
#ifconfig eth0 192.168.1.78/24
            #route add default gw 192.168.1.1
            #hostname node1.luowei.com
            HA2:
            #ifconfig eth0 192.168.1.151/24
            #route add default gw 192.168.1.1
            #hostname node2.luowei.com
             
II. Configure hostnames and passwordless SSH between the two nodes
#vim /etc/hosts    and add the following entries:
            192.168.1.78 node1.luowei.com node1
            192.168.1.151 node2.luowei.com node2
Add the same entries on HA2.
If #ping node2 (and node1) resolves the names, you are all set.
Generate a key pair on each HA node, as shown below:
[root@node1 ~]# ssh-keygen -t rsa  //generate the public/private key pair
            Generating public/private rsa key pair.
            Enter file in which to save the key (/root/.ssh/id_rsa): 
            Created directory '/root/.ssh'.
            Enter passphrase (empty for no passphrase): 
            Enter same passphrase again: 
            Your identification has been saved in /root/.ssh/id_rsa.
            Your public key has been saved in /root/.ssh/id_rsa.pub.
            The key fingerprint is:
            59:71:5d:4d:4c:6d:71:b1:ec:04:17:26:49:cb:27:a1 root@node1.luowei.com
            The key's randomart image is:
            +--[ RSA 2048]----+
            |          . o*.@@|
            |           oo.X B|
            |          .E + * |
            |         o    =  |
            |        S      . |
            |                 |
            |                 |
            |                 |
|                 |  //this randomart image is the key's visual "fingerprint"; older OpenSSH builds (e.g. on RHEL) don't print it
            +-----------------+
[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node2    //copy the public key to the other node
            The authenticity of host 'node2 (192.168.1.151)' can't be established.
            RSA key fingerprint is 77:b6:c6:09:51:f9:f4:70:c1:35:81:47:a5:19:f4:d2.
            Are you sure you want to continue connecting (yes/no)? yes
            Warning: Permanently added 'node2,192.168.1.151' (RSA) to the list of known
            hosts.
root@node2's password:     //enter the other node's password
            Now try logging into the machine, with "ssh 'root@node2'", and check in:
              ~/.ssh/authorized_keys
            to make sure we haven't added extra keys that you weren't expecting.
Do the same on HA2 (not demonstrated in detail here; a sketch follows).
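For completeness, the equivalent steps on HA2 look roughly like this (a minimal sketch mirroring the HA1 commands above):
[root@node2 ~]# ssh-keygen -t rsa                           //generate node2's key pair (accept the defaults)
[root@node2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node1   //copy node2's public key over to node1
[root@node2 ~]# ssh node1 'hostname'                        //should print node1.luowei.com with no password prompt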
             
III. Configure the yum repository. I use the mirror hosted by 163.com:
            http://mirrors.163.com/
The site explains how to configure the Fedora repos against this mirror, so I won't repeat the details here.
If you have no DNS to resolve the mirror hostnames, add them to /etc/hosts manually. Mine look like this:
            66.35.62.166   mirrors.fedoraproject.org
            213.129.242.84   mirrors.rpmfusion.org
            123.58.173.106  mirrors.163.com
The only requirement is that these hostnames resolve (a ping to each should succeed).
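Purely as an illustration (the file name and baseurl layout below are assumptions; follow the instructions on mirrors.163.com for the authoritative repo configuration), a repo file pointing at the 163 mirror might look like this:
# /etc/yum.repos.d/fedora-163.repo   -- hypothetical example
[fedora-163]
name=Fedora 15 - x86_64 - mirrors.163.com
baseurl=http://mirrors.163.com/fedora/releases/15/Everything/x86_64/os/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-x86_64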
             
IV. Install the cluster software
On both nodes:
#yum install corosync pacemaker -y //installing from a network mirror is slow; be patient
Once installation finishes, move on to configuration. Make sure the multicast port and address you choose do not conflict with any existing cluster; I use these simple settings:
            #export ais_port=4000
            #export ais_mcast=226.94.1.1
Next, configure corosync:
            #cd /etc/corosync/
            #cp corosync.conf.example corosync.conf
#vim !$   and change the configuration to the following:
            # Please read the corosync.conf.5 manual page
            compatibility: whitetank
            totem {
             version: 2
             secauth: on
             threads: 0
             interface {
              ringnumber: 0
  bindnetaddr: 192.168.1.0   //network address of the subnet the cluster lives on
  mcastaddr: 226.94.1.1  //multicast address
  mcastport: 4000  //multicast port
              ttl: 1
             }
            }
            logging {
             fileline: off
             to_stderr: no
             to_logfile: yes
             to_syslog: no
             logfile: /var/log/cluster/corosync.log
             debug: off
             timestamp: on
             logger_subsys {
              subsys: AMF
              debug: off
             }
            }
            amf {
             mode: disabled
            }
####the following service and aisexec sections are newly added
            service {
 ver: 1    //pacemaker plugin version: use 1 on Fedora; on RHEL, 0 also works
             name: pacemaker 
            }
            aisexec {
                    user:   root
                    group:  root
            }
The annotated lines above are the ones that were changed or added.
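If you would rather not edit the file by hand, the ais_port and ais_mcast values exported earlier can be substituted into the copied example file with sed (a sketch, assuming the stock example file still contains its default mcastaddr:/mcastport: lines):
# sed -i.bak "s/.*mcastaddr:.*/        mcastaddr: $ais_mcast/" /etc/corosync/corosync.conf
# sed -i "s/.*mcastport:.*/        mcastport: $ais_port/" /etc/corosync/corosync.conf
# grep -E 'mcastaddr|mcastport' /etc/corosync/corosync.conf   //verify the substitution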
            配置完成之后,拷貝一個到另一個節點上
            #scp -p  /etc/corosync/corosync.conf node2:/etc/corosync/
Because secauth is on, generate the authentication key before starting corosync (on a fresh machine this can take a while, since it needs to gather entropy; be patient):
#corosync-keygen
Copy the key and the configuration to the other node:
#scp -p authkeys corosync.conf node2:/etc/corosync/
With everything in place and no errors, start corosync on HA1; a series of checks follows.
#service corosync start
Starting corosync (via systemctl):                         [  OK  ]
OK, the corosync service started successfully.
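Optionally, you can also confirm the ring status directly with the corosync tools installed above (a quick check; the ring should report no faults):
# corosync-cfgtool -s     //each ring should show "ring 0 active with no faults"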
Next, verify that the cluster started correctly and is able to form a membership with the other node.
Check that the corosync engine started properly:
[root@node1 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/messages
            Sep 18 23:09:44 node1 smartd[786]: Opened configuration file /etc/smartd.conf
            Sep 19 13:41:03 node1 smartd[801]: Opened configuration file /etc/smartd.conf
            Sep 19 20:44:55 node1 smartd[680]: Opened configuration file /etc/smartd.conf
[root@node1 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
            Sep 18 17:12:06 corosync [MAIN  ] Corosync Cluster Engine ('1.4.1'): started and
            ready to provide service.
            Sep 18 17:12:06 corosync [MAIN  ] Successfully read main configuration file
            '/etc/corosync/corosync.conf'.
            Sep 18 17:12:06 corosync [MAIN  ] Corosync Cluster Engine exiting with status 8
            at main.c:1702.
            Sep 18 17:16:11 corosync [MAIN  ] Corosync Cluster Engine ('1.4.1'): started and
            ready to provide service.
Check that the initial membership notifications were sent:
            [root@node1 ~]#  grep TOTEM /var/log/cluster/corosync.log
Check whether any errors occurred during startup:
[root@node2 ~]# grep ERROR: /var/log/cluster/corosync.log | grep -v unpack_resources
Check that pacemaker started properly:
            [root@node1 ~]# grep pcmk_startup /var/log/cluster/corosync.log 
            Sep 19 13:48:48 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized
            Sep 19 13:48:48 corosync [pcmk  ] Logging: Initialized pcmk_startup
            Sep 19 13:48:48 corosync [pcmk  ] info: pcmk_startup: Maximum core file size is:
            4294967295
            Sep 19 13:48:48 corosync [pcmk  ] info: pcmk_startup: Service: 9
            Sep 19 13:48:48 corosync [pcmk  ] info: pcmk_startup: Local hostname: node1.luo
With the checks complete, you can start the other node. It is best to start all remaining cluster nodes from the same machine:
            [root@node1 ~]# ssh node2 -- '/etc/init.d/corosync start'
            Starting corosync (via systemctl):  [  OK  ]
Started successfully!
Next, start pacemaker:
            [root@node1 corosync]# /etc/init.d/pacemaker  start
            Starting pacemaker (via systemctl):                        [  OK  ]
OK, it started successfully as well.
# ps axf //check the processes
             1724 ?        R      5:59 /usr/lib/heartbeat/stonithd
             1725 ?        R      5:59 /usr/lib/heartbeat/cib
             1726 ?        S      0:00 /usr/lib/heartbeat/lrmd
             1727 ?        R      5:59 /usr/lib/heartbeat/attrd
             1728 ?        S      0:00 /usr/lib/heartbeat/pengine
             1729 ?        R      5:59 /usr/lib/heartbeat/crmd
The pacemaker daemons are now running.
One critical step at this point is the firewall. If you leave it enabled it will cause a lot of trouble later; I did not disable it at first and only found the problem by reading the logs. For this exercise, turn the firewall off; in a real deployment you should keep it enabled and open only the required ports.
#setup    then choose "Firewall configuration" and set it to Disabled.
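If you prefer the command line, the equivalent on Fedora 15 is roughly the following (a sketch, assuming the default iptables service is what provides the firewall):
# systemctl stop iptables.service       //stop the firewall now
# systemctl disable iptables.service    //keep it from starting at boot
# iptables -L -n                        //confirm no blocking rules remain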
Now inspect the cluster with crm's built-in commands:
#crm_mon    (or: crm status)
Online: [ node2.luowei.com node1.luowei.com ]
Both cluster nodes are online.

With everything in place, we can move on to configuring the dual-master cluster.
V. Install the Apache service and the cluster filesystem (GFS2)
To make verification easy, install an Apache service for testing:
            #yum install httpd -y
Add a test page on HA1:
#echo "<h1>node1.luowei.com</h1>" >/var/www/html/index.html
Add a test page on HA2:
#echo "<h1>node2.luowei.com</h1>" >/var/www/html/index.html
Then, in /etc/httpd/conf/httpd.conf on both nodes, make sure the following block is enabled (remove the comment markers if it is commented out); the apache resource agent uses this status URL for monitoring:
            <Location /server-status>
                SetHandler server-status
                Order deny,allow
                Deny from all
                Allow from 127.0.0.1
            </Location>
Make sure the httpd service does not start automatically at boot:
            #chkconfig httpd off
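As an optional sanity check (assuming curl is installed), you can start httpd by hand once, confirm the status URL answers locally, and stop it again before handing control to the cluster:
# service httpd start
# curl -s http://127.0.0.1/server-status | head -n 5   //should return the status page, not a 403
# service httpd stop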
#crm configure property stonith-enabled=false  //disable STONITH (no fencing devices in this setup)
#crm configure property no-quorum-policy=ignore //ignore loss of quorum (needed in a two-node cluster)
Add the resources for httpd:
# crm configure primitive WebSite ocf:heartbeat:apache params configfile=/etc/httpd/conf/httpd.conf op monitor interval=1min
# crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 params ip=192.168.1.110 cidr_netmask=32 op monitor interval=30s  //add a virtual IP
            [root@node1 ~]# crm status
            ============
            Last updated: Mon Sep 19 23:44:05 2011
            Stack: openais
            Current DC: node2.luowei.com - partition with quorum
            Version: 1.1.5-1.fc15-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
            2 Nodes configured, 2 expected votes
            2 Resources configured.
            ============
            Online: [ node2.luowei.com node1.luowei.com ]
             ClusterIP (ocf::heartbeat:IPaddr2): Started node1.luowei.com
             WebSite (ocf::heartbeat:apache): Started node2.luowei.com
The two resources are running on different nodes, so add a colocation constraint:
#crm configure colocation website-with-ip INFINITY: WebSite ClusterIP  //keep WebSite and ClusterIP together
Run crm status again and the resources have moved to the same node, as shown below:
            Online: [ node2.luowei.com node1.luowei.com ]
             ClusterIP (ocf::heartbeat:IPaddr2): Started node1.luowei.com
             WebSite (ocf::heartbeat:apache): Started node1.luowei.com
We also need to control the order in which the resources start and stop:
#crm configure order apache-after-ip mandatory: ClusterIP WebSite  //the IP resource must start before the Apache service
Specify a preferred location:
#crm configure location prefer-pcmk-l WebSite 50: node1.luowei.com
Review the resulting configuration:
            [root@node1 ~]# crm configure show
            node node1.luowei.com
            node node2.luowei.com
            primitive ClusterIP ocf:heartbeat:IPaddr2 \
             params ip="192.168.1.110" cidr_netmask="32" \
             op monitor interval="30s"
            primitive WebSite ocf:heartbeat:apache \
             params configfile="/etc/httpd/conf/httpd.conf" \
             op monitor interval="1min"
            location prefer-pcmk-l WebSite 50: node1.luowei.com
            colocation website-with-ip inf: WebSite ClusterIP
            order apache-after-ip inf: ClusterIP WebSite
            property $id="cib-bootstrap-options" \
             dc-version="1.1.5-1.fc15-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
             cluster-infrastructure="openais" \
             expected-quorum-votes="2" \
             stonith-enabled="false" \
             no-quorum-policy="ignore"
            rsc_defaults $id="rsc-options" \
             resource-stickiness="100"
As the output above shows, the resources are running, so we can continue.
You can now browse to http://192.168.1.110 and reach the web service.
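At this point a quick failover test is also possible (a sketch using crm's node commands): put the active node into standby, watch the resources move, then bring it back online.
# crm node standby node1.luowei.com   //resources should fail over to node2
# crm status                          //ClusterIP and WebSite now run on node2.luowei.com
# crm node online node1.luowei.com    //bring node1 back; resource-stickiness=100 keeps the resources on node2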
             
VI. Install the DRBD packages
DRBD synchronizes data between the nodes, providing replication.
1. # yum install drbd-pacemaker drbd-udev -y
2. After installing DRBD, first set aside a dedicated disk partition on each node to hold the data.
Here I use a new disk (/dev/sdb) for the experiment and partition it as follows:
            #fdisk /dev/sdb
            [root@node1 ~]# fdisk /dev/sda1
            Device contains neither a valid DOS partition table, nor Sun, SGI or OSF
            disklabel
            Building a new DOS disklabel with disk identifier 0xcaf34d49.
            Changes will remain in memory only, until you decide to write them.
            After that, of course, the previous content won't be recoverable.
            Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
            Command (m for help): p
            Disk /dev/sda1: 524 MB, 524288000 bytes
            255 heads, 63 sectors/track, 63 cylinders, total 1024000 sectors
            Units = sectors of 1 * 512 = 512 bytes
            Sector size (logical/physical): 512 bytes / 512 bytes
            I/O size (minimum/optimal): 512 bytes / 512 bytes
            Disk identifier: 0xcaf34d49
                 Device Boot      Start         End      Blocks   Id  System
Command (m for help): q
(The transcript above was captured against a different device; on /dev/sdb you would actually create a new primary partition with n and write it with w, so that /dev/sdb1 exists for the steps below.)
            # partprobe /dev/sdb
            # pvcreate /dev/sdb1
            # vgcreate VolGroupb /dev/sdb1
            # lvcreate -n drbd-demo -L 1G VolGroupb
            [root@node1 ~]# lvs
              LV        VG        Attr   LSize  Origin Snap%  Move Log Copy%  Convert
              lv_root   VolGroup  -wi-ao 17.56g                                      
              lv_swap   VolGroup  -wi-ao  1.94g                                      
              drbd-demo VolGroupb -wi-a-  1.00g  
OK, the logical volume on HA1 is ready. Repeat the same procedure on HA2 (sketched below rather than shown in full).
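The corresponding steps on HA2 (assuming node2 also has an empty /dev/sdb with a /dev/sdb1 partition) are simply:
[root@node2 ~]# pvcreate /dev/sdb1
[root@node2 ~]# vgcreate VolGroupb /dev/sdb1
[root@node2 ~]# lvcreate -n drbd-demo -L 1G VolGroupb
[root@node2 ~]# lvs     //should now list drbd-demo in VolGroupb at 1.00g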
3. With the preparation done, configure DRBD:
            #vim /etc/drbd.conf
            include "drbd.d/global_common.conf";
            include "drbd.d/*.res";
            global {
                    usage-count yes;
            }
            common {
                    protocol C;
            }
            resource wwwdata {
                    meta-disk internal;
                    device  /dev/drbd1;
                    syncer {
                            verify-alg sha1;
                    }
                    net {
                    allow-two-primaries;
                    }
                    on node1.luowei.com {
                            disk    /dev/mapper/VolGroupb-drbd--demo;
                address 192.168.1.78:7789;    //the HA1 node
                    }
                    on node2.luowei.com {
                            disk    /dev/mapper/VolGroupb-drbd--demo;
                address 192.168.1.151:7789;   //the HA2 node
                    }
            }
4. Next, initialize the metadata and bring DRBD up:
            # drbdadm create-md wwwdata
            New drbd meta data block successfully created.
Initialization succeeded!
5. Load the DRBD module into the kernel, bring the resource up, and check that everything is normal:
            [root@node1 ~]# modprobe drbd
            [root@node1 ~]# drbdadm up wwwdata
            [root@node1 ~]# cat /proc/drbd 
            version: 8.3.9 (api:88/proto:86-95)
            srcversion: CF228D42875CF3A43F2945A 
             1: cs:WFConnection ro:Secondary/Unknown ds:Inconsistent/DUnknown C r----s
                ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1048508
The node now shows up as Secondary. Load the module and bring the resource up on the second node in the same way (sketched below).
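For reference, the omitted steps on node2 mirror what was just done on node1:
[root@node2 ~]# drbdadm create-md wwwdata   //initialize the DRBD metadata on node2 as well
[root@node2 ~]# modprobe drbd
[root@node2 ~]# drbdadm up wwwdata
[root@node2 ~]# cat /proc/drbd              //both sides should now show cs:Connected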
            6.然后在任意一個節點上查看,現在兩個都已經是Secondary了,所以一切正常
            [root@node1 ~]# drbd-overview 
              1:wwwdata  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
7. Now promote HA1 to primary:
            [root@node1 ~]# drbdadm -- --overwrite-data-of-peer primary wwwdata
            然后使用如下命令可以實時監視這整個數據從主節點想備用節點上拷貝數據的過程:
            [root@node1 ~]# watch -n 1 'drbd-overview' 
              1:wwwdata  SyncSource Primary/Secondary UpToDate/Inconsistent C r----- 
             [==>.................] sync'ed:  0.8% (1042492/1048508)K
  1:wwwdata  Connected Primary/Secondary UpToDate/UpToDate C r-----
The synchronization is complete. HA1 is now Primary and writable, so you can create a filesystem on it and put some data in.
8. Add data to the DRBD device:
[root@node1 ~]# mkfs.ext4 /dev/drbd1  //create the filesystem
[root@node1 ~]# mount /dev/drbd1 /mnt/   //mount it
            [root@node1 ~]# echo "<h2>drbd test page</h2>" >/mnt/index.html
[root@node1 ~]# umount /mnt/   //unmount it
9. Configure DRBD in the cluster:
            [root@node1 ~]# crm 
            crm(live)# cib new drbd
            crm(drbd)# configure
crm(drbd)configure# primitive WebData ocf:linbit:drbd params drbd_resource=wwwdata op monitor interval=60s
crm(drbd)configure# ms WebDataClone WebData meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
crm(drbd)configure# commit
            [root@node1 ~]# crm status
            ============
            Last updated: Tue Sep 20 22:08:10 2011
            Stack: openais
            Current DC: node1.luowei.com - partition with quorum
            Version: 1.1.5-1.fc15-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
            2 Nodes configured, 2 expected votes
            3 Resources configured.
            ============
            Online: [ node2.luowei.com node1.luowei.com ]
             ClusterIP (ocf::heartbeat:IPaddr2): Started node1.luowei.com
             WebSite (ocf::heartbeat:apache): Started node1.luowei.com
             Master/Slave Set: WebDataClone [WebData]
                 Masters: [ node2.luowei.com ]
                 Slaves: [ node1.luowei.com ]
The output above shows the resources started normally, but note that the DRBD master is on HA2. To bring everything onto the same node, we need additional constraints:
            [root@node1 ~]# crm 
            crm(live)# configure 
crm(live)configure# primitive WebFS ocf:heartbeat:Filesystem params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype="ext4"
crm(live)configure# colocation fs_ondrbd inf: WebFS WebDataClone:Master
crm(live)configure# order WebFS-after-WebData inf: WebDataClone:promote WebFS:start
crm(live)configure# colocation WebSite-with-WebFS inf: WebSite WebFS
crm(live)configure# order WebSite-after-WebFS inf: WebFS WebSite
            crm(live)configure# commit
Check again; the output is now:
            [root@node1 ~]# crm status
            ============
            Last updated: Tue Sep 20 22:38:16 2011
            Stack: openais
            Current DC: node1.luowei.com - partition with quorum
            Version: 1.1.5-1.fc15-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
            2 Nodes configured, 2 expected votes
            4 Resources configured.
            ============
            Online: [ node2.luowei.com node1.luowei.com ]
             ClusterIP (ocf::heartbeat:IPaddr2): Started node2.luowei.com
             Master/Slave Set: WebDataClone [WebData]
                 Masters: [ node2.luowei.com ]
                 Slaves: [ node1.luowei.com ]
             WebFS (ocf::heartbeat:Filesystem): Started node2.luowei.com
The resources are now all on the same node.
             
VII. Building on the above, convert the cluster to dual-master (Active/Active) mode
1. Install the cluster filesystem:
#yum install gfs2-utils gfs2-cluster gfs-pcmk  //install on both nodes
2. Add the DLM service:
            [root@node1 ~]# crm 
            crm(live)# configure 
crm(live)configure# primitive dlm ocf:pacemaker:controld op monitor interval=120s
            crm(live)configure# clone dlm-clone dlm meta interleave=true
            crm(live)configure# commit
3. Create the gfs-control cluster resource:
            [root@node1 ~]# crm 
            crm(live)# configure 
crm(live)configure# primitive gfs-control ocf:pacemaker:controld params daemon=gfs_controld.pcmk args="-g 0" op monitor interval=120s
            crm(live)configure# clone gfs-clone gfs-control meta interleave=true
            crm(live)configure# colocation gfs-with-dlm INFINITY: gfs-clone dlm-clone 
            crm(live)configure# order start-gfs-after-dlm mandatory: dlm-clone gfs-clone 
            crm(live)configure# commit
            然后查看一下我們的配置如下所示:
            #crm configure show
            node node1.luowei.com
            node node2.luowei.com
            primitive ClusterIP ocf:heartbeat:IPaddr2 \
             params ip="192.168.1.110" cidr_netmask="32" \
             op monitor interval="30s"
            primitive WebData ocf:linbit:drbd \
             params drbd_resource="wwwdata" \
             op monitor interval="60s"
            primitive WebFS ocf:heartbeat:Filesystem \
             params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html"
            fstype="ext4"
            primitive WebSite ocf:heartbeat:apache \
             params configfile="/etc/httpd/conf/httpd.conf" \
             op monitor interval="1min"
            primitive dlm ocf:pacemaker:controld \
             op monitor interval="120s"
            primitive gfs-control ocf:pacemaker:controld \
             params daemon="gfs_controld.pcmk" args="-g 0" \
             op monitor interval="120s"
            ms WebDataClone WebData \
             meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1"
            notify="true"
            clone dlm-clone dlm \
             meta interleave="true"
            clone gfs-clone gfs-control \
             meta interleave="true"
            location prefer-pcmk-l WebSite 50: node1.luowei.com
            colocation WebSite-with-WebFS inf: WebSite WebFS
            colocation fs_ondrbd inf: WebFS WebDataClone:Master
            colocation gfs-with-dlm inf: gfs-clone dlm-clone
            colocation website-with-ip inf: WebSite ClusterIP
            order WebFS-after-WebData inf: WebDataClone:promote WebFS:start
            order WebSite-after-WebFS inf: WebFS WebSite
            order apache-after-ip inf: ClusterIP WebSite
            order start-gfs-after-dlm inf: dlm-clone gfs-clone
            property $id="cib-bootstrap-options" \
             dc-version="1.1.5-1.fc15-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
             cluster-infrastructure="openais" \
             expected-quorum-votes="2" \
             stonith-enabled="false" \
             no-quorum-policy="ignore"
            rsc_defaults $id="rsc-options" \
             resource-stickiness="100"
Check the cluster status output:
            [root@node1 ~]# crm_mon
            ============
            Last updated: Tue Sep 20 23:18:22 2011
            Stack: openais
            Current DC: node1.luowei.com - partition with quorum
            Version: 1.1.5-1.fc15-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
            2 Nodes configured, 2 expected votes
            6 Resources configured.
            ============
            Online: [ node2.luowei.com node1.luowei.com ]
            ClusterIP (ocf::heartbeat:IPaddr2): Started node1.luowei.com
             Master/Slave Set: WebDataClone [WebData]
                 Masters: [ node2.luowei.com ]
                 Slaves: [ node1.luowei.com ]
            WebSite  (ocf::heartbeat:apache): Started node1.luowei.com
Clone Set: dlm-clone
     Started: [ node2.luowei.com node1.luowei.com ]
Clone Set: gfs-clone
     Started: [ node2.luowei.com node1.luowei.com ]
WebFS   (ocf::heartbeat:Filesystem):    Started node1.luowei.com
4. Create the GFS2 filesystem
First stop the WebFS resource (and with it the Apache resource that depends on it):
[root@node1 ~]# crm_resource --resource WebFS --set-parameter target-role --meta --parameter-value Stopped
At this point crm status shows that both the Apache and WebFS resources are stopped.
5. Create the GFS2 filesystem and migrate the data to it
Run the following on the node where the DRBD device is currently Primary (it only needs to be run once):
            [root@node2 ~]# mkfs.gfs2 -p lock_dlm -j 2 -t pcmk:web /dev/drbd1
            This will destroy any data on /dev/drbd1.
            It appears to contain: Linux rev 1.0 ext4 filesystem data, UUID=19976683-c802-
            479c-854d-e786617be523 (extents) (large files) (huge files)
            Are you sure you want to proceed? [y/n] y
            6、然后遷移數據到這個新的文件系統并且為集群重新配置GFS2
            [root@node1 ~]# crm 
            crm(live)# configure 
crm(live)configure# primitive WebFS ocf:heartbeat:Filesystem params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype="gfs2"
crm(live)configure# colocation WebSite-with-WebFS inf: WebSite WebFS
crm(live)configure# colocation fs_on_drbd inf: WebFS WebDataClone:Master
crm(live)configure# order WebFS-after-WebData inf: WebDataClone:promote WebFS:start
crm(live)configure# order WebSite-after-WebFS inf: WebFS WebSite
crm(live)configure# colocation WebFS-with-gfs-control INFINITY: WebFS gfs-clone
crm(live)configure# order start-WebFS-after-gfs-control mandatory: gfs-clone WebFS
            crm(live)configure# commit
7. Reconfigure pacemaker for Active/Active
            [root@node1 ~]# crm 
crm(live)# configure clone WebIP ClusterIP meta globally-unique="true" clone-max="2" clone-node-max="2"
crm(live)# configure primitive ClusterIP ocf:heartbeat:IPaddr2 params ip="192.168.1.110" cidr_netmask="32" clusterip_hash="sourceip" op monitor interval="30s" //set the ClusterIP parameters for cloning
crm(live)# configure clone WebFSClone WebFS
crm(live)# configure clone WebSiteClone WebSite
Also change master-max to 2 in the CIB, so that both nodes can hold the DRBD master role.
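One way to make that change (a sketch, reusing the same crm_resource pattern as the target-role change earlier) is:
# crm_resource --resource WebDataClone --set-parameter master-max --meta --parameter-value 2
# crm configure show WebDataClone    //verify that master-max="2" now appears in the ms resource's meta attributes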
That completes the resource configuration: the cluster has gone from a master/slave layout to a dual-master (Active/Active) architecture.
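To confirm the Active/Active setup is behaving, a quick check (sketch) is:
# crm_mon -1                  //both nodes should appear under Masters for WebDataClone, and the WebIP, WebFSClone and WebSiteClone clones should run on both nodes
# drbd-overview               //the DRBD resource should report Primary/Primary UpToDate/UpToDate
# curl http://192.168.1.110   //the test page should be served whichever node answers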

Author: Gezidan
Source: http://www.rixu.net
Reposted from http://roqi410.blog.51cto.com/2186161/669877
