yum install ricci system-config-cluster
chkconfig ricci on; service ricci start
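To confirm ricci started (a quick check using the standard init script; not part of the original steps):
service ricci status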
To test the start order of a service: | rg_test noop /etc/cluster/cluster.conf start service {service_name} |
To test if a service will actually start: | rg_test test /etc/cluster/cluster.conf start service {service_name} |
To test the stop order of a service: | rg_test noop /etc/cluster/cluster.conf stop service {service_name} |
To test if a service will actually stop: | rg_test test /etc/cluster/cluster.conf stop service {service_name} |
Test configuration for errors: | rg_test test /etc/cluster/cluster.conf |
Display delta between two configs: | rg_test delta /etc/cluster/cluster.conf.bak /etc/cluster/cluster.conf |
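Example (dry-run the start order of a hypothetical service named webfarm, without touching any resources): | rg_test noop /etc/cluster/cluster.conf start service webfarm |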
Cluster Status: (run on any node) | clustat |
Member Status: (run on each node) | cman_tool status |
Start a service: (default member) | clusvcadm -e {service_name} |
Start a service: (specific member) | clusvcadm -e {service_name} -m {member_name} |
Stop a service: | clusvcadm -s {service_name} |
Move/relocate a service: (two-node cluster) | clusvcadm -r {service_name} |
Move/relocate a service: (multi-node cluster) | clusvcadm -r {service_name} -m {member_name} |
Start a failed service: (disable, then re-enable) | clusvcadm -d {service_name}; clusvcadm -e {service_name} -m {member_name} |
Leave a cluster: (forcefully) | cman_tool leave force; /etc/init.d/cman stop |
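Example (relocate a hypothetical service webfarm to a hypothetical member node2.example.com): | clusvcadm -r webfarm -m node2.example.com |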
# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4096 count=1
# chown root:root /etc/cluster/fence_xvm.key; chmod 600 /etc/cluster/fence_xvm.key
This step does not need to be done for existing Xen/KVM hosts. The file is already located in /etc/cluster.
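fence_xvm (on the guests) and fence_xvmd (on the host) must share this key, so it has to be present in /etc/cluster on every cluster node. A minimal sketch, assuming hypothetical guest nodes named node1 and node2:
# scp /etc/cluster/fence_xvm.key node1:/etc/cluster/
# scp /etc/cluster/fence_xvm.key node2:/etc/cluster/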
To list the virtual machine domains running on the host:
xm list
or virsh list
If you have a host that you definitely know is down (or that you do not want fenced) and fenced is throwing errors in syslog like:
fenced{some_pid}: fencing node "some_node.domain.com"
fenced{some_pid}: fence "some_node.domain.com" failed
do this, where some_node matches the node name in /etc/cluster/cluster.conf:
# fence_ack_manual -n some_node -e
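Example, where the node name in cluster.conf is a hypothetical node2.example.com:
# fence_ack_manual -n node2.example.com -e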
In Red Hat Cluster Suite, there are two methods for configuring LVM on shared storage devices: CLVMD and HA_LVM.
clvmd
An extension to the lvm2 package called lvm2-cluster exists in the Cluster Storage channels for Red Hat Enterprise Linux 4 and 5 on Red Hat Network (RHN). This package provides the clvmd daemon which is responsible for managing clustered volume groups and communicating metadata changes for them back to other cluster members. With this option, the same volume group and logical volumes can be activated and mounted on multiple nodes as long as clvmd is running and the cluster is quorate. This option is better suited for clusters utilizing GFS or GFS2 filesystems, since they are often mounted on multiple nodes at the same time (and thus require that the logical volume be active on each).
NOTE: Although clvmd allows for the same volume group to be activated on multiple systems concurrently, this does not make it possible to mount a non-clustered filesystem such as ext2 or ext3 on multiple nodes at the same time. Doing so will likely cause corruption of the filesystem and data. Proper precautions should be taken to ensure no two nodes mount the same non-clustered filesystem at once.
To configure clvmd, first ensure the lvm2-cluster package is installed. Next, the LVM configuration file /etc/lvm/lvm.conf must be modified to allow the use of clustered volume groups. Change the locking_type option to 3 as follows: locking_type = 3
Alternatively, you can enable clustered LVM by using the lvmconf command:
# lvmconf --enable-cluster
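Either way, a quick sanity check that the change took effect (grep also matches commented examples, so look for the uncommented line):
# grep locking_type /etc/lvm/lvm.conf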
Now the daemon can be configured to start at boot time:
# chkconfig clvmd on
And started (provided the cluster services are running and the cluster is quorate):
# service clvmd start
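To confirm the daemon is running (standard init script status check):
# service clvmd status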
Any volume groups with the clustered flag set will be activated when clvmd starts. To determine if a volume group has the clustered flag set, use vgdisplay:
# vgdisplay myVG | grep Clustered
  Clustered             yes
To set this flag while creating a volume group:
# vgcreate -cy <volume group name> <device path>
Or to add the clustered flag to an existing volume group:
# vgchange -cy <volume group name>
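As a worked example with a hypothetical volume group myVG on a hypothetical shared device /dev/sdb1:
# vgcreate -cy myVG /dev/sdb1
or, for an existing volume group:
# vgchange -cy myVG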
NOTE: Any volume group with the clustered flag set will not be available until the clvmd daemon is started and the cluster is quorate.
All nodes in the cluster must have the same view of all storage devices contained within clustered volume groups. For instance, if a new device is presented to one node and added to a volume group, errors may be thrown if that same device is not presented to the other nodes. Likewise, if a device is partitioned on one node and then added to a volume group, errors may be thrown if the other nodes have not re-read the partition table by running:
# partprobe
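For instance, after adding a hypothetical new device /dev/sdc to the hypothetical volume group myVG on one node:
# pvcreate /dev/sdc
# vgextend myVG /dev/sdc
the remaining nodes should re-read the device and partition tables before performing LVM operations:
# partprobe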