Thanks, this is a good video.
Great! It's working fine.
Thank you.
Great video, friend... it helps me a lot. Please help to configure an Oracle DB on a Pacemaker cluster with failover using LVM.
I see you used /dev/sdb as your device instead of a UUID. When I rebooted my nodes, the SCSI devices /dev/sda and /dev/sdb were swapped, and my cluster wouldn't start because fencing was broken.
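One way to avoid that is to reference the disk by a persistent name instead of /dev/sdb. A sketch, assuming a by-id symlink exists for the shared disk (the exact ID below is illustrative; take the real one from the ls output):

ls -l /dev/disk/by-id/ | grep sdb
sbd -d /dev/disk/by-id/scsi-36001405abcdef0000000000000000000 dump

The same by-id path can then be used in /etc/sysconfig/sbd and in the fence_sbd device list.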
Hello sir, do we need to create the physical volume on both nodes, or is one node enough?
Is the procedure the same with multipath block devices? /dev/mapper/mpatha...
You didn't mention how you set up the shared storage.
Hi, how do I solve this error message: 'fence_sbd: power timeout needs to be greater then sbd message timeout'?
Hi. I don't quite understand. Where is the shared storage in this context? And why do you need to create the fencing STONITH device twice when it should be once?
/dev/sdb is the shared disk I am using for the fence device. I have created only one STONITH agent, and that is enough.
What software are you using for the SSH connection to the server?
MobaXterm
Why is the SBD device on the same cluster nodes?
Hello
After restarting the cluster, the sbd -d command only shows the local server.
I am sorry, but I can't paste the output.
Can you figure out what's happening?
Thanks
Please watch the video again and follow the exact steps I did. I won't be able to comment without seeing the error.
@tunetolinux4173 Thanks. I think the problem is that /dev/sdb is a different disk in each VM. Could that be it?
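A quick way to check that (using the dump output from the steps below): run this on both nodes and compare the values; if the UUIDs differ, the two VMs are not seeing the same shared disk.

sbd -d /dev/sdb dump | grep UUID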
Please paste all the commands from the notepad under the description; it would be helpful.
1. Install fence-agents-sbd package on both nodes.
yum -y install sbd fence-agents-sbd
2. On node01, initialize the disk and create the sbd device.
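# -1 sets the watchdog timeout (10s) and -4 the msgwait timeout (20s); msgwait is conventionally at least twice the watchdog timeout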
sbd -d /dev/sdb -4 20 -1 10 create
3. Check status of sbd device.
sbd -d /dev/sdb dump
==Dumping header on disk /dev/sdb
Header version : 2.1
UUID : 3a77e11e-f9ee-4f6c-bbd8-d8570cc1f694
Number of slots : 255
Sector size : 512
Timeout (watchdog) : 10
Timeout (allocate) : 2
Timeout (loop) : 1
Timeout (msgwait) : 20
==Header on disk /dev/sdb is dumped
4. Enable software watchdog
modprobe softdog
echo softdog > /etc/modules-load.d/softdog.conf
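Verify that the module is loaded and the watchdog device exists:

lsmod | grep softdog
ls -l /dev/watchdog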
5. Edit the sbd configuration file and set the two values below on both nodes.
# edit sbd config /etc/sysconfig/sbd
SBD_DEVICE="/dev/sdb"
SBD_OPTS="-W"
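Optionally, confirm that sbd can see the watchdog device (the query-watchdog subcommand is available in recent sbd versions):

sbd query-watchdog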
6. Enable the sbd service on both nodes and restart the cluster.
systemctl enable sbd
pcs cluster stop --all
pcs cluster start --all
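sbd is normally started together with the cluster stack rather than on its own; after the restart, it should be active on both nodes:

systemctl status sbd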
7. List sbd devices.
sbd -d /dev/sdb list
0 clnode01 clear
1 clnode02 clear
# get sbd status using fence_sbd, and then reboot node02 from node01
fence_sbd --devices=/dev/sdb -n clnode02 -o status
fence_sbd --devices=/dev/sdb -n clnode02 -o reboot
stonith_admin --reboot clnode02
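After clnode02 comes back up, it should rejoin the cluster and its slot should read clear again; both can be checked from node01:

pcs status
sbd -d /dev/sdb list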
8. Set up fencing in pcs.
pcs stonith create fence-sbd fence_sbd devices="/dev/sdb" power_timeout=30
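A note on power_timeout (based on the 'power timeout needs to be greater then sbd message timeout' error quoted in the comments above): fence_sbd requires it to be larger than the msgwait timeout, which was set to 20 seconds in step 2, hence 30 here. If fencing is disabled in your cluster, enable it as well:

pcs property set stonith-enabled=true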
9. Test fencing a node with the fence agent just created.
pcs stonith fence clnode02
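The fenced node normally clears its own slot when sbd starts again at boot; if the slot stays marked after the node is back, it can be cleared manually (standard sbd message syntax):

sbd -d /dev/sdb message clnode02 clear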
10. Check sbd device information.
pcs stonith sbd status