[RHEL8] Configure Stonith Fence Devices In PCS Pacemaker Cluster | RHEL8 | CentOS8

  • Published on 17 Dec 2024

Comments •

  • @张瑞军-s5j · 14 days ago

    What bothers me is that I can implement fencing following your video, but when I unplug the fiber cable of a node's multipath to test, that node does not restart. In this case, the node whose fiber cable was removed needs to be fenced (rebooted).
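
    One possible explanation, not confirmed in the video: with SBD_PACEMAKER=yes (the RHEL default), a node that loses the SBD disk but is still quorate and running Pacemaker cleanly will not self-fence; the watchdog reset only fires when the disk is lost and the node is also unhealthy. A quick sanity check on the affected node might look like this:

      # confirm a watchdog device is present and usable by sbd
      sbd query-watchdog
      lsmod | grep softdog
      # see whether Pacemaker integration is deliberately keeping the node alive
      grep -E 'SBD_PACEMAKER|SBD_WATCHDOG_DEV' /etc/sysconfig/sbd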

  • @suresh9250604856 · 2 years ago

    Great. It's working fine.

  • @jackykwan6735 · 1 year ago

    Thanks, this is a good video.

  • @Filmyjaduu · 2 years ago +1

    Great video, friend... it helps me a lot. Please help with configuring an Oracle DB on a Pacemaker cluster with failover and LVM.

  • @minhduc6330 · 1 year ago

    Hello sir, do we need to create the physical volume on both nodes, or is one node enough?

  • @stanlo45 · 1 year ago +1

    I see you used /dev/sdb as your device instead of a UUID for the device. When I rebooted my nodes, the SCSI devices /dev/sda and /dev/sdb were swapped, and my cluster wouldn't start because fencing was broken.
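
    A common workaround, sketched here with a hypothetical by-id path (run ls -l /dev/disk/by-id/ to find the real one for your shared disk): point both the sbd commands and the stonith resource at a persistent name, so a reboot cannot swap the device underneath you.

      # hypothetical persistent path; substitute your shared disk's by-id name
      DISK=/dev/disk/by-id/scsi-36001405abcdef0123456789
      sbd -d "$DISK" dump
      # use the same path for SBD_DEVICE in /etc/sysconfig/sbd, and in the resource:
      pcs stonith update fence-sbd devices="$DISK"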

  • @Nales · 10 months ago

    Is the procedure the same with multipath block devices? /dev/mapper/mpatha...
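
    The same steps should apply to a multipath device; a sketch using the /dev/mapper/mpatha name from the question, with illustrative timeouts padded so msgwait comfortably exceeds multipath's path-failover time:

      sbd -d /dev/mapper/mpatha -4 60 -1 30 create
      sbd -d /dev/mapper/mpatha dump
      # /etc/sysconfig/sbd would then carry SBD_DEVICE="/dev/mapper/mpatha"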

  • @weilin4872 · 9 months ago

    Hi, how do I solve this error message: 'fence_sbd: power timeout needs to be greater then sbd message timeout'?
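
    That message indicates fence_sbd (at least in recent fence-agents releases) requires power_timeout to be strictly greater than the msgwait timeout stored in the sbd header (20s in this video's setup), so any larger value should clear it. A sketch, assuming the resource name used in the steps below:

      # msgwait is 20s per the sbd dump, so pick something above that
      pcs stonith update fence-sbd power_timeout=30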

  • @hemantsinghsolanki3047 · 3 years ago

    You didn't mention how you set up the shared storage.

  • @Ramkumar-pd5sv · 2 years ago

    What software are you using for the SSH connection to the server?

  • @davidchang5862 · 3 years ago

    Hi. I don't quite understand. Where is the shared storage in this context? And why do you need to create the stonith fencing twice when it should be once?

    • @tunetolinux4173 · 3 years ago

      /dev/sdb is the shared disk I am using for the fence device. I have created only one stonith agent, and that is enough.

  • @khznm2174 · 2 years ago

    Why is the SBD device on the same cluster nodes?

  • @luisgarciaaguilar7546 · 2 years ago

    Hello,
    After restarting the cluster, the sbd -d list command only shows the local server.
    I am sorry, but I can't paste the output.
    Can you figure out what's happening?
    Thanks

    • @tunetolinux4173 · 2 years ago

      Please watch the video again and follow the exact steps I did. I will not be able to comment without seeing the error.

    • @luisgarciaaguilar7546 · 2 years ago

      @tunetolinux4173 Thanks. I think the error is that /dev/sdb is a different disk in each VM. Could that be it?
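
      That diagnosis is easy to confirm: the sbd header carries a UUID, so dumping it on both nodes should produce identical output if /dev/sdb really is the same shared disk. A sketch:

        # run on each node; the UUID lines must match
        sbd -d /dev/sdb dump
        # a persistent by-id name avoids the ambiguity entirely
        ls -l /dev/disk/by-id/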

  • @vedagiribalaji3303 · 4 years ago +1

    Please paste all the commands from your notepad in the description; it would be helpful.

    • @tunetolinux4173 · 4 years ago +2

      1. Install the sbd and fence-agents-sbd packages on both nodes.
      yum -y install sbd fence-agents-sbd
      2. On node01, initialize the disk and create the sbd device.
      sbd -d /dev/sdb -4 20 -1 10 create
      3. Check status of sbd device.
      sbd -d /dev/sdb dump
      ==Dumping header on disk /dev/sdb
      Header version : 2.1
      UUID : 3a77e11e-f9ee-4f6c-bbd8-d8570cc1f694
      Number of slots : 255
      Sector size : 512
      Timeout (watchdog) : 10
      Timeout (allocate) : 2
      Timeout (loop) : 1
      Timeout (msgwait) : 20
      ==Header on disk /dev/sdb is dumped
      4. Enable software watchdog
      modprobe softdog
      echo softdog > /etc/modules-load.d/softdog.conf
      5. Edit the sbd configuration file and add the two lines below on both nodes.
      # edit sbd config /etc/sysconfig/sbd
      SBD_DEVICE="/dev/sdb"
      SBD_OPTS="-W"
      6. Enable the sbd service and restart the cluster.
      systemctl enable sbd
      pcs cluster stop --all
      pcs cluster start --all
      7. List sbd devices.
      sbd -d /dev/sdb list
      0 clnode01 clear
      1 clnode02 clear
      # get sbd status using fence_sbd, and then reboot node02 from node01
      fence_sbd --devices=/dev/sdb -n clnode02 -o status
      fence_sbd --devices=/dev/sdb -n clnode02 -o reboot
      stonith_admin --reboot clnode02
      8. Set up fencing in pcs.
      pcs stonith create fence-sbd fence_sbd devices="/dev/sdb" power_timeout=20
      9. Test fencing a node with the fence agent just created.
      pcs stonith fence clnode02
      10. Check sbd device information.
      pcs stonith sbd status
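
      A short verification pass one might run after step 10, assuming the node names from the listing above:

      # both slots should read 'clear' again once the fenced node rejoins
      sbd -d /dev/sdb list
      # confirm the stonith resource is started and sbd is active on both nodes
      pcs stonith status
      systemctl status sbd
      sbd query-watchdog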