High Availability Cluster Configuration in Linux | Configure Cluster Using Pacemaker in CentOS 8

  • Published on Dec 17, 2024

Comments • 84

  • @NehraClasses
    @NehraClasses  4 years ago +4

    HA Cluster Configuration in RHEL 8 (CentOS 8):
    ==============================================
    A High Availability cluster, also known as a failover cluster or active-passive cluster, is one of the most widely used cluster types in production environments; it keeps services continuously available even if one of the cluster nodes fails.
    Technically, if the server running the application fails for some reason (e.g., a hardware failure), the cluster software (Pacemaker) restarts the application on a working node.
    Failover is not just restarting an application; it is a series of associated operations, such as mounting filesystems, configuring networks, and starting dependent applications.
    Environment:
    Here, we will configure a failover cluster with Pacemaker to make the Apache web server a highly available application.
    We will configure the Apache web server, a filesystem, and the network (virtual IP) as resources for our cluster.
    For the filesystem resource, we will use shared storage served over iSCSI.
    CentOS 8 High Availability Cluster Infrastructure

    Host Name IP Address OS Purpose
    node1.nehraclasses.local 192.168.1.126 CentOS 8 Cluster Node 1
    node2.nehraclasses.local 192.168.1.119 CentOS 8 Cluster Node 2
    storage.nehraclasses.local 192.168.1.109 CentOS 8 iSCSI Shared Storage
    virtualhost.nehraclasses.local 192.168.1.112 CentOS 8 Virtual Cluster IP (Apache)
    Shared Storage
    Shared storage is one of the critical resources in a high availability cluster, as it stores the data of the running application, and all nodes in the cluster need access to it for the latest data. SAN storage is the most widely used shared storage in production environments; due to resource constraints, this demo uses iSCSI storage instead.
    Install the required packages on the iSCSI server.
    [root@storage ~]# dnf install -y targetcli iscsi-initiator-utils lvm2
    Let’s list the available disks in the iSCSI server using the below command.
    [root@storage ~]# fdisk -l | grep -i sd
    Here, we will create an LVM on the iSCSI server to use as shared storage for our cluster nodes.
    [root@storage ~]# pvcreate /dev/sdb
    [root@storage ~]# vgcreate vg_iscsi /dev/sdb
    [root@storage ~]# lvcreate -l 100%FREE -n lv_iscsi vg_iscsi
    On each cluster node, collect the iSCSI initiator name:
    cat /etc/iscsi/initiatorname.iscsi
    Node 1:
    InitiatorName=iqn.1994-05.com.redhat:121c93cbad3a
    Node 2:
    InitiatorName=iqn.1994-05.com.redhat:827e5e8fecb

    Run the below command to open the targetcli interactive shell.
    [root@storage ~]# targetcli
    Output:
    Warning: Could not load preferences file /root/.targetcli/prefs.bin.
    targetcli shell version 2.1.fb49
    Copyright 2011-2013 by Datera, Inc and others.
    For help on commands, type 'help'.
    /> cd /backstores/block
    /backstores/block> create iscsi_shared_storage /dev/vg_iscsi/lv_iscsi
    Created block storage object iscsi_shared_storage using /dev/vg_iscsi/lv_iscsi.
    /backstores/block> cd /iscsi
    /iscsi> create
    Created target iqn.2003-01.org.linux-iscsi.storage.x8664:sn.eac9425e5e18.
    Created TPG 1.
    Global pref auto_add_default_portal=true
    Created default portal listening on all IPs (0.0.0.0), port 3260.
    /iscsi> cd iqn.2003-01.org.linux-iscsi.storage.x8664:sn.eac9425e5e18/tpg1/acls
    /iscsi/iqn.20...e18/tpg1/acls> create iqn.1994-05.com.redhat:121c93cbad3a
    /iscsi/iqn.20...e18/tpg1/acls> create iqn.1994-05.com.redhat:827e5e8fecb
    /iscsi/iqn.20...e18/tpg1/acls> cd /iscsi/iqn.2003-01.org.linux-iscsi.storage.x8664:sn.eac9425e5e18/tpg1/luns
    /iscsi/iqn.20...e18/tpg1/luns> create /backstores/block/iscsi_shared_storage
    Created LUN 0.
    Created LUN 0->0 mapping in node ACL iqn.1994-05.com.redhat:827e5e8fecb
    Created LUN 0->0 mapping in node ACL iqn.1994-05.com.redhat:121c93cbad3a
    /iscsi/iqn.20...e18/tpg1/luns> cd /
    /> ls
    o- / ......................................................................................................................... [...]
    o- backstores .............................................................................................................. [...]
    | o- block .................................................................................................. [Storage Objects: 1]
    | | o- iscsi_shared_storage .............................................. [/dev/vg_iscsi/lv_iscsi (10.0GiB) write-thru activated]
    | | o- alua ................................................................................................... [ALUA Groups: 1]
    | | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
    | o- fileio ................................................................................................. [Storage Objects: 0]
    | o- pscsi .................................................................................................. [Storage Objects: 0]
    | o- ramdisk ................................................................................................ [Storage Objects: 0]
    o- iscsi ............................................................................................................ [Targets: 1]
    | o- iqn.2003-01.org.linux-iscsi.storage.x8664:sn.eac9425e5e18 ......................................................... [TPGs: 1]
    | o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
    | o- acls .......................................................................................................... [ACLs: 2]
    | | o- iqn.1994-05.com.redhat:121c93cbad3a .................................................................. [Mapped LUNs: 1]
    | | | o- mapped_lun0 .................................................................. [lun0 block/iscsi_shared_storage (rw)]
    | | o- iqn.1994-05.com.redhat:827e5e8fecb ................................................................... [Mapped LUNs: 1]
    | | o- mapped_lun0 .................................................................. [lun0 block/iscsi_shared_storage (rw)]
    | o- luns .......................................................................................................... [LUNs: 1]
    | | o- lun0 ......................................... [block/iscsi_shared_storage (/dev/vg_iscsi/lv_iscsi) (default_tg_pt_gp)]
    | o- portals .................................................................................................... [Portals: 1]
    | o- 0.0.0.0:3260 ..................................................................................................... [OK]
    o- loopback ......................................................................................................... [Targets: 0]
    /> saveconfig
    Configuration saved to /etc/target/saveconfig.json
    /> exit
    Global pref auto_save_on_exit=true
    Last 10 configs saved in /etc/target/backup/.
    Configuration saved to /etc/target/saveconfig.json
    Enable and restart the Target service.
    [root@storage ~]# systemctl enable target
    [root@storage ~]# systemctl restart target
    Configure the firewall to allow iSCSI traffic.
    [root@storage ~]# firewall-cmd --permanent --add-port=3260/tcp
    [root@storage ~]# firewall-cmd --reload
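    Before moving to the nodes, it is worth sanity-checking the target from the storage server. A minimal hedged check using the standard ss, targetcli, and iscsiadm tools (the discovery command uses the storage IP from the table above):

    ```shell
    # Confirm the iSCSI portal is listening on TCP 3260.
    ss -tln | grep 3260

    # Summarize the configured target, TPG, ACLs, and LUNs.
    targetcli ls /iscsi

    # From either cluster node, discovery should print the target IQN.
    iscsiadm -m discovery -t st -p 192.168.1.109
    ```

    If discovery returns nothing, re-check the firewall rule and the ACL entries against each node's initiator name.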

    • @santoshbolar3485
      @santoshbolar3485 3 years ago +1

      Sir, your classes are superb; I have learned so many things in Linux. Please upload the pcs clustering document also.

    • @NehraClasses
      @NehraClasses  3 years ago

      Thank you, all the documents are available in our telegram channel.

    • @Nk-gaming106
      @Nk-gaming106 2 years ago

      @@NehraClasses hi sir, I am a working professional and need help setting up a Pacemaker cluster in our lab. I am open to paying a fee.

  • @udayarpandey3937
    @udayarpandey3937 4 years ago +6

    This is the lecture I was waiting for, bro! I hope you will make a series on this with proper explanation rather than just walking through the commands.
    Thank you for the video.

    • @NehraClasses
      @NehraClasses  4 years ago

      Thanks, will definitely make a series of videos. I uploaded this video on the demand of one of our subscribers; it is public for today only. Tomorrow it will be visible to members only. 🙏

  • @mushfigmustafazade8880
    @mushfigmustafazade8880 1 year ago +1

    This man's videos are precious. Thank you Nehra

  • @kalyanb6995
    @kalyanb6995 4 years ago +2

    Awesome session ji thanks for the video

  • @KMCreations24
    @KMCreations24 4 months ago +1

    I gained more knowledge on clusters; could you please provide the document on it 🙏

  • @rakeshverma1707
    @rakeshverma1707 3 years ago +1

    Great Bro Superb

  • @thescalpervijay
    @thescalpervijay 2 years ago +1

    Can you make a video on a MySQL DB cluster with Pacemaker?

  • @lokeshkumar1365
    @lokeshkumar1365 2 years ago +1

    This video is informative. Could you please create a detailed video on an HA NFS server at production level? Thanks

  • @vmalparikh
    @vmalparikh 3 years ago +2

    Hi, the physical and logical volume groups are not being displayed on the other node. I have performed all three commands: pvscan, vgscan, and lvscan. Please help.

    • @NehraClasses
      @NehraClasses  3 years ago

      Please join our channel's Platinum membership and our Telegram channel for support.

    • @vmalparikh
      @vmalparikh 3 years ago

      @@NehraClasses Sir, Could share the documentation? also can you show the fencing through sbd ?

    • @IlsaTaibani
      @IlsaTaibani 10 months ago

      Heyy i have the same issue can you please help me out ?

    • @IlsaTaibani
      @IlsaTaibani 10 months ago

      ​@@vmalparikh i have the same issue how did u solve it ?

  • @davidghitis2586
    @davidghitis2586 1 year ago +1

    Hello
    After setting up all the iSCSI and hosts files on nodes 1 and 2, you said we need to run yum config-manager --set-enabled HighAvailability, but I'm getting "No matching repo to modify: HighAvailability". Any idea?
    I'm using RHEL 8.7.
    Thanks!

    • @NehraClasses
      @NehraClasses  1 year ago

      The repository name may be different on RHEL 8; make sure you have an active Red Hat subscription for it. Kindly list the available repos first.

  • @ManikandanK-lp6yy
    @ManikandanK-lp6yy 2 years ago +1

    16:49 - didn't understand how it's executed

  • @smrutiranjandas4766
    @smrutiranjandas4766 4 years ago +1

    Super tutorial sir

  • @Myyutubee
    @Myyutubee 2 years ago +1

    Thank you so much sir. I have 2 questions. Why does LVM need to be created again on node1 or node2 when it had already been created on the iSCSI storage server? Why do we need another machine for the virtual IP?

    • @NehraClasses
      @NehraClasses  2 years ago

      The first LVM was created on the iSCSI server; however, you can use the physical disk directly instead of LVM if you want, and then share it with the clients. This shared storage is block storage and needs to be partitioned and formatted, so it is better to use LVM so that we can extend it later if required.
      A virtual IP doesn't require any physical machine unless you want to use NAT to hide the IP address of your actual machine.
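      The extendability point above can be made concrete with a hedged sketch (device names follow this tutorial's setup; the size is illustrative and free space in the backing volume group is assumed):

      ```shell
      # On the iSCSI (storage) server: grow the backing LV that feeds the LUN.
      lvextend -L +5G /dev/vg_iscsi/lv_iscsi

      # On the active cluster node: rescan the iSCSI session so the larger LUN
      # is visible, then grow the PV, the LV, and the filesystem.
      iscsiadm -m session --rescan
      pvresize /dev/sdb
      lvextend -l +100%FREE /dev/vg_apache/lv_apache
      resize2fs /dev/vg_apache/lv_apache   # ext4 can be grown while mounted
      ```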

    • @Myyutubee
      @Myyutubee 2 years ago

      @@NehraClasses Sir, thank you for your quick response. I agree with the advantage of LVM's extend capability, but why create LVM again on the nodes when we have already created it on the iSCSI server? Doesn't it get reflected on the nodes? Can't we directly format it on the iSCSI server?

  • @vipin_mishra19
    @vipin_mishra19 2 years ago +1

    16:37 -- Getting an error while authenticating the nodes...

    • @NehraClasses
      @NehraClasses  2 years ago

      Send us the error screenshot on Telegram.

  • @gam3955
    @gam3955 3 years ago +1

    Great, thanks a lot for sharing. Do you have a video or web page on the same with Oracle database servers and Pacemaker?

  • @fahadalhajri5144
    @fahadalhajri5144 2 years ago +1

    Hi Nehra, I followed the steps and everything ran successfully, but I don't see my new disk on the storage VM ("nvme0n2") in node1 and node2 when running lsblk. Here I stopped. Any advice?

    • @NehraClasses
      @NehraClasses  2 years ago

      Check your iSCSI target configuration.

  • @mohitbaluka8094
    @mohitbaluka8094 2 years ago +1

    Sir, please provide the documentation for pcs clustering as well.

    • @NehraClasses
      @NehraClasses  2 years ago

      Please join channel membership to access all documents.

  • @davidchang5862
    @davidchang5862 2 years ago +1

    Hi, can you also provide a video detailing how to set up fencing for RHEL?

  • @chillySauceMind
    @chillySauceMind 3 years ago +1

    Hello sir, I am using a RHEL EC2 instance and I am not able to install pacemaker, corosync, and pcs. Are there any other steps to install them on an EC2 instance?

    • @NehraClasses
      @NehraClasses  3 years ago

      Configure EPEL Repository

  • @venkateshvenki
    @venkateshvenki 2 months ago

    Can you make one on SAP HANA high availability using Pacemaker?

  • @sainiamit4911
    @sainiamit4911 3 years ago +2

    Hi sir, it seems like some steps are missing for the virtual IP server.

    • @NehraClasses
      @NehraClasses  3 years ago

      No dear, please check again.

  • @rajashekhar1963
    @rajashekhar1963 12 days ago

    There is a mistake in the video: you had already created an LVM on the iSCSI server and shared that same LUN, but then you created an LVM again on the sdb disk on the node1 server.

    • @NehraClasses
      @NehraClasses  12 days ago

      This is not "wrong," but it depends on the use case. Below are some points to consider:
      Advantages:
      1. Centralized Storage Management:
      The LVM on the server allows dynamic resizing and management of logical volumes.
      Makes it easier to manage and allocate storage for multiple clients.
      2. Flexibility for Clients:
      Creating LVMs on the client side provides flexibility to manage volumes independently (e.g., create, resize, or delete logical volumes).
      3. Redundancy and Performance:
      You can manage redundancy or striping (RAID-like) at the client level if needed.
      4. Scalability:
      It allows multiple clients to have isolated LUNs while the server handles the back-end storage.

  • @balaji276
    @balaji276 3 years ago +1

    Thanks brother

  • @ragupcr
    @ragupcr 4 years ago +2

    Hi, I am unable to install the Pacemaker cluster packages. I have created the repo, but still I am unable to install them.
    Can you help me with the repo configuration for the Pacemaker cluster?

    • @NehraClasses
      @NehraClasses  4 years ago +1

      Which flavour are you using, RHEL or CentOS?
      For better support join our Telegram channel 🙏

  • @pankajnara6528
    @pankajnara6528 4 years ago +2

    Nice

  • @podishettivikram8681
    @podishettivikram8681 3 years ago +1

    How do I set this up in a lab environment?

    • @NehraClasses
      @NehraClasses  3 years ago +1

      First create these four machines; you should have sufficient hardware resources to run all of them together.

  • @54_amol_more95
    @54_amol_more95 3 years ago +1

    Thanks sir

  • @supriyomukherjee4030
    @supriyomukherjee4030 2 years ago +1

    Please share the command details, sir.

    • @NehraClasses
      @NehraClasses  2 years ago

      It's available to members on Google Drive.

  • @vipulgajbhiye569
    @vipulgajbhiye569 1 year ago

    The pcsd service is unable to start.
    Error in the PCS GUI.

  • @manjeetgupta8462
    @manjeetgupta8462 3 years ago +1

    Please sir, provide its documentation too.

    • @NehraClasses
      @NehraClasses  3 years ago

      please join our telegram channel for the same.

    • @manjeetgupta8462
      @manjeetgupta8462 3 years ago +1

      @@NehraClasses I am already connected with your Telegram channel. I tried to find the document for the above topic near the same date but did not find it. I am watching your recent Hindi sessions on Linux to brush up my concepts.

    • @NehraClasses
      @NehraClasses  3 years ago

      I will search and let you know once I find it.

  • @mayank7616
    @mayank7616 2 years ago +1

    From where can I get the RPM for that?

  • @gam3955
    @gam3955 3 years ago +1

    Does this run on OEL 8?

    • @NehraClasses
      @NehraClasses  3 years ago

      Yes, it will

    • @gam3955
      @gam3955 3 years ago

      @@NehraClasses thanks for your reply

  • @MrRamu143
    @MrRamu143 4 years ago +1

    Dear Nehra, please provide Documentation also, thanQ

    • @NehraClasses
      @NehraClasses  4 years ago +2

      Will upload soon🙂

    • @NehraClasses
      @NehraClasses  4 years ago +1

      Please check the comments; it's already uploaded there.

    • @MrRamu143
      @MrRamu143 4 years ago +1

      @@NehraClasses Unable to apply (follow) the steps via the video, so please post the steps.

    • @NehraClasses
      @NehraClasses  4 years ago +1

      Please check the comment section of this video; the steps are already provided there. See the pinned comment first 🙏

    • @santoshbolar3485
      @santoshbolar3485 3 years ago

      Sir, your classes are superb; I have learned so many things in Linux. Please upload the pcs clustering document also.

  • @Vinutha-xv2kb
    @Vinutha-xv2kb 1 year ago +1

    What is this course's name, please?

    • @NehraClasses
      @NehraClasses  1 year ago

      Servers Training

    • @Vinutha-xv2kb
      @Vinutha-xv2kb 1 year ago

      @@NehraClasses how can I get this course from Red Hat?

  • @CROAbomb
    @CROAbomb 2 years ago

    What if the storage server stops?

    • @DELIU-b2m
      @DELIU-b2m 1 year ago

      multipath --- active-active (dual-active) storage

  • @NehraClasses
    @NehraClasses  4 years ago

    Discover Shared Storage
    On both cluster nodes, discover the target using the below command.
    iscsiadm -m discovery -t st -p <storage-server-IP>
    Now, log in to the target storage with the below command.
    iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.storage.x8664:sn.eac9425e5e18 -p <storage-server-IP> -l
    systemctl restart iscsid
    systemctl enable iscsid
    [root@node1 ~]# pvcreate /dev/sdb
    [root@node1 ~]# vgcreate vg_apache /dev/sdb
    [root@node1 ~]# lvcreate -n lv_apache -l 100%FREE vg_apache
    [root@node1 ~]# mkfs.ext4 /dev/vg_apache/lv_apache
    [root@node2 ~]# pvscan
    [root@node2 ~]# vgscan
    [root@node2 ~]# lvscan
    Finally, verify that the LVM created on node1 is available on the other node (e.g., node2) using the below commands.
    ls -al /dev/vg_apache/lv_apache
    [root@node2 ~]# lvdisplay /dev/vg_apache/lv_apache
    Add a host entry for each node on all nodes; the cluster nodes use hostnames to communicate with each other.
    vi /etc/hosts
    The host entries will look like this:
    192.168.1.126 node1.nehraclasses.local node1
    192.168.1.119 node2.nehraclasses.local node2
    CentOS 8
    Enable the High Availability repository to download the cluster packages.
    dnf config-manager --set-enabled HighAvailability
    RHEL 8
    Enable the Red Hat subscription on RHEL 8, then enable the High Availability repository to download the cluster packages from Red Hat.
    subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
    dnf install -y pcs fence-agents-all pcp-zeroconf

    Add a firewall rule to allow the high-availability services to communicate between nodes. You can skip this step if firewalld is not enabled on the system.
    firewall-cmd --permanent --add-service=high-availability
    firewall-cmd --add-service=high-availability
    firewall-cmd --reload
    Set a password for the hacluster user; use the same password on all nodes.
    passwd hacluster
    Start the cluster service and enable it to start automatically on system startup.
    systemctl start pcsd
    systemctl enable pcsd
    [root@node1 ~]# pcs host auth node1.nehraclasses.local node2.nehraclasses.local
    [root@node1 ~]# pcs cluster setup nehraclasses_cluster --start node1.nehraclasses.local node2.nehraclasses.local
    Enable the cluster to start at the system startup.
    [root@node1 ~]# pcs cluster enable --all
    [root@node1 ~]# pcs cluster status
    [root@node1 ~]# pcs status
    Fencing Devices
    A fencing device is a hardware device that isolates a problematic node, either by resetting it or by cutting off its access to the shared storage. This demo cluster runs on top of VMware without an external fence device, so STONITH is disabled here; in production, fencing should always be configured.
    [root@node1 ~]# pcs property set stonith-enabled=false
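    For reference, a production cluster on VMware would configure a fence agent instead of disabling STONITH. A hedged sketch using fence_vmware_soap (the vCenter address, credentials, and VM names below are placeholders, not from the video):

    ```shell
    # Hypothetical fencing setup; fence_vmware_soap ships in fence-agents-all.
    pcs stonith create vmfence fence_vmware_soap \
        ipaddr=vcenter.example.com login=fenceuser passwd=secret \
        ssl=1 ssl_insecure=1 \
        pcmk_host_map="node1.nehraclasses.local:node1-vm;node2.nehraclasses.local:node2-vm"
    pcs property set stonith-enabled=true
    pcs stonith status
    ```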
    Install the Apache web server on both cluster nodes.
    dnf install -y httpd
    Edit the configuration file.
    vi /etc/httpd/conf/httpd.conf
    Add the below content at the end of the file on both cluster nodes.
    <Location /server-status>
        SetHandler server-status
        Require local
    </Location>
    Edit the Apache web server's logrotate configuration so that it does not use systemctl, since the cluster resource agent does not manage the service through systemd.
    Change the below line.
    FROM:
    /bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
    TO:
    /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c "PidFile /var/run/httpd.pid" -k graceful > /dev/null 2>/dev/null || true
    [root@node1 ~]# mount /dev/vg_apache/lv_apache /var/www/
    [root@node1 ~]# mkdir /var/www/html
    [root@node1 ~]# mkdir /var/www/cgi-bin
    [root@node1 ~]# mkdir /var/www/error
    [root@node1 ~]# restorecon -R /var/www
    [root@node1 ~]# cat
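    The pinned comment breaks off here. For completeness, the usual next steps in a setup like this group the filesystem, virtual IP, and Apache resources so they start in order and fail over together; a hedged sketch (resource names are illustrative; the IP and paths follow the tables above):

    ```shell
    # Unmount the filesystem first; the cluster will manage the mount from now on.
    umount /var/www

    # Filesystem resource: mounts the shared LV on whichever node runs the group.
    pcs resource create httpd_fs Filesystem \
        device="/dev/vg_apache/lv_apache" directory="/var/www" fstype="ext4" \
        --group apache

    # Floating virtual IP (192.168.1.112 per the table above).
    pcs resource create httpd_vip IPaddr2 ip=192.168.1.112 cidr_netmask=24 \
        --group apache

    # Apache resource, health-checked via the /server-status handler configured earlier.
    pcs resource create httpd_srv apache \
        configfile="/etc/httpd/conf/httpd.conf" \
        statusurl="http://127.0.0.1/server-status" --group apache

    pcs status
    ```

    pcs status should then show all three resources started on one node; stopping that node should move the whole group to the other.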

  • @piyushshipraagarwal996
    @piyushshipraagarwal996 1 year ago +1

    Please share the second notepad file.

    • @NehraClasses
      @NehraClasses  1 year ago

      Already available to members on Google Drive.

  • @TheAdventureAwaitsTV
    @TheAdventureAwaitsTV 1 year ago

    Where is the comment with the steps? I might be blind or cross-eyed; I can't see the documentation.

  • @LINUXGURU08
    @LINUXGURU08 2 years ago

    stonith-enabled=false is not a real-world example; requesting you to share a fencing configuration video.

  • @babugowda1683
    @babugowda1683 1 year ago

    Everything was great, but please stop dragging your screen recorder console over the screen.

  • @ankushjain822
    @ankushjain822 3 years ago +1

    Sir, it would have been better if you had explained it in Hindi.