HA Cluster Configuration in RHEL 8 (CentOS 8):
==============================================
A High Availability cluster, also known as a failover cluster or active-passive cluster, is one of the most widely used cluster types in production environments: it keeps services continuously available even if one of the cluster nodes fails.
In technical terms, if the server running the application fails for some reason (e.g., a hardware failure), the cluster software (Pacemaker) restarts the application on a working node.
Failover is not just restarting an application; it is a series of operations associated with it, like mounting filesystems, configuring networks, and starting dependent applications.
Environment:
Here, we will configure a failover cluster with Pacemaker to make the Apache web server a highly available application.
We will configure the Apache web server, a filesystem, and a network (virtual IP) as resources for our cluster.
For the filesystem resource, we will use shared storage coming from an iSCSI server.
CentOS 8 High Availability Cluster Infrastructure
Host Name IP Address OS Purpose
node1.nehraclasses.local 192.168.1.126 CentOS 8 Cluster Node 1
node2.nehraclasses.local 192.168.1.119 CentOS 8 Cluster Node 2
storage.nehraclasses.local 192.168.1.109 CentOS 8 iSCSI Shared Storage
virtualhost.nehraclasses.local 192.168.1.112 CentOS 8 Virtual Cluster IP (Apache)
Shared Storage
Shared storage is one of the critical resources in a high availability cluster, as it stores the data of the running application. All the nodes in the cluster have access to the shared storage, so the active node always sees the latest data.
SAN storage is the most widely used shared storage in production environments. Due to resource constraints, for this demo we will configure the cluster with iSCSI storage.
Install the required packages on the iSCSI storage server.
[root@storage ~]# dnf install -y targetcli lvm2 iscsi-initiator-utils
Let’s list the available disks on the iSCSI server using the below command.
[root@storage ~]# fdisk -l | grep -i sd
Here, we will create an LVM on the iSCSI server to use as shared storage for our cluster nodes.
[root@storage ~]# pvcreate /dev/sdb
[root@storage ~]# vgcreate vg_iscsi /dev/sdb
[root@storage ~]# lvcreate -l 100%FREE -n lv_iscsi vg_iscsi
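Optionally, confirm the new logical volume before exporting it (a quick check, not part of the original walkthrough):
[root@storage ~]# vgs vg_iscsi
[root@storage ~]# lvs vg_iscsi
[root@storage ~]# lvdisplay /dev/vg_iscsi/lv_iscsi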
Collect the iSCSI initiator names of both cluster nodes; run the below command on node1 and on node2.
cat /etc/iscsi/initiatorname.iscsi
Node 1:
InitiatorName=iqn.1994-05.com.redhat:121c93cbad3a
Node 2:
InitiatorName=iqn.1994-05.com.redhat:827e5e8fecb
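These files exist only if the iSCSI initiator utilities are installed on the cluster nodes; if they are missing, install the package first (same package name as used above):
[root@node1 ~]# dnf install -y iscsi-initiator-utils
[root@node2 ~]# dnf install -y iscsi-initiator-utils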
Enter the below command to get an iSCSI CLI for an interactive prompt.
[root@storage ~]# targetcli
Output:
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb49
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/> cd /backstores/block
/backstores/block> create iscsi_shared_storage /dev/vg_iscsi/lv_iscsi
Created block storage object iscsi_shared_storage using /dev/vg_iscsi/lv_iscsi.
/backstores/block> cd /iscsi
/iscsi> create
Created target iqn.2003-01.org.linux-iscsi.storage.x8664:sn.eac9425e5e18.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi> cd iqn.2003-01.org.linux-iscsi.storage.x8664:sn.eac9425e5e18/tpg1/acls
/iscsi/iqn.20...e18/tpg1/acls> create iqn.1994-05.com.redhat:121c93cbad3a
/iscsi/iqn.20...e18/tpg1/acls> create iqn.1994-05.com.redhat:827e5e8fecb
/iscsi/iqn.20...e18/tpg1/acls> cd /iscsi/iqn.2003-01.org.linux-iscsi.storage.x8664:sn.eac9425e5e18/tpg1/luns
/iscsi/iqn.20...e18/tpg1/luns> create /backstores/block/iscsi_shared_storage
Created LUN 0.
Created LUN 0->0 mapping in node ACL iqn.1994-05.com.redhat:827e5e8fecb
Created LUN 0->0 mapping in node ACL iqn.1994-05.com.redhat:121c93cbad3a
/iscsi/iqn.20...e18/tpg1/luns> cd /
/> ls
o- / ......................................................................................................................... [...]
o- backstores .............................................................................................................. [...]
| o- block .................................................................................................. [Storage Objects: 1]
| | o- iscsi_shared_storage .............................................. [/dev/vg_iscsi/lv_iscsi (10.0GiB) write-thru activated]
| | o- alua ................................................................................................... [ALUA Groups: 1]
| | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
| o- fileio ................................................................................................. [Storage Objects: 0]
| o- pscsi .................................................................................................. [Storage Objects: 0]
| o- ramdisk ................................................................................................ [Storage Objects: 0]
o- iscsi ............................................................................................................ [Targets: 1]
| o- iqn.2003-01.org.linux-iscsi.storage.x8664:sn.eac9425e5e18 ......................................................... [TPGs: 1]
| o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
| o- acls .......................................................................................................... [ACLs: 2]
| | o- iqn.1994-05.com.redhat:121c93cbad3a .................................................................. [Mapped LUNs: 1]
| | | o- mapped_lun0 .................................................................. [lun0 block/iscsi_shared_storage (rw)]
| | o- iqn.1994-05.com.redhat:827e5e8fecb ................................................................... [Mapped LUNs: 1]
| | o- mapped_lun0 .................................................................. [lun0 block/iscsi_shared_storage (rw)]
| o- luns .......................................................................................................... [LUNs: 1]
| | o- lun0 ......................................... [block/iscsi_shared_storage (/dev/vg_iscsi/lv_iscsi) (default_tg_pt_gp)]
| o- portals .................................................................................................... [Portals: 1]
| o- 0.0.0.0:3260 ..................................................................................................... [OK]
o- loopback ......................................................................................................... [Targets: 0]
/> saveconfig
Configuration saved to /etc/target/saveconfig.json
/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup/.
Configuration saved to /etc/target/saveconfig.json
Enable and restart the Target service.
[root@storage ~]# systemctl enable target
[root@storage ~]# systemctl restart target
Configure the firewall to allow iSCSI traffic.
[root@storage ~]# firewall-cmd --permanent --add-port=3260/tcp
[root@storage ~]# firewall-cmd --reload
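As an optional sanity check (not part of the original steps), confirm that the target is listening on port 3260 and that the LUN is exported:
[root@storage ~]# ss -tlnp | grep 3260
[root@storage ~]# targetcli ls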
Sir, your classes are superb, I have learned so many things in Linux. Please upload the pcs clustering document also.
Thank you, all the documents are available in our telegram channel.
@@NehraClasses Hi sir, I am a working professional and need help in setting up a Pacemaker cluster in our lab. I am open to paying the fee.
This is the lecture I was waiting for, bro! I hope you will make a series on this with proper explanation rather than just walking through the commands.
Thank you for the video.
Thanks, I will definitely make a series of videos. I uploaded this video on the demand of one of our subscribers; it is public for today only. Tomorrow it will be visible to members only. 🙏
This man's videos are precious. Thank you Nehra
Awesome session ji thanks for the video
Thanks
I got more knowledge on clusters, could you provide the document on it please🙏
Great Bro Superb
Thanks 🤗
Can you make a video on a MySQL DB cluster with Pacemaker?
This video is informative. Could you please create a detailed video on an HA NFS server at production level? Thanks
Hi, the physical and logical volumes are not being displayed on the other node. I have performed all three commands: pvscan, vgscan and lvscan. Please help.
please join our channel platinum membership and join our telegram channel for support.
@@NehraClasses Sir, could you share the documentation? Also, can you show fencing through SBD?
Hey, I have the same issue, can you please help me out?
@@vmalparikh I have the same issue, how did you solve it?
Hello
After setting up all the iSCSI and the hosts files on node 1 and node 2, you said we need to do yum config-manager --set-enabled HighAvailability, but I'm getting "No matching repo to modify: HighAvailability". Any idea?
I'm using RHEL 8.7.
Thanks!
The repository name may be different on RHEL 8; make sure you have an active Red Hat subscription for that. Kindly list the available repos first.
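For example, something like the below should show whether a High Availability repository is available on your system (exact repo names vary by release):
subscription-manager repos --list | grep -i highavailability
dnf repolist all | grep -i highavailability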
16:49 - I don't understand how this is executed.
Super tutorial sir
Thanks 🙏
Thank you so much sir. I have 2 questions. Why does the LVM need to be created again on node1 or node2 when an LVM had already been created on the iSCSI storage server? Why do we need another machine for the virtual IP?
First, the LVM was created on the iSCSI server; however, you can directly use the physical disk instead of LVM if you want and then share it with the clients. This shared storage is block storage and needs to be partitioned and formatted, so it is better to use LVM so that we can extend it later if required.
The virtual IP doesn't require any physical machine unless you want to use NAT to hide the IP address of your actual machine.
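For reference, the virtual IP ends up as just another Pacemaker resource on the cluster nodes; a minimal sketch using the address from this setup (the resource and group names here are illustrative, not from the video):
[root@node1 ~]# pcs resource create httpd_vip ocf:heartbeat:IPaddr2 ip=192.168.1.112 cidr_netmask=24 --group apache_group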
@@NehraClasses Sir, thank you for your quick response. I agree with the advantage of LVM's extend capability, but why create the LVM again on the nodes when we have already created it on the iSCSI server? Doesn't it get reflected on the nodes? Can't we directly format it on the iSCSI server?
16:37 - Getting an error while authenticating the nodes...
Send us the error screenshot on Telegram.
Great, thanks a lot for sharing. Do you have a video or web page on the same setup with Oracle database servers and Pacemaker?
No
Hi Nehra, I followed the steps and all ran successfully. My new disk on the storage VM is "nvme0n2", but I don't see it on node1 and node2 when running lsblk. This is where I am stuck. Any advice?
Check your iSCSI target configuration.
Sir ..please provide the documentation as well for pcs clustering ??
Please join channel membership to access all documents.
Hi. Can you also provide a video detailing how to set up fencing for RHEL?
ok, will upload soon.
Thanks bro.
Hello sir, I am using a RHEL EC2 instance and I am not able to install pacemaker, corosync, and pcs. Are there any other steps to install them on an EC2 instance?
Configure EPEL Repository
Can you make a video on SAP HANA high availability using Pacemaker?
Hi sir, it seems like some steps are missing for the virtual IP server.
No dear, please Check again.
There is a mistake in the video: you had already created an LVM on the iSCSI server and shared that LUN, but then you created an LVM again for the sdb disk on the node1 server.
This is not "wrong," but it depends on the use case. Below are some points to consider:
Advantages:
1. Centralized Storage Management:
The LVM on the server allows dynamic resizing and management of logical volumes.
Makes it easier to manage and allocate storage for multiple clients.
2. Flexibility for Clients:
Creating LVMs on the client side provides flexibility to manage volumes independently (e.g., create, resize, or delete logical volumes; see the sketch after this list).
3. Redundancy and Performance:
You can manage redundancy or striping (RAID-like) at the client level if needed.
4. Scalability:
It allows multiple clients to have isolated LUNs while the server handles the back-end storage.
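As a sketch of the resize point in item 2 (assuming the backing LUN on the storage server has already been grown and rescanned on the node), the client-side volume could then be extended with something like:
[root@node1 ~]# pvresize /dev/sdb
[root@node1 ~]# lvextend -r -l +100%FREE /dev/vg_apache/lv_apache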
Thanks brother
Hi, I am unable to install the Pacemaker cluster packages. I have created the repo, but I am still unable to install them.
Can you help me with the repo configuration for the Pacemaker cluster.
Which flavour are you using? RHEL or CentOS?
For better support join our telegram channel 🙏
Nice
Thanks 😊
How do I set this up in a lab environment?
First create these four machines; you should have sufficient hardware resources to run all of them together.
Thanks sir
Please share the command details, sir.
It's available for members on Google Drive.
The pcsd service is unable to start.
Error in the PCS GUI.
Please sir, provide its documentation as well.
please join our telegram channel for the same.
@@NehraClasses I am already connected with your Telegram channel and tried to find the document for the above topic around the same date, but could not find it. I am watching your recent Hindi sessions on Linux to brush up my concepts.
I will search and let you know once I find it.
From where can I get the RPM for that?
Which RPM?
Will this run on OEL 8?
Yes, it will
@@NehraClasses thanks for your reply
Dear Nehra, please provide the documentation also, thank you.
Will upload soon🙂
Please check comments, it's already uploaded there in comments.
@@NehraClasses Unable to follow the steps via the video, so please post the steps.
Please check the comment section of this video, already provided the steps in comments section. See the pinned comment first 🙏
Sir, your classes are superb, I have learned so many things in Linux. Please upload the pcs clustering document also.
What is the name of this course, please?
Servers Training
@@NehraClasses How can I get this course from Red Hat?
What if the storage server stops?
Multipath, with active-active storage.
Discover Shared Storage
On both cluster nodes, discover the target using the below command (192.168.1.109 is the iSCSI storage server from the table above).
iscsiadm -m discovery -t st -p 192.168.1.109
Now, log in to the target with the below command.
iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.storage.x8664:sn.eac9425e5e18 -p 192.168.1.109 -l
systemctl restart iscsid
systemctl enable iscsid
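At this point the shared LUN should show up as a new disk on both nodes. A quick way to confirm (an optional check; the device name, /dev/sdb below, may differ on your systems):
[root@node1 ~]# iscsiadm -m session
[root@node1 ~]# lsblk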
On node1, create an LVM on the shared iSCSI disk (it appears as /dev/sdb here; the device name may differ on your system) and format it for the Apache data.
[root@node1 ~]# pvcreate /dev/sdb
[root@node1 ~]# vgcreate vg_apache /dev/sdb
[root@node1 ~]# lvcreate -n lv_apache -l 100%FREE vg_apache
[root@node1 ~]# mkfs.ext4 /dev/vg_apache/lv_apache
On node2, rescan the LVM metadata so the new volume group becomes visible.
[root@node2 ~]# pvscan
[root@node2 ~]# vgscan
[root@node2 ~]# lvscan
Finally, verify that the LVM created on node1 is available on the other node (e.g., node2) using the below commands.
[root@node2 ~]# ls -al /dev/vg_apache/lv_apache
[root@node2 ~]# lvdisplay /dev/vg_apache/lv_apache
Make a host entry for each node on all nodes. The cluster uses these hostnames to communicate with each other.
vi /etc/hosts
Host entries will be something like below.
192.168.1.126 node1.nehraclasses.local node1
192.168.1.119 node2.nehraclasses.local node2
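Optionally, confirm that each node can reach the other by its short hostname:
[root@node1 ~]# ping -c 2 node2
[root@node2 ~]# ping -c 2 node1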
CentOS 8
Enable the High Availability repository to download the cluster packages.
dnf config-manager --set-enabled HighAvailability
RHEL 8
Attach a Red Hat subscription on RHEL 8, and then enable the High Availability repository to download the cluster packages from Red Hat.
subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
dnf install -y pcs fence-agents-all pcp-zeroconf
Add a firewall rule to allow the high availability services to communicate between nodes. You can skip this step if firewalld is not enabled on the system.
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --add-service=high-availability
firewall-cmd --reload
Set a password for the hacluster user on both nodes; pcs uses this account to authenticate the nodes.
passwd hacluster
Start the pcsd service on both nodes and enable it to start automatically at system startup.
systemctl start pcsd
systemctl enable pcsd
Authenticate the nodes using the hacluster user and the password set above.
[root@node1 ~]# pcs host auth node1.nehraclasses.local node2.nehraclasses.local
Create and start the cluster.
[root@node1 ~]# pcs cluster setup nehraclasses_cluster --start node1.nehraclasses.local node2.nehraclasses.local
Enable the cluster to start at the system startup.
[root@node1 ~]# pcs cluster enable --all
[root@node1 ~]# pcs cluster status
[root@node1 ~]# pcs status
Fencing Devices
A fencing device is a hardware device that helps to disconnect a problematic node, either by resetting the node or by cutting off its access to the shared storage. This demo cluster runs on top of VMware and doesn't have any external fence device to set up; however, you can follow this guide to set up a fencing device. For the demo we simply disable STONITH (not recommended for production clusters).
[root@node1 ~]# pcs property set stonith-enabled=false
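You can confirm the property took effect (optional):
[root@node1 ~]# pcs property list | grep stonith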
Install the Apache web server on both cluster nodes.
dnf install -y httpd
Edit the Apache configuration file.
vi /etc/httpd/conf/httpd.conf
Add the below content at the end of the file on both cluster nodes; the Pacemaker apache resource agent polls this server-status location to monitor the web server.
<Location /server-status>
    SetHandler server-status
    Require local
</Location>
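After saving the file, a quick syntax check on each node helps catch typos (optional):
[root@node1 ~]# httpd -t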
Edit the Apache web server's logrotate configuration (/etc/logrotate.d/httpd) so that it does not use systemd to reload the service, since the cluster resource does not manage Apache through systemd.
Change the below line.
FROM:
/bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
TO:
/usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c "PidFile /var/run/httpd.pid" -k graceful > /dev/null 2>/dev/null || true
Mount the shared filesystem on node1, create the directory structure Apache expects, and restore the SELinux contexts.
[root@node1 ~]# mount /dev/vg_apache/lv_apache /var/www/
[root@node1 ~]# mkdir /var/www/html
[root@node1 ~]# mkdir /var/www/cgi-bin
[root@node1 ~]# mkdir /var/www/error
[root@node1 ~]# restorecon -R /var/www
[root@node1 ~]# cat
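The cat command above is cut off in the source. In this kind of walkthrough it typically creates a simple test page on the shared filesystem and then unmounts it so that the cluster can manage the mount later; a sketch of what that might look like (the page content is illustrative only):
[root@node1 ~]# cat <<-END >/var/www/html/index.html
Hello! This is a test page served by the Pacemaker cluster.
END
[root@node1 ~]# umount /var/www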
Please share the second notepad file.
It's already available to members on Google Drive.
Where is the comment with the steps? I might be blind or cross-eyed, I can't see the documentation.
stonith-enabled=false is not a real-world example; requesting you to share a fencing configuration video.
Everything was great. But stop playing with your screen recorder console over the screen
Oh really
Sir, it would have been better if you had explained it in Hindi.