starts at 12:01
That was a really nice tutorial! Is Mario Kart a discount coupon to buy hardware (2 for 1)?
NFS HA on VMware = totally a must
So the minimum hardware would be 2 x AV15 plus an external Ceph server? Blog post about this?
Is this video for professionals who know the material, or for tech enthusiasts who don't understand the technology?
I was wondering what your nfs.yml ansible-playbook file looks like.
Hey Joe, here is a direct link to that: github.com/45Drives/ceph-ansible-45d/blob/master/nfs.yml - hope this helps!
Link no longer works.
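Since the link is dead, here is a rough sketch of what a ceph-ansible NFS playbook typically looks like - this is NOT the actual 45Drives nfs.yml, and the role names below are the stock upstream ceph-ansible ones, which their fork may rename:
# hypothetical minimal nfs.yml - roles and group name assumed from upstream ceph-ansible
- hosts: nfss
  become: true
  roles:
    - ceph-defaults
    - ceph-facts
    - ceph-handler
    - ceph-common
    - ceph-nfs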
Thanks, what's this web interface for Ceph that you are using? Does Ceph come with its own interface?
Hey, Mateusz,
Ceph has a fully featured web-based management UI called the Ceph Dashboard. The vast majority of your daily administration can be handled from it. The dashboard is part of the Ceph Manager daemon, which has been required for normal operation of a Ceph cluster for the last several releases. We have actually done a full video running through all the features of the Ceph Dashboard in a previous Tech Tip, which you can find here:
th-cam.com/video/RUBMj5ORbe4/w-d-xo.html&t
Thanks!
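If anyone wants to try the dashboard on their own cluster, a minimal sketch (assuming the module isn't already enabled, and that you're on a release where it ships with ceph-mgr):
# enable the dashboard module on the managers
ceph mgr module enable dashboard
# list mgr services to find the URL the dashboard is listening on
ceph mgr services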
@@45Drives Will watch. Thank you!
FAILOVER TIME IS 5 MIN!! Kindly advise.
Hello Team, we are also trying the same setup and are using the GitHub code for ceph-nfs, but what we see is that it takes around 5 minutes to switch over from the active node to the other during failover. In our Ceph cluster (version 15.2.7) we are trying to use NFS HA mode.
Mode: "Active/Passive HA NFS Cluster"
When using the Active/Passive HA config for the NFS server with Corosync/Pacemaker:
1. Configuration is done and we are able to perform failover, but when the active node is tested with a power-off/service-stop, we observe:
1.1: I/O operations get stuck for around 5 minutes and then resume, although the handover from the active node to the standby happens immediately once the node is powered off / the service is stopped.
Ceph version: 15.2.7
NFS-Ganesha version: 3.3
ganesha.conf:
[ansible@cephnode2 ~]$ cat /etc/ganesha/ganesha.conf
# Please do not change this file directly since it is managed by Ansible and will be overwritten
NFS_Core_Param
{
Enable_NLM = false;
Enable_RQUOTA = false;
Protocols = 3,4;
}
EXPORT_DEFAULTS {
Attr_Expiration_Time = 0;
}
CACHEINODE {
Dir_Chunk = 0;
NParts = 1;
Cache_Size = 1;
}
RADOS_URLS {
ceph_conf = '/etc/ceph/ceph.conf';
userid = "admin";
watch_url = "rados://nfs_ganesha/ganesha-export/conf-cephnode2";
}
NFSv4 {
RecoveryBackend = 'rados_ng';
}
RADOS_KV {
ceph_conf = '/etc/ceph/ceph.conf';
userid = "admin";
pool = "nfs_ganesha";
namespace = "ganesha-grace";
nodeid = "cephnode2";
}
%url rados://nfs_ganesha/ganesha-export/conf-cephnode2
LOG {
Facility {
name = FILE;
destination = "/var/log/ganesha/ganesha.log";
enable = active;
}
}
EXPORT
{
Export_id=20235;
Path = "/volumes/hns/conf/bb21b7c7-c663-40e9-ad11-a61441e6f77f";
Pseudo = /conf;
Access_Type = RW;
Protocols = 3,4;
Transports = TCP;
SecType = sys,krb5,krb5i,krb5p;
Squash = No_Root_Squash;
Attr_Expiration_Time = 0;
FSAL {
Name = CEPH;
User_Id = "admin";
}
}
EXPORT
{
Export_id=20236;
Path = "/volumes/hns/opr/138304ca-a70d-4962-9754-b572bce196b6";
Pseudo = /opr;
Access_Type = RW;
Protocols = 3,4;
Transports = TCP;
SecType = sys,krb5,krb5i,krb5p;
Squash = No_Root_Squash;
Attr_Expiration_Time = 0;
FSAL {
Name = CEPH;
User_Id = "admin";
}
}
Any input, please?
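One thing worth checking (just a sketch, not a verified fix for this setup): a long I/O stall after failover is often the NFSv4 grace/lease window rather than Pacemaker itself, and ganesha.conf lets you shorten it. The values below are illustrative only - the defaults are roughly a 60-second lease and a 90-second grace period:
NFSv4 {
RecoveryBackend = 'rados_ng';
# assumed tuning knobs - a shorter lease/grace window can speed up client recovery
Lease_Lifetime = 20; # default is roughly 60 seconds
Grace_Period = 30; # default is roughly 90 seconds
}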
15 seconds of downtime is way too slow for me.
Do you mind telling how much downtime you are able to achieve? Mine is worse - 5 min!! Looking for some config-level changes needed to get it lower.
I have Ceph Nautilus and CephFS configured, and I want NFS on top of CephFS. How do I modify the NFS playbook, considering CephFS is mounted on /mnt/mycephfs?
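In case it helps, here is a rough sketch of one way to do it (assumptions: the export goes through NFS-Ganesha, the CephFS is already kernel-mounted at /mnt/mycephfs, and the export ID and pseudo path are placeholders - the exact ceph-ansible playbook variables may differ):
EXPORT {
Export_Id = 100; # placeholder id
Path = "/mnt/mycephfs"; # already-mounted CephFS path
Pseudo = "/mycephfs"; # placeholder pseudo path
Access_Type = RW;
Protocols = 3,4;
Transports = TCP;
Squash = No_Root_Squash;
FSAL {
Name = VFS; # exports the kernel mount; FSAL CEPH (libcephfs) is the alternative
}
}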