Gluster Upgrade Lab
Introduction⌗
This lab walks through the setup of a small Gluster 3.8.4 cluster, provisioning of some test scenarios, and finally the upgrade to the latest stable version of Gluster at the time of writing (6.0). This is only a small lab and does not cover other systems included in RHGS, such as NFS-Ganesha, as the clients all use the Gluster FUSE driver. Full upgrade instructions for RHGS can be found here.
Requirements⌗
This lab requires a libvirt host with at least:
- Memory >= 30GB
- vCPU >= 14
- Disk >= 200GB
VMs created with this lab
Hostname | vCPU | Memory | OS Disk | Brick Size | Brick Count | Optional |
---|---|---|---|---|---|---|
gluster01 | 4 | 8GB | 40GB | 10GB | 2 | No |
gluster02 | 4 | 8GB | 40GB | 10GB | 2 | No |
gluster03 | 4 | 8GB | 40GB | 10GB | 2 | No |
gluster-client | 2 | 4GB | 9GB (from Gluster) | - | - | Yes |
Setup⌗
VMs⌗
Bash script to set up 3 VMs, each with 2 extra 10GB drives, on which to install Gluster. This lab uses the RHEL 7.9 image.
- Demo
- Get the IP addresses from libvirt DHCP
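A minimal sketch of the VM creation and DHCP lookup, assuming hypothetical paths (a local rhel-7.9 base image under /var/lib/libvirt/images) and the default libvirt network; adjust names and sizes to taste:

```bash
# Sketch only: clones a RHEL 7.9 base image and adds two 10GB data disks per node
BASE=/var/lib/libvirt/images/rhel-7.9-base.qcow2   # assumed base image path
DIR=/var/lib/libvirt/images

for vm in gluster01 gluster02 gluster03; do
  # OS disk cloned from the base image and grown to 40GB
  cp "$BASE" "$DIR/${vm}.qcow2"
  qemu-img resize "$DIR/${vm}.qcow2" 40G
  # Two extra 10GB drives that will become the Gluster bricks
  qemu-img create -f qcow2 "$DIR/${vm}-brick1.qcow2" 10G
  qemu-img create -f qcow2 "$DIR/${vm}-brick2.qcow2" 10G
  virt-install --name "$vm" --memory 8192 --vcpus 4 \
    --disk "$DIR/${vm}.qcow2" \
    --disk "$DIR/${vm}-brick1.qcow2" \
    --disk "$DIR/${vm}-brick2.qcow2" \
    --import --os-variant rhel7.9 --network network=default --noautoconsole
done

# IP addresses handed out by libvirt DHCP
virsh net-dhcp-leases default
```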
Requirements⌗
- Simple Ansible inventory file
- Ensure Chrony is installed, started and enabled
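A minimal ad-hoc sketch, assuming an INI inventory using the host names from the table above (the file name and grouping are illustrative):

```bash
# Hypothetical inventory file; swap in the IPs gathered from the DHCP leases if DNS is not set up
cat > inventory <<'EOF'
[gluster]
gluster01
gluster02
gluster03

[client]
gluster-client
EOF

# Chrony installed, started and enabled on the Gluster nodes
ansible -i inventory gluster -b -m yum -a "name=chrony state=present"
ansible -i inventory gluster -b -m systemd -a "name=chronyd state=started enabled=yes"
```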
Subscriptions⌗
- Attach entitlement pools to the system
- Enable the RHEL and Gluster channel
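Roughly the following, assuming RHEL 7 / RHGS 3 channel names; the pool ID and exact repo names depend on the subscription in use:

```bash
# Register, attach an entitlement pool, and enable only the repos needed for this lab
subscription-manager register
subscription-manager attach --pool=<POOL_ID>
subscription-manager repos --disable='*' \
  --enable=rhel-7-server-rpms \
  --enable=rh-gluster-3-for-rhel-7-server-rpms
```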
Install Gluster 3.8.4⌗
- Install the packages
- Start the Gluster service
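A sketch of the install on each node; an RHGS system may pull the packages in via the redhat-storage-server meta-package instead of the upstream-style glusterfs-server name used here:

```bash
# Install and start Gluster, then confirm the version from the RHGS channel (3.8.4)
yum -y install glusterfs-server
systemctl start glusterd
gluster --version
```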
Setup Gluster Bricks⌗
- Configure drive partitions (Ansible drive partitions)
Ansible drive partitions⌗
- Create PVs from the drives (Ansible version here)
- Create VGs from the PVs
- Create LVs on the VGs
- Create an XFS file system on each LV
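A shell equivalent of the steps above, assuming the two extra drives appear as /dev/vdb and /dev/vdc and using the whole disks rather than partitions; the VG/LV names are illustrative and are reused in the later sketches:

```bash
# LVM stack for the two bricks on one node
pvcreate /dev/vdb /dev/vdc
vgcreate vg_brick1 /dev/vdb
vgcreate vg_brick2 /dev/vdc
lvcreate -n lv_brick1 -l 100%FREE vg_brick1
lvcreate -n lv_brick2 -l 100%FREE vg_brick2

# XFS with a 512-byte inode size, as commonly recommended for Gluster bricks
mkfs.xfs -i size=512 /dev/vg_brick1/lv_brick1
mkfs.xfs -i size=512 /dev/vg_brick2/lv_brick2
```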
Ansible PV creation⌗
- Create the brick mount directories on each node (Ansible create mount point)
Ansible create mountpoint⌗
- Add bricks to fstab (Ansible mount)
- Mount bricks
- Check bricks have mounted
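A shell sketch of the mount steps, including the mount-directory creation from the earlier step; the mount paths /gluster/brick1 and /gluster/brick2 are assumptions that follow the LVM names above:

```bash
# Create mount points, add fstab entries, mount, and verify
mkdir -p /gluster/brick1 /gluster/brick2
echo '/dev/vg_brick1/lv_brick1 /gluster/brick1 xfs defaults 0 0' >> /etc/fstab
echo '/dev/vg_brick2/lv_brick2 /gluster/brick2 xfs defaults 0 0' >> /etc/fstab
mount -a
df -hT /gluster/brick1 /gluster/brick2
```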
Ansible mount⌗
- Create a `brick` directory on each brick (Ansible create brick directory)
- Repeat on the remaining two Gluster nodes.
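In shell form, run on all three nodes (brick mount paths assumed from the sketches above):

```bash
# One brick directory per mounted file system
mkdir -p /gluster/brick1/brick /gluster/brick2/brick
```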
Ansible create brick directory⌗
Create Gluster Volumes⌗
- Start Gluster on all nodes
- From one node, probe the other two to bring them into the cluster
- Create two replicated volumes, each replica 3 using one brick from each of the three nodes
- Check the volumes have been created
- Start the volumes
- Check status of the volumes (rep-vol-1 example)
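A sketch of the steps above, run from gluster01, using the rep-vol-1/rep-vol-2 volume names from this lab and the assumed brick paths from the earlier sketches:

```bash
systemctl start glusterd            # on all three nodes first

# Bring the other two nodes into the cluster
gluster peer probe gluster02
gluster peer probe gluster03
gluster peer status

# Two replica-3 volumes, one brick per node each
gluster volume create rep-vol-1 replica 3 \
  gluster01:/gluster/brick1/brick gluster02:/gluster/brick1/brick gluster03:/gluster/brick1/brick
gluster volume create rep-vol-2 replica 3 \
  gluster01:/gluster/brick2/brick gluster02:/gluster/brick2/brick gluster03:/gluster/brick2/brick

gluster volume list
gluster volume start rep-vol-1
gluster volume start rep-vol-2
gluster volume status rep-vol-1
```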
Test Setups⌗
Add Data to the Volumes⌗
- Install glusterfs-fuse on client machine
Make sure the client can resolve the Gluster host names.
- Mount first Gluster volume
- Copy a few files to the mount and md5 sum them
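For example, on the client (the server name in the mount only needs to resolve to one of the nodes; the files copied in are arbitrary):

```bash
# Mount the first volume over FUSE and record checksums to compare during the upgrade
yum -y install glusterfs-fuse
mkdir -p /mnt/rep-vol-1
mount -t glusterfs gluster01:/rep-vol-1 /mnt/rep-vol-1

cp /etc/services /etc/hosts /mnt/rep-vol-1/
md5sum /mnt/rep-vol-1/* | tee /root/rep-vol-1.md5
```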
Use Gluster for a VM⌗
Hosts file entries may be required in order to mount the volume
A netfs pool is used, as the glusterfs client on the libvirt host is not backward compatible with Gluster 3.8.4
- Set permissions on the Gluster pool
From a gluster node
- Make mount directory
From the libvirt host
- Define XML for a Gluster storage pool
- Create the storage pool
- Confirm pool has started
- Create VM OS disk on Gluster
- Resize the base OS image into the client disk image
- Customise VM
- Create VM
SELinux on the libvirt host may have issues starting the VM from a fuse mount. For this example, SELinux has been set to permissive
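A sketch of the steps above, assuming rep-vol-2 backs the pool; the pool name, mount path, base image location and credentials are all hypothetical:

```bash
# On a gluster node: let qemu (uid/gid 107 on RHEL) own images on the VM volume
gluster volume set rep-vol-2 storage.owner-uid 107
gluster volume set rep-vol-2 storage.owner-gid 107

# On the libvirt host: mount directory and a netfs/glusterfs storage pool
mkdir -p /virt/gluster-pool
cat > gluster-pool.xml <<'EOF'
<pool type='netfs'>
  <name>gluster-pool</name>
  <source>
    <host name='gluster01.gluster.lab'/>
    <dir path='/rep-vol-2'/>
    <format type='glusterfs'/>
  </source>
  <target>
    <path>/virt/gluster-pool</path>
  </target>
</pool>
EOF
virsh pool-define gluster-pool.xml
virsh pool-start gluster-pool
virsh pool-list --all

# Client VM OS disk on the pool, filled from the base image, customised and started
qemu-img create -f qcow2 /virt/gluster-pool/gluster-client.qcow2 9G
virt-resize --expand /dev/sda1 /var/lib/libvirt/images/rhel-7.9-base.qcow2 \
  /virt/gluster-pool/gluster-client.qcow2
virt-customize -a /virt/gluster-pool/gluster-client.qcow2 \
  --hostname gluster-client.gluster.lab --root-password password:changeme
virt-install --name gluster-client --memory 4096 --vcpus 2 \
  --disk /virt/gluster-pool/gluster-client.qcow2 --import \
  --os-variant rhel7.9 --network network=default --noautoconsole
```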
Install WordPress for testing⌗
- Install Podman on the client node
- Create the WordPress and MariaDB containers into a Pod
- Set up WordPress with WP-CLI
Site should now be available from http://gluster-client.gluster.lab:8080 assuming DNS or hosts file entries are in place
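One possible layout for the pod, assuming the default Docker Hub images; the container names, passwords and volume wiring are illustrative only:

```bash
# WordPress + MariaDB in a single pod, published on port 8080
podman pod create --name wp-pod -p 8080:80

podman run -d --pod wp-pod --name wp-db \
  -e MYSQL_ROOT_PASSWORD=rootpass -e MYSQL_DATABASE=wordpress \
  -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=wppass \
  docker.io/library/mariadb:10.5

podman run -d --pod wp-pod --name wp-app \
  -e WORDPRESS_DB_HOST=127.0.0.1 -e WORDPRESS_DB_NAME=wordpress \
  -e WORDPRESS_DB_USER=wordpress -e WORDPRESS_DB_PASSWORD=wppass \
  docker.io/library/wordpress:5

# WP-CLI container sharing the pod network and the WordPress files
podman run -d --pod wp-pod --name wp-cli --volumes-from wp-app \
  -e WORDPRESS_DB_HOST=127.0.0.1 -e WORDPRESS_DB_USER=wordpress \
  -e WORDPRESS_DB_PASSWORD=wppass \
  docker.io/library/wordpress:cli tail -f /dev/null

# Once the containers are up, install the site
podman exec wp-cli wp core install --url=http://gluster-client.gluster.lab:8080 \
  --title="Gluster Lab" --admin_user=admin --admin_password=adminpass \
  --admin_email=admin@gluster.lab
```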
From the wp-cli container, generate a handful of posts to check the database is working during the upgrade
- If continual database activity is required, something like the following can be used to add a comment to a random post every 30 seconds:
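A hedged sketch, reusing the hypothetical wp-cli container name from the pod example above:

```bash
# Add a comment to a random published post every 30 seconds
while true; do
  POST_ID=$(podman exec wp-cli wp post list --post_status=publish --field=ID | shuf -n 1)
  podman exec wp-cli wp comment create \
    --comment_post_ID="${POST_ID}" \
    --comment_content="upgrade check $(date)"
  sleep 30
done
```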
Upgrade to Gluster 6⌗
Following the guides documented here
During the upgrade, ensure the client mount from the Add Data to the Volumes section is still working. If a VM has been set up as well, keep checking it too. A consolidated command sketch for the per-node steps follows the list below.
- From the first Gluster node check the peer and volume status
- Check there are no pending self-heals on either volume.
- Backup the following directories if they exist
- /var/lib/glusterd
- /etc/samba
- /etc/ctdb
- /etc/glusterfs
- /var/lib/samba
- /var/lib/ctdb
- /var/run/gluster/shared_storage/nfs-ganesha
- Stop Gluster on the node and ensure it has stopped
- Run yum update on the node
- Disable the Gluster systemd unit so it does not start automatically after the reboot, until the node has been checked and is healthy
- Reboot the node
- When the node is back up, check the version and brick mount points
- Start Gluster
- Check Gluster volume status
- Start a self heal on the two volumes
- Check heal info
If there is a running VM on the rep-vol-2 volume, expect a heal operation to be ongoing for a while
- Re-enable Gluster on the node
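The consolidated per-node sketch referenced above; volume names, brick mounts and backup paths follow this lab and the earlier sketches, and only the backup directories that exist on a plain Gluster node are included:

```bash
# Pre-checks, run from the first node
gluster peer status
gluster volume status
gluster volume heal rep-vol-1 info
gluster volume heal rep-vol-2 info

# On the node being upgraded: back up whichever of the listed directories exist
tar -czf /root/gluster-backup-$(hostname -s)-$(date +%F).tar.gz \
  /var/lib/glusterd /etc/glusterfs 2>/dev/null

# Stop Gluster and confirm nothing is left running
systemctl stop glusterd
pkill glusterfs; pkill glusterfsd
pgrep -l gluster || echo "gluster fully stopped"

yum -y update
systemctl disable glusterd
reboot

# After the reboot
gluster --version
df -hT /gluster/brick1 /gluster/brick2
systemctl start glusterd
gluster volume status

# Trigger and watch self-heal on both volumes
gluster volume heal rep-vol-1
gluster volume heal rep-vol-2
gluster volume heal rep-vol-1 info
gluster volume heal rep-vol-2 info

systemctl enable glusterd
```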
Once all nodes have been upgraded, set the `op-version` on all volumes
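With the whole cluster on Gluster 6, the op-version can be checked and raised; take the exact number from cluster.max-op-version rather than the example value shown here:

```bash
gluster volume get all cluster.max-op-version
gluster volume set all cluster.op-version 60000   # use the value reported above
```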
Summary⌗
This is a very basic upgrade lab; in a production environment there is likely to be far more I/O against the bricks and multiple other services running to support the environment. This page covers those other services in more detail and should be read before upgrading an important environment.
The upgrade in the lab was managed without a break in I/O to the running VM.
Cleanup⌗
This teardown script should remove all the VMs and their associated storage. It will remove the gluster pool and client VM first.
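A teardown sketch using the names from this lab; the actual script may differ:

```bash
# Remove the client VM and the Gluster-backed storage pool first
virsh destroy gluster-client
virsh undefine gluster-client
virsh pool-destroy gluster-pool
virsh pool-undefine gluster-pool

# Then the Gluster nodes and their disks
for vm in gluster01 gluster02 gluster03; do
  virsh destroy "$vm"
  virsh undefine "$vm" --remove-all-storage
done
```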