GlusterFS can be configured to provide persistent storage and dynamic provisioning for OKD. It can be used both containerized within OKD (Containerized GlusterFS) and non-containerized on its own nodes (External GlusterFS). For the GlusterFS backing storage, we will use EC2 volumes.

In this guide, we will focus on setting up GlusterFS on an existing OpenShift Origin 3.10 setup.

If you need to set up an OpenShift Origin 3.10 environment, please find the relevant instructions here.


There must be at least 3 Compute nodes in the OpenShift 3.10 setup, since GlusterFS requires a minimum of three nodes to form a storage cluster. If you need to add additional Compute nodes to your cluster, please find the steps for adding additional nodes into the cluster here. Otherwise, the GlusterFS installation playbook will fail.
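As a quick pre-check, you can count the Compute nodes from the Master node. This is a sketch that assumes the oc client is available and logged in with cluster-admin privileges:

```shell
# List the nodes carrying the compute role; GlusterFS needs at least 3
oc get nodes -l node-role.kubernetes.io/compute=true

# Count them (should print 3 or more)
oc get nodes -l node-role.kubernetes.io/compute=true --no-headers | wc -l
```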


Step 1: Create EC2 volumes and attach them to the Compute Nodes in the OpenShift Cluster


  • Attach the EC2 volumes to the Compute Nodes only.


  • Verify the output of the lsblk command on the Compute Nodes (notice the additional device xvdg):
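If you prefer the AWS CLI over the console, the volume creation and attachment can be sketched as follows. The availability zone, size, volume type, and the volume and instance IDs below are illustrative examples only; repeat the attach step for each Compute node:

```shell
# Create a 50 GiB gp2 volume in the same AZ as the Compute node (example values)
aws ec2 create-volume --availability-zone us-east-1a --size 50 --volume-type gp2

# Attach it to a Compute node instance as /dev/xvdg (example IDs)
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/xvdg

# On the node, lsblk should now list the new, unformatted xvdg device
lsblk
```

The device must remain raw (no partitions or filesystem); the GlusterFS playbook provisions it via heketi.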


Step 2: Modify the inventory file to add GlusterFS details

Perform the steps below on the Master Node:

  • Take a backup of the existing inventory file (/etc/ansible/hosts).
cp /etc/ansible/hosts /etc/ansible/hosts.orig
  • Modify the inventory file to add details related to GlusterFS storage as shown below:
vi /etc/ansible/hosts
  • Notice the GlusterFS-related additions: the glusterfs entry under [OSEv3:children] and the [glusterfs] host group at the end. Hostnames below are placeholders; substitute the hostnames of your own nodes, and keep the existing [OSEv3:vars] values from your environment (they are elided here):
# Create an OSEv3 group that contains the masters, nodes, etcd, and glusterfs groups
[OSEv3:children]
masters
nodes
etcd
glusterfs

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
# Deployment type: origin or openshift-enterprise
# resolvable domain (for testing you can use external ip of the master node)
# external ip of the master node
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# host group for masters
[masters]
<master-hostname>

# host group for etcd
[etcd]
<master-hostname>

# host group for nodes, includes region info
[nodes]
<master-hostname> openshift_node_group_name='node-config-master'
<compute1-hostname> openshift_node_group_name='node-config-compute'
<compute2-hostname> openshift_node_group_name='node-config-compute'
<compute3-hostname> openshift_node_group_name='node-config-compute'
<infra1-hostname> openshift_node_group_name='node-config-infra'
<infra2-hostname> openshift_node_group_name='node-config-infra'

[glusterfs]
<compute1-hostname> glusterfs_devices='[ "/dev/xvdg" ]'
<compute2-hostname> glusterfs_devices='[ "/dev/xvdg" ]'
<compute3-hostname> glusterfs_devices='[ "/dev/xvdg" ]'
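Before running the playbook, it can help to confirm that Ansible parses the modified inventory correctly (this assumes Ansible is installed on the Master node, as it is on a standard openshift-ansible setup):

```shell
# Dump the parsed inventory; the glusterfs group should list the three Compute nodes
ansible-inventory -i /etc/ansible/hosts --list

# Optionally check SSH connectivity to the glusterfs host group
ansible -i /etc/ansible/hosts glusterfs -m ping
```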

Step 3: Running the OpenShift-Ansible Playbook for setting up GlusterFS

Perform the steps below on the Master Node:

For an installation onto an existing OKD cluster:

ansible-playbook -i /etc/ansible/hosts playbooks/openshift-glusterfs/config.yml
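The playbook path above is relative to the openshift-ansible checkout. On an RPM-based install the playbooks typically live under /usr/share/ansible/openshift-ansible (a common location, though not guaranteed for every install method):

```shell
# Run the GlusterFS playbook from the openshift-ansible directory
cd /usr/share/ansible/openshift-ansible
ansible-playbook -i /etc/ansible/hosts playbooks/openshift-glusterfs/config.yml
```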
Step 4: Validating the Environment
  • Once the installation is completed, ensure that the GlusterFS pods are in a healthy (Running) state using the command:
kubectl get pods --namespace=app-storage


  • Verify GlusterFS Storage Class using the commands:
kubectl get sc

kubectl describe sc glusterfs-storage
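To confirm that dynamic provisioning works end to end, you can create a small test PVC against the glusterfs-storage StorageClass. The claim name and size below are arbitrary examples:

```shell
# Create a 1Gi test claim using the glusterfs-storage StorageClass
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: glusterfs-storage
  resources:
    requests:
      storage: 1Gi
EOF

# The claim should reach Bound status once heketi provisions the volume
kubectl get pvc glusterfs-test-claim

# Clean up the test claim afterwards
kubectl delete pvc glusterfs-test-claim
```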

