
How to forcibly delete a Gluster volume


This is for exceptional cases: you want to remove a volume from your Gluster cluster, but the other peers are down for good, so you can't delete the volume or even detach the dead peers.

root@DEV-YK[/var/db/system/glusterd]# gluster peer status
Number of Peers: 2

Hostname: 192.168.90.9
Uuid: f0a4aab8-701f-4b6a-8e68-9693d2519b4f
State: Peer in Cluster (Disconnected)
Other names:
192.168.3.14

Hostname: 192.168.90.12
Uuid: 7c693233-25cb-487d-99e4-c0bb60365c34
State: Peer in Cluster (Disconnected)
Other names:
192.168.3.16
root@DEV-YK[/var/db/system/glusterd]# gluster vol info   
 
Volume Name: ctdb_shared_vol
Type: Distribute
Volume ID: 4e6e5bff-b937-48a1-be65-775f544d61e0
Status: Stopped
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.90.11:/var/db/system/ctdb_shared_vol
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
 
Volume Name: replicated-raidz1
Type: Distribute
Volume ID: fdf1807f-a9f3-40b5-bb6d-6d6c51f1ce38
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.90.11:/mnt/cluster-raidz1/.glusterfs/replicated-raidz1/brick0
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
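
Some background on why the brute-force fix further down works: glusterd persists the trusted pool and the volume definitions as plain files under its working directory (here /var/db/system/glusterd; the upstream default on most Linux distributions is /var/lib/glusterd). A quick look at that layout, assuming the paths from this system:

# one file per remote peer, named after its UUID
ls /var/db/system/glusterd/peers
# one subdirectory per volume definition
ls /var/db/system/glusterd/vols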

I can't delete the volume or even detach any peers:

root@DEV-YK[~]# gluster vol delete replicated-raidz1
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: replicated-raidz1: failed: Volume replicated-raidz1 has been started.Volume needs to be stopped before deletion.
root@DEV-YK[~]# gluster vol stop replicated-raidz1     
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: replicated-raidz1: success
root@DEV-YK[~]# gluster vol delete replicated-raidz1
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: replicated-raidz1: failed: Some of the peers are down
root@DEV-YK[~]# gluster peer status
Number of Peers: 2

Hostname: 192.168.90.9
Uuid: f0a4aab8-701f-4b6a-8e68-9693d2519b4f
State: Peer in Cluster (Disconnected)

Hostname: 192.168.90.11
Uuid: c5187d1b-cf86-4d96-9f37-f78c51f22aab
State: Peer in Cluster (Disconnected)
root@DEV-YK[~]# gluster peer detach 192.168.90.11 
All clients mounted through the peer which is getting detached need to be remounted using one of the other active peers in the trusted storage pool to ensure client gets notification on any changes done on the gluster configuration and if the same has been done do you want to proceed? (y/n) y
peer detach: failed: One of the peers is probably down. Check with 'peer status'
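
Both refusals come from the same place: glusterd will not delete a volume or detach a peer while other members of the trusted pool are unreachable, essentially to keep the configuration consistent across nodes. If you want to see the rejection from the daemon's side, its log (the usual default location; adjust if your install logs elsewhere) is worth a look:

# glusterd's own log, at its usual default path
tail -n 30 /var/log/glusterfs/glusterd.log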

The way around it is to remove glusterd's on-disk state by hand. Try this.

root@DEV-YK[~]# rm -rfv /var/db/system/glusterd/peers/*
root@DEV-YK[~]# rm -rfv /var/db/system/glusterd/vols/*
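
Those two commands erase this node's record of every peer and every volume, with no undo. If you want an escape hatch, archive the state directory before deleting it outright; this is my addition, not part of the original steps, and the backup filename is arbitrary:

# keep a copy of the old state in case something must be recovered later
tar czf /root/glusterd-state-$(date +%F).tgz /var/db/system/glusterd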

Then restart the glusterd service.

root@DEV-YK[~]# systemctl restart glusterd         
root@DEV-YK[~]# gluster vol info
No volumes present
root@DEV-YK[~]# gluster peer status
Number of Peers: 0

Voilà.
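
One caveat: this only removes glusterd's metadata. The brick directories and their data (for example the brick0 path from the vol info output above) are still on disk, and they still carry Gluster's volume markers. If you later try to reuse such a path for a new volume, glusterd will complain that it is already part of a volume; the usual cleanup is to strip the markers, roughly like this (brick path taken from this post):

# remove the xattrs and internal directory that tie this path to the old volume
brick=/mnt/cluster-raidz1/.glusterfs/replicated-raidz1/brick0
setfattr -x trusted.glusterfs.volume-id "$brick"
setfattr -x trusted.gfid "$brick"
rm -rf "$brick/.glusterfs"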