
A Better Technical Repository


Linux: Setting up a basic gluster storage cluster

Set up the CentOS server
Update the server
yum -y update

Assign IP

Set the hostname
hostnamectl set-hostname fscluster1 --static

Disable SELinux
nano /etc/sysconfig/selinux
Set SELINUX=disabled (a reboot is required for this to fully take effect).

Disable the Firewall
systemctl status firewalld
systemctl disable firewalld
systemctl stop firewalld
iptables -F


At this point DNS is important. Make certain your servers are in your local DNS server so they can reach each other by both short name and FQDN. Use round-robin DNS (RRDNS) to give the client the ability to talk with all servers in the cluster.
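As a sketch, the DNS records might look like the following in a BIND-style zone file (hostnames follow this guide; the IP addresses are placeholders for your network, and the round-robin name fscluster is an assumed convention, not something Gluster requires):

```
; A records so each node resolves by short name and FQDN
fscluster1  IN  A  192.168.1.11
fscluster2  IN  A  192.168.1.12

; round-robin name that resolves to every node, for clients to mount against
fscluster   IN  A  192.168.1.11
fscluster   IN  A  192.168.1.12
```

With RRDNS, successive lookups of fscluster.domainname.local rotate through the listed addresses, so a client is not tied to a single server for its initial connection.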

Set up repos and get the needed software
cd Downloads/
wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-6.noarch.rpm
rpm -ivh epel-release-7-6.noarch.rpm
yum update -y

Install gluster
yum install glusterfs-server
systemctl start glusterd
systemctl status glusterd

Add glusterd to startup
systemctl enable glusterd

Create the directory structure for mounting the new drives (I am doing two new drives)
mkdir -p /data/brick1/gv0
mkdir -p /data/brick2/gv0

Setup File System
fdisk -l
fdisk /dev/sdb
mkfs.btrfs /dev/sdb1
fdisk /dev/sdc
mkfs.btrfs /dev/sdc1
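If you prefer to avoid the interactive fdisk prompts, the same partitioning can be scripted with parted. This is a sketch assuming /dev/sdb and /dev/sdc are blank disks dedicated to bricks; double-check the device names first, since these commands destroy any existing data:

```
# non-interactive equivalent of the fdisk steps above
parted -s /dev/sdb mklabel gpt mkpart primary btrfs 0% 100%
parted -s /dev/sdc mklabel gpt mkpart primary btrfs 0% 100%
mkfs.btrfs /dev/sdb1
mkfs.btrfs /dev/sdc1
```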

Add new drives to fstab for automounting
echo "/dev/sdb1 /data/brick1 btrfs defaults 0 0" >> /etc/fstab
echo "/dev/sdc1 /data/brick2 btrfs defaults 0 0" >> /etc/fstab
mount -a
df -h
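The echo commands above append blindly, so running them twice leaves duplicate entries in /etc/fstab. A small helper can make the append idempotent; this is a sketch (the FSTAB variable and function name are my own, not from this guide):

```shell
#!/usr/bin/env bash
# Append a mount entry to fstab only if the mount point is not already listed.
# FSTAB is parameterized so the function can be exercised against a test file.
add_fstab_entry() {
    local device=$1 mountpoint=$2 fstype=$3
    local fstab=${FSTAB:-/etc/fstab}
    local entry="$device $mountpoint $fstype defaults 0 0"
    # grep -q: skip the append when this mount point already appears
    if ! grep -q " $mountpoint " "$fstab" 2>/dev/null; then
        echo "$entry" >> "$fstab"
    fi
}
```

Usage: add_fstab_entry /dev/sdb1 /data/brick1 btrfs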

After completing this setup on at least one more server, Gluster can be configured.
From fscluster1
gluster peer probe fscluster2.domainname.local
gluster peer status
gluster pool list

Create volume
gluster volume create datavol1 replica 2 transport tcp fscluster1.domainname.local:/data/brick1/gv0 fscluster2.domainname.local:/data/brick1/gv0 force
gluster volume start datavol1
gluster volume info
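Note that with replica 2, bricks are grouped into replica pairs in the order they are listed on the command line. The four-server Distributed-Replicate example shown later in this guide would come from a create command along these lines (fscluster3 and fscluster4 are hypothetical extra nodes following the same naming pattern):

```
# bricks 1+2 form one mirror pair, bricks 3+4 the other;
# files are then distributed across the two pairs
gluster volume create volume1 replica 2 transport tcp \
  fscluster1.domainname.local:/data/brick1/gv0 \
  fscluster2.domainname.local:/data/brick1/gv0 \
  fscluster3.domainname.local:/data/brick1/gv0 \
  fscluster4.domainname.local:/data/brick1/gv0 force
```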

Create a glusterfs mount point to connect to on each server
mkdir -p /mnt/gv01
Add mount to fstab (be sure to specify the appropriate server name)
echo "fscluster1.domainname.local:datavol1 /mnt/gv01 glusterfs defaults 0 0" >> /etc/fstab
mount -a

Install glusterfs on a client (I am using Ubuntu 16.04)
sudo apt-get install glusterfs-client

Create a mount point on your client
mkdir /mnt/fscluster

Mount the Gluster file system
sudo mount -t glusterfs fscluster.domainname.local:/datavol1 /mnt/fscluster
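If you are not using an RRDNS name, the client fetches the volume layout from the single server named in the mount command, so that server being down at mount time blocks the mount. The glusterfs FUSE client supports a backup volfile server option for this case; a sketch, assuming fscluster2 is a second node in the pool:

```
# fall back to fscluster2 for the volume file if fscluster1 is unreachable
sudo mount -t glusterfs -o backupvolfile-server=fscluster2.domainname.local \
    fscluster1.domainname.local:/datavol1 /mnt/fscluster
```

Once mounted, the client talks to all bricks directly, so this option only matters for the initial volfile fetch.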

Other information:

Possible Connection Issues
gluster peer status – Will show what servers are part of the cluster and their connection status
gluster volume info – Will show what servers and bricks are connected in the cluster. It will also display the cluster's brick configuration.
A four server Distributed/Replicated Example:
Volume Name: volume1
Type: Distributed-Replicate
Volume ID: 6689132a-6221-4fee-ba6d-892b7d0fc7f5
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Brick1: fscluster1.domainname.local:/data/brick1/gv0
Brick2: fscluster2.domainname.local:/data/brick1/gv0
Brick3: fscluster3.domainname.local:/data/brick1/gv0
Brick4: fscluster4.domainname.local:/data/brick1/gv0
Options Reconfigured:
performance.readdir-ahead: on
netstat -apt | grep glusterfsd – The output of this command should show each of your connected servers. If not, restart glusterfsd on each server that is not showing up and try the command again.
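For a quick health check, the peer status output can be counted instead of read by eye. This is a sketch; the "(Connected)" line suffix matches typical `gluster peer status` output, but verify it against your Gluster version:

```shell
#!/usr/bin/env bash
# Count peers reporting Connected in `gluster peer status` output read
# from stdin, e.g.: gluster peer status | count_connected_peers
count_connected_peers() {
    grep -c '(Connected)$'
}
```

If the count is lower than the number of peers you probed, check glusterd on the missing nodes.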

To delete a data volume
gluster volume stop datavol1
gluster volume delete datavol1

If you have problems creating a volume, it is sometimes due to a connection issue. Troubleshoot with telnet.
yum install telnet
telnet (name or ip of server) 24007
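If you would rather not install telnet, bash's built-in /dev/tcp can do the same reachability test. A sketch (the function name is my own; 24007 is glusterd's management port, per the telnet example above):

```shell
#!/usr/bin/env bash
# Return success if a TCP connection to host:port succeeds within 2 seconds.
check_port() {
    local host=$1 port=$2
    timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
}
```

Usage: check_port fscluster2.domainname.local 24007 && echo reachable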

If a data volume is not created correctly and remnants remain, you can use the following commands to clean things up.
(Possible error when recreating a volume: Staging failed on fscluster2.domainname.local. Error: /data/brick1/gv0 is already part of a volume)
Do the following, where $pathtobrick is the brick path:
setfattr -x trusted.glusterfs.volume-id /$pathtobrick
setfattr -x trusted.gfid /$pathtobrick
rm -rf /$pathtobrick/.glusterfs
For example, with this guide's brick layout:
setfattr -x trusted.glusterfs.volume-id /data/brick1/gv0
setfattr -x trusted.gfid /data/brick1/gv0
rm -rf /data/brick1/gv0/.glusterfs
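Since these cleanup commands are destructive and have to be repeated per brick, it can help to generate them for review before running anything. A sketch (the helper name is my own; pipe its output to sh once you have checked it):

```shell
#!/usr/bin/env bash
# Print the brick-cleanup commands for each brick path given as an argument,
# so they can be reviewed (and then piped to `sh` to execute).
print_brick_cleanup() {
    local brick
    for brick in "$@"; do
        echo "setfattr -x trusted.glusterfs.volume-id $brick"
        echo "setfattr -x trusted.gfid $brick"
        echo "rm -rf $brick/.glusterfs"
    done
}
```

Usage: print_brick_cleanup /data/brick1/gv0 /data/brick2/gv0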

By: nighthawk