GlusterFS server and client on the same node

Tested on Ubuntu 16.04 / GlusterFS 3.8

We’re going to configure a GlusterFS cluster on two nodes, with server and client on both hosts, and without a dedicated partition or disk for storage.

First, add each node’s name and IP address to the /etc/hosts file on both nodes. For security reasons, it’s important to run GlusterFS on a local network, or to use a firewall to drop external traffic:

10.10.0.1 server01
10.10.0.2 server02
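The hosts entries can be added with a small idempotent snippet (a sketch; HOSTS_FILE is hypothetical plumbing so it can be dry-run against a scratch file, on the real nodes it would be /etc/hosts, which expects "IP hostname" order):

```shell
# Sketch: add the cluster entries to the hosts file on both nodes.
# HOSTS_FILE defaults to a scratch file for a dry run; set it to /etc/hosts
# (as root) on the real nodes. The grep keeps the snippet idempotent.
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"

grep -q 'server01' "$HOSTS_FILE" || cat >> "$HOSTS_FILE" <<'EOF'
10.10.0.1 server01
10.10.0.2 server02
EOF

cat "$HOSTS_FILE"
```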

Then add the GlusterFS repository; in this case the stable version was 3.8. Note that apt also needs the PPA’s signing key (running add-apt-repository ppa:gluster/glusterfs-3.8 instead would add both the repository and the key):

~ $ echo 'deb http://ppa.launchpad.net/gluster/glusterfs-3.8/ubuntu xenial main' > /etc/apt/sources.list.d/gluster-glusterfs.list 

Update the package lists, remove any previously installed GlusterFS packages, then install the versions from the PPA:

~ $ apt-get update
~ $ apt-get purge glusterfs-client glusterfs-server glusterfs-common
~ $ apt-get install glusterfs-server glusterfs-client

Start glusterfs daemon:

~ $ /etc/init.d/glusterfs-server start 

Configure the peers. On server01, type:

~ $ gluster peer probe server02 
~ $ gluster peer status

Or if you do it from server02, then:

~ $ gluster peer probe server01 
~ $ gluster peer status

List peers in the pool:

~ $ gluster pool list
UUID					Hostname   	State
bbc3443a-2433-4bba-25a1-0474ec77b571	server02	Connected 
df55a706-a32c-4d5b-a240-b29b6d16024b	localhost  	Connected

Now is the time to create a volume:

~ $ gluster volume create storage-volume replica 2 transport tcp server01:/storage-volume server02:/storage-volume force

gluster volume create: create a volume named storage-volume
replica 2: replicated volume with two replicas, so each node holds a copy of all the data
transport tcp: transport protocol to use
server01:/storage-volume and server02:/storage-volume: the bricks on each node
force: required to create a brick on the root partition (root filesystem)

Start volume:

~ $ gluster volume start storage-volume 

Show the volume status:

~ $ gluster volume status 

Show the volume info:

~ $ gluster volume info 

You can configure many settings to tune performance and security. For example, to permit traffic only between the nodes:

~ $ gluster volume set storage-volume auth.allow 10.10.0.1,10.10.0.2 
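The auth.allow option filters at the Gluster level; the firewall mentioned at the beginning can be sketched with iptables rules like these (a sketch, run as root on each node; 24007 is the glusterd management port, and the brick port range here is an assumption, adjust it to the ports shown by gluster volume status):

```
# Accept GlusterFS traffic only from the local network, drop it from anywhere else
iptables -A INPUT -p tcp -s 10.10.0.0/24 --dport 24007:24008 -j ACCEPT
iptables -A INPUT -p tcp --dport 24007:24008 -j DROP
iptables -A INPUT -p tcp -s 10.10.0.0/24 --dport 49152:49200 -j ACCEPT
iptables -A INPUT -p tcp --dport 49152:49200 -j DROP
```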

Or to improve I/O performance (be careful: flush operations are acknowledged before data is fully written to the bricks, so a crash could leave data inconsistent):

~ $ gluster volume set storage-volume performance.flush-behind on 

Now create a directory on both nodes where the volume will be mounted:

~ $ mkdir /mnt/dir-storage-volume 

And finally mount it on both nodes:

~ $ mount -t glusterfs 127.0.0.1:/storage-volume /mnt/dir-storage-volume 
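To make the mount persistent across reboots, an entry like this can be added to /etc/fstab on both nodes (a sketch; the _netdev option delays mounting until networking is up):

```
127.0.0.1:/storage-volume /mnt/dir-storage-volume glusterfs defaults,_netdev 0 0
```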

Now test the replication: write to the /mnt/dir-storage-volume directory on the first node and check that the changes appear on the second node, and vice versa.
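The replication test can be sketched as follows (MOUNT_DIR is hypothetical plumbing so the commands can also be dry-run outside the cluster; on the real nodes it would be /mnt/dir-storage-volume):

```shell
# Sketch of the replication test. MOUNT_DIR defaults to a scratch directory
# for a dry run; on the real nodes it would be /mnt/dir-storage-volume.
MOUNT_DIR="${MOUNT_DIR:-$(mktemp -d)}"

# On server01: write a test file through the mounted volume
echo 'hello from server01' > "$MOUNT_DIR/test.txt"

# On server02: with replication working, the same file appears with
# the same content
cat "$MOUNT_DIR/test.txt"
```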

TIP: If you need to add more bricks/nodes to extend the size of the volume, first add the bricks and then spread the data with a rebalance. Remember we’re using two replicas, so bricks must be added in pairs:

~ $ gluster volume add-brick storage-volume replica 2 server03:/storage-volume server04:/storage-volume
~ $ gluster volume rebalance storage-volume start
~ $ gluster volume rebalance storage-volume status
