Rage Against the Shell

Linux tips and other things…

Glusterfs server and client at the same node

Posted on May 4, 2017 - May 23, 2020 by Mr. Reboot

Tested in Ubuntu 16 / Glusterfs 3.8

We’re going to configure a glusterfs cluster on two nodes with server and client on both hosts and without a dedicated partition or disk for storage.

First add each node's IP address and hostname to the /etc/hosts file on both nodes. For security reasons, it's important to run glusterfs on a local network or to use a firewall to drop external traffic:

10.10.0.1 server01
10.10.0.2 server02
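Once the entries are in place, a quick sanity check confirms that each hostname resolves through /etc/hosts and that the nodes can reach each other (a sketch using the example hostnames above; run it on both nodes):

```shell
# Verify that the peer hostnames resolve (getent consults /etc/hosts)
getent hosts server01
getent hosts server02

# Confirm basic connectivity to the other node
ping -c 1 server02
```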

Then add the glusterfs repository; in this case the stable version was 3.8:

~ $ echo 'deb http://ppa.launchpad.net/gluster/glusterfs-3.8/ubuntu xenial main' > /etc/apt/sources.list.d/gluster-glusterfs.list 

Update the package lists and install the needed packages:

~ $ apt-get update
~ $ apt-get install glusterfs-client glusterfs-server glusterfs-common 
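As an alternative to writing the sources list entry by hand, the same PPA can be added with add-apt-repository, which also imports the repository's signing key (a sketch; the PPA name matches the deb line above):

```shell
# add-apt-repository lives in this package on Ubuntu 16.04
apt-get install -y software-properties-common

# Add the GlusterFS 3.8 PPA (imports the key) and refresh the index
add-apt-repository -y ppa:gluster/glusterfs-3.8
apt-get update
```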

Start the glusterfs daemon:

~ $ /etc/init.d/glusterfs-server start 
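On Ubuntu 16.04 the daemon is managed by systemd, so you'll probably also want it to start at boot (a sketch; the unit name glusterfs-server matches the init script above, but on other distributions it may be called glusterd):

```shell
# Start the daemon, enable it at boot, and check that it's running
systemctl start glusterfs-server
systemctl enable glusterfs-server
systemctl status glusterfs-server
```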

Configure the peers. On server01, type:

~ $ gluster peer probe server02 
~ $ gluster peer status

Or if you do it from server02, then:

~ $ gluster peer probe server01 
~ $ gluster peer status

List peers in the pool:

~ $ gluster pool list
UUID					Hostname   	State
bbc3443a-2433-4bba-25a1-0474ec77b571	server02	Connected 
df55a706-a32c-4d5b-a240-b29b6d16024b	localhost  	Connected

Now is the time to create a volume:

~ $ gluster volume create storage-volume replica 2 transport tcp server01:/storage-volume server02:/storage-volume force

gluster volume create: create a volume named storage-volume
replica 2: replicated volume with two replicas; each node holds a copy of all the data
transport tcp: transport protocol to use
server01:/storage-volume and server02:/storage-volume: the node bricks
force: force creation of the volume on the root partition (root filesystem)

Start the volume:

~ $ gluster volume start storage-volume 

Show the volume status:

~ $ gluster volume status 

Show the volume info:

~ $ gluster volume info 

You can configure a lot of settings to tune performance and security; for example, to permit traffic only between the nodes:

~ $ gluster volume set storage-volume auth.allow 10.10.0.1,10.10.0.2 

Or to improve I/O performance (be careful, because it could lead to inconsistency on failure):

~ $ gluster volume set storage-volume performance.flush-behind on 
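You can inspect an option's current value with gluster volume get, and undo a tuning change with gluster volume reset (both acting on the volume created above):

```shell
# Show the current value of a single option
gluster volume get storage-volume performance.flush-behind

# Revert the option to its default value
gluster volume reset storage-volume performance.flush-behind
```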

Now create a directory on which to mount the volume:

~ $ mkdir /mnt/dir-storage-volume 

And finally mount it on both nodes:

~ $ mount -t glusterfs 127.0.0.1:/storage-volume /mnt/dir-storage-volume 
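To make the mount survive a reboot, you can also add it to /etc/fstab on both nodes (a sketch; the _netdev option delays mounting until networking is up):

```shell
# Append the mount to /etc/fstab so the volume mounts at boot
echo '127.0.0.1:/storage-volume /mnt/dir-storage-volume glusterfs defaults,_netdev 0 0' >> /etc/fstab

# Verify the entry by mounting everything listed in fstab
mount -a
```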

Now test the replication: write to the /mnt/dir-storage-volume directory on the first node and check that the changes are replicated to the second node, and vice versa.
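A minimal smoke test could look like this (run the first command on server01 and the second on server02; the file name is arbitrary):

```shell
# On server01: write a test file into the mounted volume
echo "hello from server01" > /mnt/dir-storage-volume/replication-test.txt

# On server02: the same file should appear with the same content
cat /mnt/dir-storage-volume/replication-test.txt
```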

TIP: If you need to add more bricks/nodes to extend the size of the volume, first add the bricks and then rebalance the volume; remember we're using two replicas:

~ $ gluster volume add-brick storage-volume replica 2 server03:/storage-volume server04:/storage-volume 
~ $ gluster volume rebalance storage-volume start
~ $ gluster volume rebalance storage-volume status