Hostname                 State
gluster2.itzgeek.local   Connected
localhost                Connected
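Both peers should report Connected before you create any volumes. As a rough sketch (assuming the two-column Hostname/State output shown above), this check can be scripted; `all_peers_connected` is a hypothetical helper, not part of the gluster CLI:

```shell
#!/bin/sh
# Hypothetical helper: succeed only when every row of peer-status style
# output (a header line followed by "Hostname State" rows) ends in
# "Connected".
all_peers_connected() {
    # Skip the header, then fail on any state other than Connected.
    awk 'NR > 1 && $NF != "Connected" { bad = 1 } END { exit bad }'
}

# Sample data mirroring the table above; on a real node you would pipe
# in the live command output instead.
sample='Hostname State
gluster2.itzgeek.local Connected
localhost Connected'

printf '%s\n' "$sample" | all_peers_connected && echo "all peers connected"
```

Running it against the sample above prints "all peers connected"; any other state makes the function return non-zero.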
Since we are going to use a replicated volume, create a volume named gv0 with two replicas.
[root@gluster1 ~]# gluster volume create gv0 replica 2 gluster1.itzgeek.local:/data/gluster/gv0 gluster2.itzgeek.local:/data/gluster/gv0
volume create: gv0: success: please start the volume to access data
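As the success message says, the volume has to be started before clients can access data on it. A minimal command fragment to run on either node:

```shell
# Start the new volume.
gluster volume start gv0

# Confirm it: the info output should show "Status: Started".
gluster volume info gv0
```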
Now, mount the GlusterFS filesystem to /mnt/glusterfs using the following command.
mount -t glusterfs gluster1.itzgeek.local:/gv0 /mnt/glusterfs
Consider adding firewall rules on the gluster nodes (gluster1.itzgeek.local and gluster2.itzgeek.local) to allow connections from the client machine (client.itzgeek.local). Run the command below on both gluster nodes, replacing clientip with the client's IP address.
firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="clientip" accept'
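If several clients need access, the same rich rule can be looped over a list of addresses. This is only a sketch; the IPs below are placeholders, not part of the original setup. Run it on both gluster nodes:

```shell
# Placeholder client addresses; substitute your own.
for ip in 192.168.12.20 192.168.12.21; do
    firewall-cmd --zone=public \
        --add-rich-rule="rule family=\"ipv4\" source address=\"$ip\" accept"
done
# Add --permanent to each call (and then run `firewall-cmd --reload`)
# if the rules should survive a reboot.
```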
You can also verify the GlusterFS mount with the command below.
root@client:~# cat /proc/mounts
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,nosuid,relatime,size=480040k,nr_inodes=120010,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=99844k,mode=755 0 0
/dev/mapper/server--vg-root / ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
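Rather than eyeballing /proc/mounts, a small check can confirm that a fuse.glusterfs entry exists at the expected mount point. A minimal sketch; MOUNTS_FILE is parameterised here only so the logic can be exercised against sample data:

```shell
#!/bin/sh
# Look for a fuse.glusterfs entry at a given mount point.
MOUNTS_FILE=${MOUNTS_FILE:-/proc/mounts}

is_gluster_mounted() {
    # In /proc/mounts, field 2 is the mount point and field 3 the fs type.
    awk -v mp="$1" '$2 == mp && $3 == "fuse.glusterfs" { found = 1 }
                    END { exit !found }' "$MOUNTS_FILE"
}

if is_gluster_mounted /mnt/glusterfs; then
    echo "GlusterFS is mounted on /mnt/glusterfs"
else
    echo "GlusterFS is NOT mounted on /mnt/glusterfs"
fi
```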
Add the entry below to /etc/fstab to mount the volume automatically during system boot.
gluster1.itzgeek.local:/gv0 /mnt/glusterfs glusterfs defaults,_netdev 0 0
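A sketch for adding that line idempotently; the function takes the fstab path as an argument, so it can be tried on a scratch copy before touching the real /etc/fstab:

```shell
#!/bin/sh
# The exact entry from above, kept in one place. _netdev delays the
# mount until networking is available.
ENTRY='gluster1.itzgeek.local:/gv0 /mnt/glusterfs glusterfs defaults,_netdev 0 0'

ensure_fstab_entry() {
    # $1 = path to an fstab file; append the entry only if it is
    # missing, so running this twice never duplicates the line.
    grep -qF "$ENTRY" "$1" || printf '%s\n' "$ENTRY" >> "$1"
}

# On a real client: ensure_fstab_entry /etc/fstab
```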
Data inside the /mnt directory of both nodes will always be the same (replication).
Verify that both GlusterFS nodes have the same data inside /mnt.
[root@gluster1 ~]# ls -l /mnt/
total 0
-rw-r--r--. 1 root root 0 Sep 27 2016 file1
-rw-r--r--. 1 root root 0 Sep 27 2016 file2
Now test the availability of the files by shutting down one of the nodes; you should still see the files we created recently, even though a node is down.
root@client:~# ls -l /mnt/glusterfs/
total 0
-rw-r--r-- 1 root root 0 Sep 28 05:23 file1
-rw-r--r-- 1 root root 0 Sep 28 05:23 file2
root@client:~# ls -l /mnt/glusterfs/
total 0
-rw-r--r-- 1 root root 0 Sep 28 05:23 file1
-rw-r--r-- 1 root root 0 Sep 28 05:23 file2
-rw-r--r-- 1 root root 0 Sep 28 05:28 file3
-rw-r--r-- 1 root root 0 Sep 28 05:28 file4
Since gluster1 is down, all your data is now written to gluster2.itzgeek.local thanks to high availability. Now power on node1 (gluster1.itzgeek.local).
Check /mnt on gluster1.itzgeek.local; you should see all four files in the directory, which confirms that replication is working as expected.
[root@gluster1 ~]# mount -t glusterfs gluster1.itzgeek.local:/gv0 /mnt
[root@gluster1 ~]# ls -l /mnt/
total 0
-rw-r--r--. 1 root root 0 Sep 27 19:53 file1
-rw-r--r--. 1 root root 0 Sep 27 19:53 file2
-rw-r--r--. 1 root root 0 Sep 27 19:58 file3
-rw-r--r--. 1 root root 0 Sep 27 19:58 file4
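The check above can be scripted: compare the file listings of two mounts and report whether they match. A sketch only; the temporary directories below merely stand in for /mnt/glusterfs on the client and /mnt on gluster1:

```shell
#!/bin/sh
# Succeed when two directories contain exactly the same file names.
same_file_names() {
    [ "$(ls "$1" | sort)" = "$(ls "$2" | sort)" ]
}

# Stand-in directories for the two mounts, populated like the listings.
a=$(mktemp -d); b=$(mktemp -d)
touch "$a/file1" "$a/file2" "$a/file3" "$a/file4"
touch "$b/file1" "$b/file2" "$b/file3" "$b/file4"

same_file_names "$a" "$b" && echo "replica listings match"
```

Note this compares names only, not contents; for a stronger check you could compare checksums instead.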
That's all.