
GlusterCloud

Running ownCloud serving from a replicated gluster volume

Version 1.0
June 2013

Table of Contents
About this document
Overview
So how was it built?
Install & Configure GlusterFS
    Step 01 : Install Fedora on XFVA001
    Step 02 : Clone the System
    Step 03 : Prepare xfva001
    Step 04 : Prepare xfva002
    Step 05 : Install GlusterFS
    Step 06 : Setup Trusted Storage Pool
    Step 07 : Create the Gluster Volume
    Step 08 : Test it !
Setup the ownCloud server
    Step 01 : Install basic OS
    Step 02 : Install GlusterFS Client
    Step 03 : Create Mountpoint & edit fstab
    Step 04 : Install OwnCloud
    Step 05 : Test it!

About this document


My first real exposure to Enterprise Storage was back in 2001, right after Y2K did not hit us and in the middle of the whole shebang regarding some new monetary unit being introduced in Europe. At the time I was what they would call 'the Spaceman'. Storage was my thing! If you accidentally (or stupidly) had deleted one of your most precious files, I would come to the rescue and restore that file for you. Not that you could not have used some tooling to restore that file yourself, but hey... I don't judge :). Besides that I also worked on various D/R setups (from PTAM all the way up to synchronous multi-site data mirroring).

Lately my attention has been drawn to this thing called glusterfs. After reading about it, I realized this would be a very nice way to 'emulate' the awesome storage devices I was working on when I first really started doing storage: devices with a fairly easy user interface where data storage options could be configured, with synchronous and asynchronous mirror options as well as groups of volumes that would always be kept consistent (keep in mind the 'consistency over availability' dogma can be quite frustrating for end-users/clients at times...).

So I set out to devise a good use case for such an awesome storage environment running on my own little data-center. In lieu of the current mumbo-jumbo regarding PRISM, I thought it would be nice to set up a 'private cloud' storage facility :) ownCloud was the prime candidate for such an endeavour. Not that we would be using the word 'Endeavour', because for us Dutch people that really sounds like 'DuckFood' (eendevoer), but that's another story.

I've managed to set up an environment doing just that: ownCloud, accessible via the internet, storing its (his? her?) data on a glusterfs filesystem spread out over two servers. This MINI HOWTO describes how to build such an environment yourself. Happy glusterclouding!

henrikuiper@zdevops.com @henrikuiper, @zdevops

Overview

We used two servers (xfva001 and xfva002) which both run Fedora 18 and have the glusterfs-fuse and glusterfs-server packages installed (version 3.1.1). During tests the glusterfs-client package was needed too. On one of these servers (we'll call it the primary server) a 'replica 2' volume was built. This volume (gv0) consists of two bricks, each of them 'physically attached' to one of these servers. The glusterfs volume thus created will have its data stored on both of these bricks. In case of failure of one of the bricks or servers (or any other hardware in between), data will continue to be available to the client (the ownCloud server in our example) via the other brick on the other server. All of this is done transparently to the client. The client will expose its services (the ownCloud application) via a reverse proxy (pound), so end-users can make use of this service via 'the interwebs'.

So how was it built?


A great deal of the configuration as shown on the previous page runs as KVM guests. All but the 'pfsense' server are actually running on a single AMD Athlon(tm) 64 X2 Dual Core Processor 4400+ with a mere 3GB of memory and a 300GB SATA drive.

First of all a virtual server, based on a Fedora image, was built from scratch on the KVM machine. This image was then semi-prepped and cloned twice. The original 'clean install' was held back for future use. The two clones were then augmented with the latest and greatest version of glusterfs (version 3.1.1, gently reminding us of the times when we were happily chugging away on 'Windows 3.11'). The two clones then made a peer-to-peer connection, and a volume was created that would be replicated (mirrored, distributed, redundantly stored, or whatever nomenclature you prefer) between the two clones.

Now all we needed was a server that would host the ownCloud application, so another server was created. To keep things different (we were already running SLES and Fedora), this virtual server was based on Ubuntu 12.04 Server. The image was then upgraded so it was able to connect to a glusterfs filesystem, and ownCloud was installed and configured.

After reading this guide you too will be able to host your own private cloud from any 'old hardware' you might have lingering about. Take note though: it will take you anywhere from four to twelve hours. Depending on your experience and/or missing prerequisites, your mileage may vary.

Install & Configure GlusterFS


Step 01 : Install Fedora on XFVA001
Straightforward next, next, finish installation; apply all available updates. Then activate sshd and partition the GlusterFS storage disk (brick-disk bfva00101).
[root@localhost dev]# fdisk -l
Disk /dev/vda: 8589 MB, 8589934592 bytes, 16777216 sectors
Disk /dev/vdb: 17.2 GB, 17179869184 bytes, 33554432 sectors    <-- our brick-disk
Disk /dev/mapper/fedora-swap: 2113 MB, 2113929216 bytes, 4128768 sectors
Disk /dev/mapper/fedora-root: 5947 MB, 5947523072 bytes, 11616256 sectors

So we know we have to 'partition and format' the /dev/vdb disk :)


[root@localhost dev]# fdisk /dev/vdb
Welcome to fdisk (util-linux 2.22.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x8f20f8b9.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-33554431, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-33554431, default 33554431):
Using default value 33554431
Partition 1 of type Linux and of size 16 GiB is set

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

The 'naming convention' used is explained briefly further on in this document...

Step 02 : Clone the System


Next, shut down xfva001 and clone it into xfva002 (and xfva000, so we have a clean Fedora for possible future expansions...).
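If your guests are managed through libvirt (which fits the KVM setup described earlier), something along these lines should do the trick -- treat it as a sketch, your virtualisation tooling may differ:

virt-clone --original xfva001 --name xfva002 --auto-clone    # clone the (shut down) original
virt-clone --original xfva001 --name xfva000 --auto-clone    # the spare 'clean install' copy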

Up next, boot the clone and change its hostname to xfva002. Make sure the hostnames are resolvable in DNS; see the network requirements.
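On the freshly booted clone (Fedora 18, so hostnamectl should be available), renaming could look like this. The /etc/hosts lines are only there for the case where you have no local DNS; the addresses are placeholders, use your own:

hostnamectl set-hostname xfva002    # persistent hostname change

# no local DNS? static entries on every box involved work too (placeholder addresses):
echo "192.168.1.51 xfva001" >> /etc/hosts
echo "192.168.1.52 xfva002" >> /etc/hosts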

Step 03 : Prepare xfva001


Format FileSystem
First format the filesystem (/dev/vdb1) as an xfs filesystem.
[root@xvfa001 henri]# mkfs.xfs -i size=512 /dev/vdb1
meta-data=/dev/vdb1              isize=512    agcount=4, agsize=1048512 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=4194048, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Create MountPoint
[root@xvfa001 henri]# mkdir -p /export/bfva00101
[root@xvfa001 henri]# echo /dev/vdb1 /export/bfva00101 xfs defaults 1 2 >> /etc/fstab

The 'naming convention' used here is that a 'brick' will be stored on a disk. This disk will be mounted on /export/bxxxxxxnn, where:
xxxxxx : 6-character hostname identifier (in this case 'fva001')
nn : disk number on the host (in this case the first disk, '01')

Mount it & Check it


mount -a && mount
/dev/vdb1 on /export/bfva00101 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

Create subdir
mkdir /export/bfva00101/vdb1

Step 04 : Prepare xfva002


All the same actions as done on xfva001; however, due to the brick-location naming convention some paths are slightly different. Long story short:
mkfs.xfs -i size=512 /dev/vdb1
mkdir -p /export/bfva00201
echo /dev/vdb1 /export/bfva00201 xfs defaults 1 2 >> /etc/fstab
mount -a && mount
mkdir /export/bfva00201/vdb1

Step 05 : Install GlusterFS


Install (via yum) on both machines like so:
yum install wget    (haha)
wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/Fedora/glusterfsfedora.repo
yum install glusterfs{-fuse,-server,-client}

Then start 'glusterd' on xfva001 and xfva002


service glusterd start
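Fedora 18 uses systemd, so if you want glusterd to survive a reboot it makes sense to enable it too. This is an extra step on my part; the unit name 'glusterd' is an assumption based on how the package names its service:

systemctl enable glusterd.service    # start glusterd at boot from now on
systemctl status glusterd.service    # quick sanity check that it is running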

Step 06 : Setup Trusted Storage Pool


The firewall always messes stuff up :) We're running in a trusted, internal, virtual network, so at this point we have no need for a firewall. So just issue these commands on both servers and life will be good...
service firewalld stop
chkconfig firewalld off

Then setup the peer connection between xfva001 and xfva002. From the xfva001 machine run the following command :
# gluster peer probe xfva002
Probe successful

# gluster peer status
Number of Peers: 1

Hostname: xfva002
Uuid: 6de2d3db-c770-4b0a-a90d-2c69fa8fc6f7
State: Peer in Cluster (Connected)

Step 07 : Create the Gluster Volume


From any server (I picked xfva001) issue the following command:
# gluster volume create gv0 replica 2 xfva001:/export/bfva00101/vdb1/ xfva002:/export/bfva00201/vdb1
Creation of volume gv0 has been successful. Please start the volume to access data.

# gluster volume start gv0
Starting volume gv0 has been successful

This will create a volume that has all of its data replicated between xfva001 and xfva002. Data stored on the 'gv0' gluster volume will be written to both 'Brick1' and 'Brick2', as shown in the replicated-volume diagram in the Red Hat Storage Administration Guide (https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/htmlsingle/Administration_Guide).

We can then check the status of this volume on both systems...


[root@xfva001]# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: 665f8ffc-5f16-4a3b-9c0e-68319d496a2b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: xfva001:/export/bfva00101/vdb1
Brick2: xfva002:/export/bfva00201/vdb1

[root@xfva002]# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: 665f8ffc-5f16-4a3b-9c0e-68319d496a2b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: xfva001:/export/bfva00101/vdb1
Brick2: xfva002:/export/bfva00201/vdb1

Step 08 : Test it !
After all the hard work, it would be nice to see it actually working. Seeing as we installed the 'glusterfs-client' package at the start too, we can easily just create a mountpoint and mount the gluster volume there:
mkdir /mnt/tstgv0
mount -t glusterfs xfva001:/gv0 /mnt/tstgv0/

When all goes well, the 'df' command should show output resembling the below:

[root@xfva002 ~]# df
Filesystem               1K-blocks    Used  Available Use% Mounted on
devtmpfs                    491440       0     491440   0% /dev
tmpfs                       509716      84     509632   1% /dev/shm
tmpfs                       509716    3984     505732   1% /run
tmpfs                       509716       0     509716   0% /sys/fs/cgroup
/dev/mapper/fedora-root    5585732 4076452    1218876  77% /
tmpfs                       509716      12     509704   1% /tmp
/dev/vda1                   487652  103479     358573  23% /boot
/dev/vdb1                 16765952   33008   16732944   1% /export/bfva00201
xfva001:/gv0              16765952   33024   16732928   1% /mnt/tstgv0

Start writing some files (let's copy /var/log/messages 100 times, like in the GlusterFS Quickstart example):
for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/tstgv0/copy-test-$i; done

Afterwards, xfva001 looks like this:


[root@xfva001 vdb1]# ls -la
total 56408
drwxr-xr-x. 3 root root   4096 Jun 20 14:10 .
drwxr-xr-x. 3 root root     17 Jun 20 12:52 ..
-rw-------. 2 root root 576326 Jun 20 14:08 copy-test-001
-rw-------. 2 root root 576326 Jun 20 14:08 copy-test-002
...

And if all is configured properly, xfva002 will show similar output.


[root@xfva002 vdb1]# ls -la
total 56408
drwxr-xr-x. 3 root root   4096 Jun 20 14:10 .
drwxr-xr-x. 3 root root     17 Jun 20 13:05 ..
-rw-------. 2 root root 576326 Jun 20 14:08 copy-test-001
-rw-------. 2 root root 576326 Jun 20 14:08 copy-test-002
...

(Don't forget to clean this up!)
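A quick way to check the result and clean up afterwards, assuming the paths used above (on xfva002 the brick lives under /export/bfva00201/vdb1):

ls /export/bfva00101/vdb1/copy-test-* | wc -l    # run on each server; expect 100
rm /mnt/tstgv0/copy-test-*                       # remove the test files via the gluster mount
umount /mnt/tstgv0                               # and get rid of the temporary test mount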

Setup the ownCloud server


Step 01 : Install basic OS
For some reason or the other [2] I picked Ubuntu 12.04 Server for this, and selected 'OpenSSH Server' and 'LAMP Server' to be installed during installation. The guides at http://www.gluster.org/category/ubuntu/ were very helpful too.

Step 02 : Install GlusterFS Client


This is somewhat of a crucial bit. Without it, the server will be unable to mount the 'gv0' volume we created earlier in the GlusterFS setup. The server will also be unable to properly access this volume when the glusterfs-client version is not 'compatible' with the glusterfs-server version running on the nodes (xfva001 and xfva002 in this document).
# apt-get install software-properties-common
# add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.3
# apt-get update
# apt-get install glusterfs-client
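A simple sanity check for the version compatibility mentioned above is to compare the version strings on the client and on both gluster nodes; the major.minor versions should line up:

glusterfs --version    # run on the ownCloud server as well as on xfva001 and xfva002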

Step 03 : Create Mountpoint & edit fstab


# mkdir /mnt/gv0
# echo xfva001:/gv0 /mnt/gv0 glusterfs defaults,_netdev 0 0 >> /etc/fstab
# mount -a
unknown option _netdev (ignored)

According to the GlusterFS documentation the '_netdev' option is essential to make sure the volume will be properly mounted after server reboots. However, on Ubuntu this option seems to be unimplemented and is thus ignored [3]. See also this post at unix-heaven.org: http://unix-heaven.org/glusterfs-fails-to-mount-after-reboot

[2] To be honest, keeping a 'healthy' mix of various distros was the main reason for this :)
[3] At this point in time it was not deemed necessary to 'fix' this. Just make sure to manually run mount -a after the ownCloud server reboots :)

Step 04 : Install OwnCloud


The documentation over at http://doc.owncloud.org/server/4.5/admin_manual/installation.html is pretty complete, so we will not go into great detail here. The following can be considered a mini-mini-howto :)
apt-get install owncloud

mysql -u root -p

mysql> create database owncloud;
Query OK, 1 row affected (0.00 sec)

mysql> create user 'ownclouduser'@'localhost' IDENTIFIED BY 'PASSWORD_HERE';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL ON owncloud.* to 'ownclouduser'@'localhost';
Query OK, 0 rows affected (0.03 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> exit

Make sure to set the ownership of the intended ownCloud datastore (in our case /mnt/gv0) in such a way that the web-server processes can do what they need to do. I've settled for a plain and simple:
chown www-data:www-data /mnt/gv0
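To confirm the web-server user can actually write to the gluster mount before running the installer, a small test like this helps (the file name is arbitrary):

sudo -u www-data touch /mnt/gv0/.writetest && echo writable    # create a test file as the web-server user
sudo -u www-data rm /mnt/gv0/.writetest                        # and clean it up again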

Then head over to 'http://owncloud/owncloud' and you will be presented with a post-install ownCloud configuration screen. Complete the form and you're 'up and running'.

Step 05 : Test it!


Testing might be considered 'futile' at this point, as we already used the ownCloud installation itself while completing the installation :) Just to be sure all worked fine, I did go ahead and install the 'sync client' (for MacOS) and made a connection to the ownCloud instance over the LAN, which worked without problems. (The screenshots of the client setup and the resulting sync are not reproduced here.)

However: whenever I set up the reverse proxy (pound) to expose ownCloud to the internet, it will NOT CONNECT via the client (web access is ok). WebDAV errors such as 301 will appear. The pound configuration used (port 80 traffic on the 'pfSense' external interface is forwarded to the 'pound' box):
ListenHTTP
    Address 192.168.1.88
    Port    80

    Service
        HeadRequire "Host: .*owncloud.zdevops.com*"

        BackEnd
            Address 192.168.1.42
            Port    80
        End
    End
End

Luckily we tested 'local access' and determined everything was ok. Simple deduction tells us the cause of this error lies within the reverse proxy (as this is the added component). For the 'pound' reverse proxy an extra setting is needed to allow the extra HTTP requests to be accepted. ('man pound' is actually very educational reading material!)
xHTTP value
    Defines which HTTP verbs are accepted. The possible values are:
    0    (default) accept only standard HTTP requests (GET, POST, HEAD).
    1    additionally allow extended HTTP requests (PUT, DELETE).
    2    additionally allow standard WebDAV verbs (LOCK, UNLOCK, PROPFIND, PROPPATCH, SEARCH, MKCOL, MOVE, COPY, OPTIONS, TRACE, MKACTIVITY, CHECKOUT, MERGE, REPORT).
    3    additionally allow MS extensions WebDAV verbs (SUBSCRIBE, UNSUBSCRIBE, NOTIFY, BPROPFIND, BPROPPATCH, POLL, BMOVE, BCOPY, BDELETE, CONNECT).
    4    additionally allow MS RPC extensions verbs (RPC_IN_DATA, RPC_OUT_DATA).

Seeing as we had nothing configured in pound.cfg, the default was being used. After changing this to 'xHTTP 2', everything worked as intended! Below is the altered pound.cfg:
ListenHTTP
    Address 192.168.1.88
    Port    80
    xHTTP   2

    Service
        HeadRequire "Host: .*owncloud.zdevops.com*"

        BackEnd
            Address 192.168.1.42
            Port    80
        End
    End
End
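One way to double-check that WebDAV verbs now make it through the proxy is a PROPFIND from the outside. The /owncloud/remote.php/webdav/ path is an assumption based on where the Ubuntu package puts ownCloud; substitute your own username and password.

curl -i -u USERNAME:PASSWORD -X PROPFIND -H "Depth: 0" http://owncloud.zdevops.com/owncloud/remote.php/webdav/
# a '207 Multi-Status' response means the WebDAV request got through the proxy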

Next Steps...
Sync-Client on iOS/iPhone
First of all, I installed/bought the ownCloud client app for the iPhone. It works like a charm, and it even has the option to sync address book and calendar entries. To do so, use the guide provided at the ownCloud website (http://doc.owncloud.org/server/4.5/user_manual/sync_ios.html); it's pretty straightforward, as you will see.

Experiment with failure


As the ownCloud setup mounts the volume via one server (remember the 'echo xfva001:/gv0 /mnt/gv0 glusterfs defaults,_netdev 0 0 >> /etc/fstab' from earlier?), test and see if your ownCloud application still functions as intended when powering off 'xfva001'. You will not be surprised that stuff will just conform to the KeepShitRunning principles :)
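A crude but effective way to watch this happening (file name and interval are arbitrary): from the ownCloud server, keep appending timestamps to a file on the gluster-backed mount while you power xfva001 off and on again.

while true; do date >> /mnt/gv0/failover-test.txt; sleep 2; done
# power off xfva001 in another window; the loop should keep running and the file
# should keep growing, served by the remaining brick on xfva002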

Further research
Ubuntu mounting


As stated, the _netdev option in fstab is not being honoured on Ubuntu. I still have to implement a solution for this; however, as stated, this is 'for some other time'. Remember to manually mount after reboots... One possible stop-gap is sketched below.
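Not the real fix, but a possible stop-gap on Ubuntu 12.04 (untested in this setup, so treat it as an assumption): let /etc/rc.local retry the mount late in the boot sequence.

# append to /etc/rc.local, before the final 'exit 0':
( sleep 10 && mount /mnt/gv0 ) &    # give the network a moment, then mount the fstab entry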

Geo-Replication
I'd really like to set up 'geo-replication' too. For this, I will need to implement/connect to some GlusterFS stuff at a remote site. This also is something 'for later'.
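For future reference, the general shape of the commands in this generation of GlusterFS looks roughly like the following; the remote host and path are pure placeholders, and the prerequisites (passwordless ssh, a compatible glusterfs on the slave) are not covered here.

gluster volume geo-replication gv0 ssh://root@remotesite:/data/gv0-backup start
gluster volume geo-replication gv0 ssh://root@remotesite:/data/gv0-backup status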

Glusterception
The one thing I am mentally designing is a 'glusterception' setup. By this I mean a setup where the 'brick-disks' are in fact glusterfs volumes... from another Gluster. It might not be useful, but it will be funny... (The diagram from my notes is not reproduced here.)

Closing Notes....
Thanks for reading through all of this; it's my first 'open sourced' documentation, so any remarks, tips, tricks or comments are more than welcome. As stated, this document is based on installation notes, which consist of 'copy-pasted' terminal output, screengrabs and hand-scribbled notes :) It should, however, get you up and running with your own 'GlusterCloud'. In the odd case you are really implementing a GlusterCloud based on this MINI-HOWTO, I am really curious about your opinions and results :) Thanks to @tomsan68 for bringing my attention to the GlusterFS 'product'. A big shout out to the wonderful people who make GlusterFS possible, and a big salute to the creators of ownCloud for such a marvelous product! That's me done... see you next time?
