Mail Server. However, starting from Zimbra 9.x.x, SLES has been deprecated (end of life) and
may no longer be supported by Zimbra. Therefore, I will use CentOS as the operating
system for Zimbra. For reference, here is my system information:
Domain     : imanudin.net
Hostname   : mail
IP Address : 192.168.26.11
# Configure Network
First, we must configure the network on CentOS. This assumes the name of your network
interface is eth0.
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=192.168.26.11
NETMASK=255.255.255.0
DNS1=192.168.26.11
GATEWAY=192.168.26.2
DNS2=192.168.26.2
USERCTL=no
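Zimbra also expects a sane /etc/hosts before installation. Based on the system information above (hostname mail, domain imanudin.net, IP 192.168.26.11), a typical /etc/hosts would look like the following; this is a reconstruction, since the original listing is not shown in this chunk:

```
127.0.0.1       localhost
192.168.26.11   mail.imanudin.net       mail
```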
Create the database file for the new zone in the folder /var/named/:
touch /var/named/db.imanudin.net
chgrp named /var/named/db.imanudin.net
vi /var/named/db.imanudin.net
Fill it as follows:
$TTL 1D
@       IN SOA  ns1.imanudin.net. root.imanudin.net. (
                        0       ; serial
                        1D      ; refresh
                        1H      ; retry
                        1W      ; expire
                        3H )    ; minimum
@       IN      NS      ns1.imanudin.net.
@       IN      MX      0 mail.imanudin.net.
ns1     IN      A       192.168.26.11
mail    IN      A       192.168.26.11
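Before restarting BIND it is worth validating the zone file with named-checkzone, which ships with the bind package. This sketch assumes the zone is already declared in /etc/named.conf (that step is not shown in this chunk):

```shell
# validate the zone syntax; prints "OK" on success
named-checkzone imanudin.net /var/named/db.imanudin.net
# reload BIND so the new zone is served
service named restart
```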
If the command below returns results like the following, your DNS configuration is working:
[root@mail opt]# nslookup mail.imanudin.net
Server:         192.168.26.11
Address:        192.168.26.11#53

Name:   mail.imanudin.net
Address: 192.168.26.11
The preparation for installing Zimbra is now finished. The installation itself is explained in the next section.
Now that the preparation for installing Zimbra is finished, we can install
Zimbra. First, download the Zimbra binary from this link:
http://www.zimbra.com/downloads/zimbra-collaboration-open-source or, if you
are in the Indonesia region, you can download from the following link:
http://mirror.linux.or.id/zimbra/binary/.
cd /opt/
wget -c http://files2.zimbra.com/downloads/8.5.0_GA/zcs-8.5.0_GA_3042.RHEL7_64.20140828204420.tgz
After the download finishes, extract the archive, change into the extracted folder, and run the installer:
tar -zxvf zcs-8.5.0_GA_3042.RHEL7_64.20140828204420.tgz
cd zcs-8.5.0_GA_3042.RHEL7_64.20140828204420
sh install.sh
Type Y when asked about the license agreement:
Do you agree with the terms of the software license agreement? [N] Y
Select the packages to install as follows:

Install zimbra-ldap [Y] Y
Install zimbra-logger [Y] Y
Install zimbra-mta [Y] Y
Install zimbra-dnscache [Y] N
Install zimbra-snmp [Y] Y
Install zimbra-store [Y] Y
Install zimbra-apache [Y] Y
Install zimbra-spell [Y] Y
Install zimbra-memcached [Y] Y
Install zimbra-proxy [Y] Y
Common Configuration:
zimbra-ldap:                    Enabled
zimbra-logger:                  Enabled
zimbra-mta:                     Enabled
zimbra-snmp:                    Enabled
zimbra-store:                   Enabled
        +Create Admin User:     yes
        +Admin user to create:  admin@imanudin.net

Set the admin password as well (the password used in this example is 1012Livia). After the installation finishes, the admin console can be accessed at https://192.168.1.51:7071/zimbraAdmin/ (adjust the IP to your server's address).
# Configure Network
First, we must configure the network on CentOS. This assumes the name of your network
interface is eth0. Apply the following configuration on all nodes (node1 and
node2) and adjust the IP address on node2 accordingly.
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=192.168.80.91
NETMASK=255.255.255.0
DNS1=192.168.80.91
GATEWAY=192.168.80.11
DNS2=192.168.80.11
DNS3=8.8.8.8
USERCTL=no
Restart the network service and enable it at boot on all nodes (node1 and
node2):
service network restart
chkconfig network on
# Configure Disable Selinux & Firewall on all nodes (node1 and node2)
Set the hostname. On node2:
hostname node2.imanudin.net
vi /etc/sysconfig/network
# Update repos and install packages Heartbeat on all nodes (node1 and node2)
yum update
yum install epel-release
yum -y install heartbeat

If you cannot get the epel repo, please install it directly from:
http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# Configure Heartbeat
Create the file /etc/ha.d/ha.cf (on node1 only):
vi /etc/ha.d/ha.cf
Note:
eth0 is the interface on your system; if your system uses eth1 as the interface name,
change eth0 to eth1. 192.168.80.92 is the IP address of node2.
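The ha.cf listing itself is missing above. A minimal configuration consistent with the note (eth0 as the heartbeat interface, 192.168.80.92 as node2's address for unicast heartbeats) might look like the sketch below; the timeout values are common defaults, not values taken from the original article:

```
logfile /var/log/ha-log
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
ucast eth0 192.168.80.92
auto_failback on
node node1.imanudin.net
node node2.imanudin.net
```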
Create the file /etc/ha.d/authkeys (on node1 only):
vi /etc/ha.d/authkeys

Create the file /etc/ha.d/haresources (on node1 only):
vi /etc/ha.d/haresources
node1.imanudin.net IPaddr::192.168.80.93/24/eth0:0

Note:
node1.imanudin.net will act as the master server. 192.168.80.93 is an alias
IP used for testing online failover.
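The contents of authkeys are not shown above. A typical sha1-based authkeys looks like this sketch (the key string here is an example, not the original); Heartbeat also refuses to start unless the file is readable by root only, so set its mode to 600:

```
auth 1
1 sha1 SomeSecretKeyString
```

Then: chmod 600 /etc/ha.d/authkeys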
# Copy those files from node1 to node2 (run the following command on node1)
cd /etc/ha.d/
scp authkeys ha.cf haresources root@192.168.80.92:/etc/ha.d/
Please try to access node1 via a browser; you will see the text "This is node1".
Create an index.html in the DocumentRoot of node2:
vi /var/www/html/index.html
Please try to access node2 via a browser; you will see the text "This is node2".
Integrate Apache with Heartbeat
Please change the file /etc/ha.d/haresources on all nodes:
vi /etc/ha.d/haresources
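The modified haresources line is not shown above. Appending the httpd resource to the existing line would give something like the sketch below; the names after the alias IP are init scripts that Heartbeat starts in order:

```
node1.imanudin.net IPaddr::192.168.80.93/24/eth0:0 httpd
```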
Stop the Apache service and disable it at boot on all nodes (the Apache
service will be handled by Heartbeat):
service httpd stop
chkconfig httpd off
Please try to access the alias IP from a browser; you will see the text "This is node1".
Then stop the Heartbeat service on node1 and refresh the browser; you will see
the text "This is node2" (all services handled by Heartbeat on node1 are taken
over by node2). For failback, start the Heartbeat service on node1 again (all
services handled by Heartbeat on node2 will be taken back by node1).
You could also experiment with other services for online failover, such
as Samba, MySQL, or MariaDB. Heartbeat only handles
failover/failback; it does not synchronize data.
Good luck and hopefully useful
# Configure Network
Restart the network service and enable it at boot on all nodes (node1 and
node2):
service network restart
chkconfig network on
# Configure Disable Selinux & Firewall on all nodes (node1 and node2)
Open the file /etc/sysconfig/selinux and change SELINUX=enforcing
to SELINUX=disabled. Also disable services such as iptables and
ip6tables:
setenforce 0
service iptables stop
service ip6tables stop
chkconfig iptables off
chkconfig ip6tables off
Set the hostname. On node2:
hostname node2.imanudin.net
vi /etc/sysconfig/network
# Update repos and install packages DRBD on all nodes (node1 and node2)
wget http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm
rpm -ivh elrepo-release-6-6.el6.elrepo.noarch.rpm
yum update
yum install kmod-drbd83 drbd83-utils
# Configure DRBD
Configure the file /etc/drbd.conf (on node1 only):
vi /etc/drbd.conf
global {
        dialog-refresh 1;
        usage-count yes;
        minor-count 5;
}
common {
        syncer {
                rate 10M;
        }
}
resource r0 {
        startup {
                wfc-timeout 30;
                outdated-wfc-timeout 20;
                degr-wfc-timeout 120;
        }
        protocol C;
        disk {
                on-io-error detach;
        }
        syncer {
                rate 10M;
                al-extents 257;
        }
        on node1.imanudin.net {
                device /dev/drbd0;
                address 192.168.80.91:7788;
                meta-disk internal;
                disk /dev/sdb;
        }
        on node2.imanudin.net {
                device /dev/drbd0;
                address 192.168.80.92:7788;
                meta-disk internal;
                disk /dev/sdb;
        }
}
Note:
r0 is the resource name for DRBD. /dev/sdb is the second drive on each machine
that will be configured for DRBD; check with the fdisk -l command to confirm
the name of the second drive. It is recommended that the drives on both machines
have the same capacity; if they differ, DRBD will only use as much of the bigger
drive as the smaller one provides. It is also recommended to use 2 NICs on each
machine: 1 NIC for communication with clients and 1 NIC for communication between
the servers over a cross cable (DRBD communication).
# Copy drbd.conf from node1 to node2 (run the following command on node1)
scp /etc/drbd.conf root@node2:/etc/drbd.conf
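The article jumps from copying drbd.conf straight to waiting for synchronization; the intermediate initialization steps are not shown in this chunk. On DRBD 8.3 they would typically look like the sketch below; treat the exact sequence as an assumption reconstructed from standard drbdadm usage, not as the original article's steps:

```shell
# on BOTH nodes: write DRBD metadata for resource r0 and start the service
drbdadm create-md r0
service drbd start
# on node1 ONLY: make it primary and kick off the initial full sync
drbdadm -- --overwrite-data-of-peer primary r0
```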
Please wait until the devices are 100% synchronized between node1 and node2. You can
check the progress with the following commands:
service drbd status
watch service drbd status
After synchronization finishes, you will see Connected and UpToDate on both
node1 and node2:
drbd driver loaded OK; device status:
version: 8.4.5 (api:1/proto:86-101)
GIT-hash: 1d360bde0e095d495786eaeb2a1ac76888e4db96 build by phil@Build64R6, 2014-10-28 10:32:53
m:res  cs         ro                 ds                 p  mounted  fstype
0:r0   Connected  Primary/Secondary  UpToDate/UpToDate  C
Hooray, DRBD is now configured and the two nodes are connected to each other.
Good luck and hopefully useful
Mount the DRBD device on the tmp folder that was created:
mount /dev/drbd0 /mnt/tmp

Create a test file:
touch /mnt/tmp/created-on-node1.txt
To test on node2, unmount the DRBD device and demote node1 to secondary:
umount /dev/drbd0
drbdadm secondary r0
# TESTING ON NODE2
Create a tmp folder in /mnt:
mkdir /mnt/tmp

Mount the DRBD device on the tmp folder that was created:
mount /dev/drbd0 /mnt/tmp
If the file is there, it means DRBD has replicated the data.
Unmount the DRBD device, demote node2 back to secondary, promote node1 to
primary again, and verify the status:
umount /dev/drbd0
drbdadm secondary r0
service drbd status
The alias IP will be used for client/user access. This alias IP will be configured
for online failover.
# After Zimbra is installed, install and configure Heartbeat on all nodes (node1
and node2) as described at this link: How To Configure Online Failover/Failback
on CentOS 6 Using Heartbeat
# After Heartbeat is installed and online failover/failback works fine, install
DRBD for data replication on all nodes (node1 and node2) as described at this
link: How To Configure Data Replication/Synchronize on CentOS 6 Using DRBD
# Test that DRBD data replication works: Testing Data
Replication/Synchronize on DRBD
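For Zimbra HA, the haresources line has to chain the DRBD disk, the filesystem mount, and the services mentioned later (named and zimbra). The original line is not shown in this chunk; a plausible sketch, assuming /dev/drbd0 holds an ext4 filesystem mounted on /opt, would be:

```
node1.imanudin.net IPaddr::192.168.80.93/24/eth0:0 drbddisk::r0 Filesystem::/dev/drbd0::/opt::ext4 named zimbra
```

Here drbddisk and Filesystem are standard Heartbeat resource scripts; adjust the filesystem type to whatever was used when formatting /dev/drbd0.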
# After DRBD works, copy the /opt/zimbra files into the DRBD
device.
Do the following commands on node1 only.
Rsync Zimbra:
drbdadm primary r0
mount /dev/drbd0 /mnt/tmp
rsync -avP --exclude=data.mdb /opt/ /mnt/tmp
data.mdb appears huge to rsync, so copying it that way takes a long time. As a
trick, use cp to copy data.mdb to the DRBD device instead.
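As a side note, here is a small self-contained sketch of how rsync's --exclude behaves; the /tmp paths and file contents are made up for the demo, not taken from the setup above:

```shell
# build a throwaway tree that mimics the Zimbra layout
mkdir -p /tmp/drbd-demo/src/zimbra/data/ldap/mdb/db
echo "ldap data" > /tmp/drbd-demo/src/zimbra/data/ldap/mdb/db/data.mdb
echo "config"    > /tmp/drbd-demo/src/zimbra/conf.txt

# sync everything except data.mdb, as in the Zimbra copy above
rsync -a --exclude=data.mdb /tmp/drbd-demo/src/ /tmp/drbd-demo/dst

# the directory tree is copied, but data.mdb itself is skipped
ls /tmp/drbd-demo/dst/zimbra
ls /tmp/drbd-demo/dst/zimbra/data/ldap/mdb/db
```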
Copy data.mdb:
cp /opt/zimbra/data/ldap/mdb/db/data.mdb /mnt/tmp/zimbra/data/ldap/mdb/db/data.mdb
chown zimbra.zimbra /mnt/tmp/zimbra/data/ldap/mdb/db/data.mdb
# Move the existing /opt folder aside; do the following commands on all
nodes (node1 and node2):
mv /opt /backupopt
mkdir /opt
# Configure /etc/hosts and DNS records on all nodes (node1 and node2)
vi /etc/hosts
127.0.0.1       localhost
192.168.80.91   node1.imanudin.net
192.168.80.92   node2.imanudin.net
192.168.80.93   mail.imanudin.net
vi /var/named/db.imanudin.net

Add A records for node1, node2, and mail, so the zone looks like this:

$TTL 1D
@       IN SOA  ns1.imanudin.net. root.imanudin.net. (
                        0       ; serial
                        1D      ; refresh
                        1H      ; retry
                        1W      ; expire
                        3H )    ; minimum
@       IN      NS      ns1.imanudin.net.
@       IN      MX      0 mail.imanudin.net.
ns1     IN      A       192.168.80.91
mail    IN      A       192.168.80.93
node1   IN      A       192.168.80.91
node2   IN      A       192.168.80.92
TESTING HA
Failover
Once Zimbra runs well on node1, stop the Heartbeat service on node1 (or
force the machine off):
service heartbeat stop
All services handled by Heartbeat will automatically be stopped and taken
over by node2. How long it takes node2 to get everything working again depends
on how long the services (named and zimbra) take to start.
Failback
Please start again service Heartbeat on node1 or power on machine
1.service heartbeat start
All running services on node2 will automatically stopped and taken over by
node1
Hooray, you have now built Zimbra HA with DRBD+Heartbeat.
For log information about the HA process, see /var/log/ha-log.
Good luck and hopefully useful