Scheme Used
With two physical servers available, each machine is partitioned into two domains: one as a Load Balancer (2 vCPUs and 2 GB RAM) and the other as an Apache Node (6 vCPUs and 8 GB RAM). All IP addresses are reserved in the same segment; one is needed per virtual machine, plus the cluster's virtual IP.
A standard operating system installation is performed, with the following disk partitioning scheme, which applies to both physical nodes:
orimat220:~# fdisk -l
The server's network interfaces are configured so that we can later use one for each domain:
#
# deb cdrom:[Debian GNU/Linux 5.0.3 _Lenny_ - Official amd64 CD Binary-1 20090905-11:02]/ lenny main
#deb cdrom:[Debian GNU/Linux 5.0.3 _Lenny_ - Official amd64 CD Binary-1 20090905-11:02]/ lenny main
Install the packages related to LVM and Xen for later use:
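The package command itself is not shown in the source; on Debian Lenny a typical selection would be the following (the exact package set is an assumption):

```shell
# LVM userland tools, the Xen hypervisor + dom0 kernel metapackage for
# Lenny, and xen-tools for creating guest domains:
apt-get install lvm2 xen-linux-system-2.6.26-2-xen-amd64 xen-tools
```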
From this point on, the nodes of the load-balancing cluster and the SQL cluster are configured.
3. Xen Configuration
To get the most performance out of the Xen domain images, LVM must be used instead of on-disk image files for the virtual machines. For this we use an LVM2 volume group that will hold all the files of each domain, so we begin by creating the physical volume, creating the volume group, and activating it:
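The creation commands are not reproduced in the source; given the pvdisplay and vgdisplay output that follows, they would have been along these lines (a sketch using the names from that output):

```shell
# Initialize /dev/sda6 as an LVM physical volume, create the
# VM_Apache1 volume group on it, and activate the group:
pvcreate /dev/sda6
vgcreate VM_Apache1 /dev/sda6
vgchange -a y VM_Apache1
```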
orimat220:~# pvdisplay
--- Physical volume ---
PV Name /dev/sda6
VG Name VM_Apache1
PV Size 267,69 GB / not usable 897,00 KB
Allocatable yes
PE Size (KByte) 4096
Total PE 68529
Free PE 254
Allocated PE 68275
PV UUID S9Egq2-3zO7-r2ad-IyWw-Kxg5-6qWs-cn4aM3
orimat220:~# vgdisplay
--- Volume group ---
VG Name VM_Apache1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 41
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 4
Max PV 0
Cur PV 1
Act PV 1
VG Size 267,69 GB
PE Size 4,00 MB
Total PE 68529
Alloc PE / Size 68275 / 266,70 GB
Free PE / Size 254 / 1016,00 MB
VG UUID Vs11vP-1p6Q-auM1-2MPr-jFTh-JdN9-LcpDTG
##
# /etc/xen-tools/xen-tools.conf
##
#
# This is the global configuration file for the scripts included
# within the xen-tools package.
#
# For more details please see:
#
# http://xen-tools.org/
#
##
##
#
# File Format
# -----------
#
# Anything following a '#' character is ignored as a comment.
#
# Otherwise the format of this file "key = value". The value of
# any keys in this file may be constructed via the output of a command.
#
# For example:
#
# kernel = /boot/vmlinuz-`uname -r`
#
##
#
##
# Output directory for storing loopback images.
#
# If you choose to use loopback images, which are simple to manage but
# slower than LVM partitions, then specify a directory here and uncomment
# the line.
#
# New instances will be stored in subdirectories named after their
# hostnames.
#
##
# dir = /home/xen
#
#
##
#
# If you don't wish to use loopback images then you may specify an
# LVM volume group here instead
#
##
lvm = VM_Apache1
#
##
#
# Installation method.
#
# There are five distinct methods which you may use to install a new copy
# of Linux to use in your Xen guest domain:
#
# - Installation via the debootstrap command.
# - Installation via the rpmstrap command.
# - Installation via the rinse command.
# - Installation by copying a directory containing a previous installation.
# - Installation by untarring a previously archived image.
#
# NOTE That if you use the "untar", or "copy" options you should ensure
# that the image you're left with matches the 'dist' setting later in
# this file.
#
#
##
#
#
# install-method = [ debootstrap | rinse | rpmstrap | copy | tar ]
#
#
install-method = debootstrap
#
# If you're using the "copy", or "tar" installation methods you must
# specify the source location to copy from, or the source
# .tar file to unpack.
#
# You may specify that with a line such as:
#
# install-source = /path/to/copy
# install-source = /some/path/img.tar
#
#
#
##
# Command definitions.
##
#
# The "rinse", and "rpmstrap" commands are hardwired into
# the script, but if you wish to modify the commands which are executed
# when installing new systems by a "copy", "debootstrap", or "tar" method
# you can do so here:
#
# (This allows you to install from a .tar.bz file, rather than a plain
# tar file, use cdebootstrap, etc.)
#
# install-method=copy:
# copy-cmd = /bin/cp -a $src/* $dest
#
# install-method=debootstrap:
# debootstrap-cmd=/usr/sbin/debootstrap
#
# install-method=tar:
# tar-cmd = /bin/tar --numeric-owner -xvf $src
#
#
#
##
# Disk and Sizing options.
##
#
size = 4Gb # Disk image size.
memory = 128Mb # Memory size
swap = 128Mb # Swap size
# noswap = 1 # Don't use swap at all for the new system.
fs = ext3 # use the EXT3 filesystem for the disk image.
dist = lenny # Default distribution to install.
image = sparse # Specify sparse vs. full disk images.
#
# Currently supported and tested distributions include:
#
# via Debootstrap:
#
# Debian:
# sid, sarge, etch, lenny.
#
# Ubuntu:
# edgy, feisty, dapper.
#
# via Rinse:
# centos-4, centos-5.
# fedora-core-4, fedora-core-5, fedora-core-6, fedora-core-7
#
#
##
# Networking setup values.
##
#
# Uncomment and adjust these network settings if you wish to give your
# new instances static IP addresses.
#
gateway = 167.175.201.1
netmask = 255.255.255.0
broadcast = 167.175.201.255
#
# Uncomment this if you wish the images to use DHCP
#
# dhcp = 1
##
# Misc options
##
#
# Uncomment the following line if you wish to disable the caching
# of downloaded .deb files when using debootstrap to install images.
#
# cache = no
#
#
# Uncomment the following line if you wish to interactively setup
# a new root password for images.
#
passwd = 1
#
# If you'd like all accounts on your host system which are not present
# on the guest system to be copied over then uncomment the following line.
#
# accounts = 1
#
#
# Default kernel and ramdisk to use for the virtual servers
#
kernel = /boot/vmlinuz-`uname -r`
initrd = /boot/initrd.img-`uname -r`
#
# The architecture to use when using debootstrap, rinse, or rpmstrap.
#
# This is most useful on 64 bit host machines, for other systems it
# doesn't need to be used.
#
# arch=[i386|amd64]
#
#
# The default mirror for debootstrap to install Debian-derived distributions
#
mirror = http://ftp.us.debian.org/debian/
#
# A mirror suitable for use when installing the Dapper release of Ubuntu.
#
# mirror = http://gb.archive.ubuntu.com/ubuntu/
#
# If you like you could use per-distribution mirrors, which will
# be more useful if you're working in an environment where you want
# to regularly use multiple distributions:
#
# mirror_sid=http://ftp.us.debian.org/debian
# mirror_sarge=http://ftp.us.debian.org/debian
# mirror_etch=http://ftp.us.debian.org/debian
# mirror_dapper=http://archive.ubuntu.com/ubuntu
# mirror_edgy=http://archive.ubuntu.com/ubuntu
# mirror_feisty=http://archive.ubuntu.com/ubuntu
# mirror_gutsy=http://archive.ubuntu.com/ubuntu
#
# Filesystem options for the different filesystems we support.
#
ext3_options = noatime,nodiratime,errors=remount-ro
ext2_options = noatime,nodiratime,errors=remount-ro
xfs_options = defaults
reiser_options = defaults
#
# Uncomment if you wish newly created images to boot once they've been
# created.
#
# boot = 1
#
# If you're using the lenny or later version of the Xen guest kernel you will
# need to make sure that you use 'hvc0' for the guest serial device,
# and 'xvdX' instead of 'sdX' for disk devices.
#
# You may specify the things to use here:
#
serial_device = hvc0 #default
# serial_device = tty1
#
disk_device = xvda #default
# disk_device = sda
#
#
# Here we specify the output directory which the Xen configuration
# files will be written to, and the suffix to give them.
#
# Historically xen-tools have created configuration files in /etc/xen,
# and given each file the name $hostname.cfg. If you want to change
# that behaviour you may do so here.
#
#
# output = /etc/xen
# extension = .cfg
#
# -*- sh -*-
#
# Xend configuration file.
#
# Commented out entries show the default for that entry, unless otherwise
# specified.
(logfile /var/log/xen/xend.log)
#(loglevel DEBUG)
#(xend-http-server no)
#(xend-unix-server no)
#(xend-tcp-xmlrpc-server no)
#(xend-unix-xmlrpc-server yes)
#(xend-relocation-server no)
#(xend-unix-path /var/lib/xend/xend-socket)
# Address and port xend should use for the legacy TCP XMLRPC interface,
# if xen-tcp-xmlrpc-server is set.
#(xen-tcp-xmlrpc-server-address 'localhost')
#(xen-tcp-xmlrpc-server-port 8006)
# SSL key and certificate to use for the legacy TCP XMLRPC interface.
# Setting these will mean that this port serves only SSL connections as
# opposed to plaintext ones.
#(xend-tcp-xmlrpc-server-ssl-key-file /etc/xen/xmlrpc.key)
#(xend-tcp-xmlrpc-server-ssl-cert-file /etc/xen/xmlrpc.crt)
# Port xend should use for the HTTP interface, if xend-http-server is set.
#(xend-port 8000)
# The hosts allowed to talk to the relocation port. If this is empty (the
# default), then all connections are allowed (assuming that the connection
# arrives on a port and interface on which we are listening; see
# xend-relocation-port and xend-relocation-address above). Otherwise, this
# should be a space-separated sequence of regular expressions. Any host with
# a fully-qualified domain name or an IP address that matches one of these
# regular expressions will be accepted.
#
# For example:
# (xend-relocation-hosts-allow '^localhost$ ^.*\\.example\\.org$')
#
#(xend-relocation-hosts-allow '')
##
# To bridge network traffic, like this:
#
# dom0: ----------------- bridge -> real eth0 -> the network
#                            |
# domU: fake eth0 -> vifN.0 -+
#
# use
#
#(network-script network-custom)
(network-script network-bridge-wrapper)
#
# Your default ethernet device is used as the outgoing interface, by default.
# To use a different one (e.g. eth1) use
#
# (network-script 'network-bridge netdev=eth1')
#
# The bridge is named xenbr0, by default. To rename the bridge, use
#
# (network-script 'network-bridge bridge=<name>')
#
# It is possible to use the network-bridge script in more complicated
# scenarios, such as having two outgoing interfaces, with two bridges, and
# two fake interfaces per guest domain. To do things like this, write
# yourself a wrapper script, and call network-bridge from it, as appropriate.
#
#(network-script network-dummy)
# Dom0 will balloon out when needed to free memory for domU.
# dom0-min-mem is the lowest memory level (in MB) dom0 will get down to.
# If dom0-min-mem=0, dom0 will never balloon out.
(dom0-min-mem 196)
#!/bin/sh
/etc/xen/scripts/network-bridge "$@" netdev=eth1
/etc/xen/scripts/network-bridge "$@" netdev=eth2
The operating system kernel runs with Xen mode enabled and with domain support.
The configuration file for the load-balancing nodes will therefore look as follows; the vcpus line must be added and the corresponding vif line modified:
#
# Configuration file for the Xen instance zorimat221, created
# by xen-tools 3.9 on Tue Dec 1 16:24:48 2009.
#
#
# Kernel + memory size
#
kernel = '/boot/vmlinuz-2.6.26-2-xen-amd64'
ramdisk = '/boot/initrd.img-2.6.26-2-xen-amd64'
memory = '4096'
vcpus = '2'
#
# Disk device(s).
#
root = '/dev/xvda2 ro'
disk = [
'phy:/dev/VM_Apache1/zorimat221-swap,xvda1,w',
'phy:/dev/VM_Apache1/zorimat221-disk,xvda2,w',
]
#
# Hostname
#
name = 'zorimat221'
#
# Networking
#
vif = [ 'ip=167.175.214.82,mac=00:16:3E:4B:A5:70,bridge=eth2' ]
#
# Behaviour
#
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
#
# Configuration file for the Xen instance zorimat220, created
# by xen-tools 3.9 on Tue Dec 1 16:23:11 2009.
#
#
# Kernel + memory size
#
kernel = '/boot/vmlinuz-2.6.26-2-xen-amd64'
ramdisk = '/boot/initrd.img-2.6.26-2-xen-amd64'
memory = '7168'
vcpus = '6'
#
# Disk device(s).
#
root = '/dev/xvda2 ro'
disk = [
'phy:/dev/VM_Apache1/zorimat220-swap,xvda1,w',
'phy:/dev/VM_Apache1/zorimat220-disk,xvda2,w',
]
#
# Hostname
#
name = 'zorimat220'
#
# Networking
#
vif = [ 'ip=167.175.214.81,mac=00:16:3E:AB:E3:DE,bridge=eth1' ]
#
# Behaviour
#
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
Once these files are modified, we create the domains, which will be backed by logical volumes inside the configured LVM2 volume group:
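The creation step itself is done with xen-tools' xen-create-image; a sketch for the Apache node, with flags mirroring the values in the files above (the exact invocation is an assumption):

```shell
# Creates the zorimat220-disk and zorimat220-swap logical volumes in
# VM_Apache1 and writes /etc/xen/zorimat220.cfg:
xen-create-image --hostname=zorimat220 --ip=167.175.214.81 \
  --memory=7168Mb --vcpus=6 --lvm=VM_Apache1
```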
orimat220:~# xm list
Name ID Mem VCPUs State Time(s)
Domain-0 0 830 8 r----- 925.3
zorimat220 17 7168 6 -b---- 43.4
zorimat221 18 4096 2 -b---- 3.9
Finally, we verify the LVM configuration with the new volumes, created automatically by Xen:
orimat220:~# lvdisplay
--- Logical volume ---
LV Name /dev/VM_Apache1/zorimat220-swap
VG Name VM_Apache1
LV UUID vzj2Q0-lKNz-11hQ-r1Ha-pGk0-Kdvk-JGwZ49
LV Write Access read/write
LV Status available
# open 1
LV Size 5,86 GB
Current LE 1501
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:0
The domains are now ready with a base operating system configuration, and the services must be configured.
The Apache server is installed with PHP support and some PHP extensions from the Debian repositories:
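The install command is omitted in the source; on Lenny it would look like this (the particular extension list is an assumption):

```shell
apt-get install apache2 php5 libapache2-mod-php5 php5-mysql php5-gd
```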
For the cluster to work with a virtual IP and with load balancing, the heartbeat and ldirectord packages must be configured on zorimat221 and zorimat223. We edit the /etc/modules file, adding the following lines, then reboot so that these values are active in the running kernel:
ip_vs_dh
ip_vs_ftp
ip_vs
ip_vs_lblc
ip_vs_lblcr
ip_vs_lc
ip_vs_nq
ip_vs_rr
ip_vs_sed
ip_vs_sh
ip_vs_wlc
ip_vs_wrr
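For ldirectord to forward packets, IPv4 forwarding must also be enabled on the director nodes, as the sysctl -p output that follows confirms; assuming the standard file, /etc/sysctl.conf would carry the line:

```
net.ipv4.ip_forward = 1
```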
zorimat221:~# sysctl -p
net.ipv4.ip_forward = 1
The /etc/ha.d/ha.cf file:
logfacility local0
bcast eth0
mcast eth0 225.0.0.1 694 1 0
auto_failback off
node zorimat221
node zorimat223
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster
The /etc/ha.d/haresources file:
zorimat221 \
ldirectord::ldirectord.cf \
LVSSyncDaemonSwap::master \
IPaddr2::167.175.214.80/24/eth0/167.175.214.255
The /etc/ha.d/authkeys file:
auth 3
3 md5 apacherandom
Change the permissions on the authkeys file, granting read-write access to root only:
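Assuming heartbeat's default location for the file, that is:

```shell
chmod 600 /etc/ha.d/authkeys
```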
The /etc/ha.d/ldirectord.cf file:
# Global Directives
checktimeout=10
checkinterval=2
autoreload=no
logfile="local0"
quiescent=yes
virtual = orimat222:80
        real = zorimat220:80 gate
        real = zorimat222:80 gate
        fallback = 127.0.0.1:80 gate
        service = http
        request = "ldirector.html"
        receive = "Test Page"
        scheduler = rr
        protocol = tcp
        checktype = negotiate
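The negotiate check requests ldirector.html and expects the string "Test Page", so each real server must serve that file; assuming Debian's default document root:

```shell
# Run on both Apache nodes (zorimat220 and zorimat222):
echo "Test Page" > /var/www/ldirector.html
```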
The boot-time service links are adjusted. Note that ldirectord is removed from the boot sequence and not re-added: heartbeat starts it on the active director via the ldirectord::ldirectord.cf entry in haresources:
zorimat221:~# update-rc.d -f heartbeat remove
zorimat221:~# update-rc.d heartbeat start 75 2 3 4 5 . stop 05 0 1 6 .
zorimat221:~# update-rc.d -f ldirectord remove
We prepare the Apache cluster for load balancing with the iproute package; this applies to both machines (zorimat220 and zorimat222):
###############################################################
# Apache Cluster Configs
###############################################################
# Enable configuration of arp_ignore option
net.ipv4.conf.all.arp_ignore = 1
# When making an ARP request sent through eth0 Always use an address that
# is configured on eth0 as the source address of the ARP request. If this
# is not set, and packets are being sent out eth0 for an address that is on
# lo, and an arp request is required, then the address on lo will be used.
# As the source IP address of arp requests is entered into the ARP cache on
# the destination, it has the effect of announcing this address. This is
# not desirable in this case as addresses on lo on the real-servers should
# be announced only by the linux-director.
net.ipv4.conf.eth0.arp_announce = 2
zorimat220:~# sysctl -p
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
We edit the /etc/network/interfaces file and add the lo:0 interface, which will receive requests for the IP 167.175.214.80, on both Apache nodes:
auto lo:0
iface lo:0 inet static
address 167.175.214.80
netmask 255.255.255.255
pre-up sysctl -p > /dev/null
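The new interface can then be brought up and verified with iproute (one reason the package was installed above):

```shell
zorimat220:~# ifup lo:0
zorimat220:~# ip addr show lo
```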
We bring up the load balancer on zorimat221 and zorimat223 for the first time, to begin testing:
On the other node:
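The start commands themselves are missing from the source; with heartbeat this would typically be the Debian init script (a sketch):

```shell
zorimat221:~# /etc/init.d/heartbeat start
zorimat223:~# /etc/init.d/heartbeat start
```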
From another machine, connect with a web browser to run tests. If the connection succeeds, the cluster is working correctly.
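Beyond the browser check, the balancing itself can be observed on the active director; ipvsadm is the standard LVS administration tool (its availability here is an assumption):

```shell
# Show the virtual service, its real servers, weights, and connection counts:
zorimat221:~# ipvsadm -L -n
```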