
1. Layout Used

With two physical servers available, each machine is partitioned into two domains: one as a Load Balancer (2 vcpus and 2 GB RAM) and the other as an Apache Node (6 vcpus and 8 GB RAM). All IP addresses are reserved in the same segment; one address is needed per virtual machine, plus the virtual IP of the cluster.
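
As a quick reference, the addressing used later in this document for the domains on the first physical server can be sketched as follows (the second server hosts zorimat222 and zorimat223 with their own addresses in the same segment; this is only a summary of values that appear in sections 4 and 6, not a file to be installed):

167.175.214.80    # virtual IP of the cluster (managed by heartbeat, section 6)
167.175.214.81    # zorimat220, Apache node domain
167.175.214.82    # zorimat221, load-balancer domain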

2. Operating System Installation on the Physical Servers

A standard installation of the operating system is performed, with the following disk partitioning scheme; this applies to both physical nodes:

orimat220:~# fdisk -l

Disco /dev/sda: 299.4 GB, 299439751168 bytes
255 heads, 63 sectors/track, 36404 cylinders
Units = cilindros of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000080

Disposit. Inicio  Comienzo     Fin     Bloques  Id  Sistema
/dev/sda1   *            1    1216    9767488+  83  Linux
/dev/sda2             1217   36404  282647610    5  Extendida
/dev/sda5             1217    1459    1951866   82  Linux swap / Solaris
/dev/sda6             1460   36404  280695681   8e  Linux LVM

The network interfaces of the server are configured so that we can later dedicate one to each domain (a possible /etc/network/interfaces sketch is shown after the output below):

eth1      Link encap:Ethernet  HWaddr 00:24:e8:30:a6:7d
          inet addr:167.175.215.220  Bcast:167.175.215.255  Mask:255.255.255.0
          inet6 addr: fe80::224:e8ff:fe30:a67d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1762059 errors:0 dropped:0 overruns:0 frame:0
          TX packets:30966 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:142263572 (135.6 MiB)  TX bytes:287122045 (273.8 MiB)

eth2      Link encap:Ethernet  HWaddr 00:24:e8:30:a6:7f
          inet addr:167.175.202.6  Bcast:167.175.202.255  Mask:255.255.255.0
          inet6 addr: fe80::224:e8ff:fe30:a67f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1746098 errors:0 dropped:0 overruns:0 frame:0
          TX packets:40654 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:191993369 (183.0 MiB)  TX bytes:3448324 (3.2 MiB)
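
A minimal /etc/network/interfaces sketch matching the output above (this is an assumption about how the addresses were configured; the gateway and any other options must be adjusted to the real environment):

auto eth1
iface eth1 inet static
    address 167.175.215.220
    netmask 255.255.255.0

auto eth2
iface eth2 inet static
    address 167.175.202.6
    netmask 255.255.255.0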

The /etc/apt/sources.list file is modified so that the required packages can be obtained:

#
# deb cdrom:[Debian GNU/Linux 5.0.3 _Lenny_ - Official amd64 CD Binary-1 20090905-11:02]/ lenny main

#deb cdrom:[Debian GNU/Linux 5.0.3 _Lenny_ - Official amd64 CD Binary-1 20090905-11:02]/ lenny main

# Line commented out by installer because it failed to verify:
deb http://security.debian.org/ lenny/updates main
# Line commented out by installer because it failed to verify:
#deb-src http://security.debian.org/ lenny/updates main

# Line commented out by installer because it failed to verify:
deb http://volatile.debian.org/debian-volatile lenny/volatile main
# Line commented out by installer because it failed to verify:
#deb-src http://volatile.debian.org/debian-volatile lenny/volatile main

deb http://ftp.debian.org/debian lenny main contrib non-free

The package indexes are updated:

orimat220:~# aptitude update

The installed packages are upgraded:

orimat220:~# aptitude safe-upgrade

Install the packages related to LVM and Xen for later use:

orimat220:~# aptitude install xen-hypervisor-3.2-1-amd64 xen-linux-system-2.6.26-2-xen-amd64 \
    xen-utils-3.2-1 xenstore-utils xenwatch xen-shell xen-tools lvm2

Reboot the machine so the kernel change takes effect:

orimat220:~# shutdown -r now

From this point on, the nodes of the load-balancing cluster and the SQL cluster are configured.

3. Xen Configuration

To get the best performance out of the Xen domain images, LVM must be used instead of file-backed disk images for the virtual machines. For this we use an LVM2 volume group that will hold the storage of every domain, so we start by creating the physical volume, creating the volume group, and activating it:

orimat220:~# pvcreate /dev/sda6
orimat220:~# vgcreate VM_Apache1 /dev/sda6
orimat220:~# vgchange -a y VM_Apache1

If we inspect these volumes, the following is shown:

orimat220:~# pvdisplay
--- Physical volume ---
PV Name /dev/sda6
VG Name VM_Apache1
PV Size 267,69 GB / not usable 897,00 KB
Allocatable yes
PE Size (KByte) 4096
Total PE 68529
Free PE 254
Allocated PE 68275
PV UUID S9Egq2-3zO7-r2ad-IyWw-Kxg5-6qWs-cn4aM3

orimat220:~# vgdisplay
--- Volume group ---
VG Name VM_Apache1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 41
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 4
Max PV 0
Cur PV 1
Act PV 1
VG Size 267,69 GB
PE Size 4,00 MB
Total PE 68529
Alloc PE / Size 68275 / 266,70 GB
Free PE / Size 254 / 1016,00 MB
VG UUID Vs11vP-1p6Q-auM1-2MPr-jFTh-JdN9-LcpDTG

The /etc/xen-tools/xen-tools.conf file is modified for the creation of the Xen domains:

##
# /etc/xen-tools/xen-tools.conf
##
#
# This is the global configuration file for the scripts included
# within the xen-tools package.
#
# For more details please see:
#
# http://xen-tools.org/
#
##

##
#
# File Format
# -----------
#
# Anything following a '#' character is ignored as a comment.
#
# Otherwise the format of this file "key = value". The value of
# any keys in this file may be constructed via the output of a command.
#
# For example:
#
# kernel = /boot/vmlinuz-`uname -r`
#
##

#
##
# Output directory for storing loopback images.
#
# If you choose to use loopback images, which are simple to manage but
# slower than LVM partitions, then specify a directory here and uncomment
# the line.
#
# New instances will be stored in subdirectories named after their
# hostnames.
#
##
# dir = /home/xen
#

#
##
#
# If you don't wish to use loopback images then you may specify an
# LVM volume group here instead
#
##
lvm = VM_Apache1

#
##
#
# Installation method.
#
# There are four distinct methods which you may to install a new copy
# of Linux to use in your Xen guest domain:
#
# - Installation via the debootstrap command.
# - Installation via the rpmstrap command.
# - Installation via the rinse command.
# - Installation by copying a directory containing a previous installation.
# - Installation by untarring a previously archived image.
#
# NOTE That if you use the "untar", or "copy" options you should ensure
# that the image you're left with matches the 'dist' setting later in
# this file.
#
#
##
#
#
# install-method = [ debootstrap | rinse | rpmstrap | copy | tar ]
#
#
install-method = debootstrap

#
# If you're using the "copy", or "tar" installation methods you must
# need to specify the source location to copy from, or the source
# .tar file to unpack.
#
# You may specify that with a line such as:
#
# install-source = /path/to/copy
# install-source = /some/path/img.tar
#
#
#
##
# Command definitions.
##
#
# The "rinse", and "rpmstrap" commands are hardwired into
# the script, but if you wish to modify the commands which are executed
# when installing new systems by a "copy", "debootstrap", or "tar" method
# you can do so here:
#
# (This allows you to install from a .tar.bz file, rather than a plain
# tar file, use cdebootstrap, etc.)
#
# install-method=copy:
# copy-cmd = /bin/cp -a $src/* $dest
#
# install-method=debootstrap:
# debootstrap-cmd=/usr/sbin/debootstrap
#
# install-method=tar:
# tar-cmd = /bin/tar --numeric-owner -xvf $src
#
#

#
##
# Disk and Sizing options.
##
#
size = 4Gb # Disk image size.
memory = 128Mb # Memory size
swap = 128Mb # Swap size
# noswap = 1 # Don't use swap at all for the new system.
fs = ext3 # use the EXT3 filesystem for the disk image.
dist = lenny # Default distribution to install.
image = sparse # Specify sparse vs. full disk images.

#
# Currently supported and tested distributions include:
#
# via Debootstrap:
#
# Debian:
# sid, sarge, etch, lenny.
#
# Ubuntu:
# edgy, feisty, dapper.
#
# via Rinse:
# centos-4, centos-5.
# fedora-core-4, fedora-core-5, fedora-core-6, fedora-core-7
#
#

##
# Networking setup values.
##

#
# Uncomment and adjust these network settings if you wish to give your
# new instances static IP addresses.
#
gateway = 167.175.201.1
netmask = 255.255.255.0
broadcast = 167.175.201.255
#
# Uncomment this if you wish the images to use DHCP
#
# dhcp = 1

##
# Misc options
##

#
# Uncomment the following line if you wish to disable the caching
# of downloaded .deb files when using debootstrap to install images.
#
# cache = no
#

#
# Uncomment the following line if you wish to interactively setup
# a new root password for images.
#
passwd = 1

#
# If you'd like all accounts on your host system which are not present
# on the guest system to be copied over then uncomment the following line.
#
# accounts = 1
#

#
# Default kernel and ramdisk to use for the virtual servers
#
kernel = /boot/vmlinuz-`uname -r`
initrd = /boot/initrd.img-`uname -r`

#
# The architecture to use when using debootstrap, rinse, or rpmstrap.
#
# This is most useful on 64 bit host machines, for other systems it
# doesn't need to be used.
#
# arch=[i386|amd64]
#

#
# The default mirror for debootstrap to install Debian-derived distributions
#
mirror = http://ftp.us.debian.org/debian/

#
# A mirror suitable for use when installing the Dapper release of Ubuntu.
#
# mirror = http://gb.archive.ubuntu.com/ubuntu/

#
# If you like you could use per-distribution mirrors, which will
# be more useful if you're working in an environment where you want
# to regularly use multiple distributions:
#
# mirror_sid=http://ftp.us.debian.org/debian
# mirror_sarge=http://ftp.us.debian.org/debian
# mirror_etch=http://ftp.us.debian.org/debian
# mirror_dapper=http://archive.ubuntu.com/ubuntu
# mirror_edgy=http://archive.ubuntu.com/ubuntu
# mirror_feisty=http://archive.ubuntu.com/ubuntu
# mirror_gutsy=http://archive.ubuntu.com/ubuntu

#
# Filesystem options for the different filesystems we support.
#
ext3_options = noatime,nodiratime,errors=remount-ro
ext2_options = noatime,nodiratime,errors=remount-ro
xfs_options = defaults
reiser_options = defaults

#
# Uncomment if you wish newly created images to boot once they've been
# created.
#
# boot = 1
#
# If you're using the lenny or later version of the Xen guest kernel you will
# need to make sure that you use 'hvc0' for the guest serial device,
# and 'xvdX' instead of 'sdX' for serial devices.
#
# You may specify the things to use here:
#
serial_device = hvc0 #default
# serial_device = tty1
#
disk_device = xvda #default
# disk_device = sda
#

#
# Here we specify the output directory which the Xen configuration
# files will be written to, and the suffix to give them.
#
# Historically xen-tools have created configuration files in /etc/xen,
# and given each file the name $hostname.cfg. If you want to change
# that behaviour you may do so here.
#
#
# output = /etc/xen
# extension = .cfg
#

The networking variables for domain creation are configured in /etc/xen/xend-config.sxp:

# -*- sh -*-

#
# Xend configuration file.
#

# This example configuration is appropriate for an installation that
# utilizes a bridged network configuration. Access to xend via http
# is disabled.

# Commented out entries show the default for that entry, unless otherwise
# specified.

(logfile /var/log/xen/xend.log)
#(loglevel DEBUG)

# The Xen-API server configuration. (Please note that this server is
# available as an UNSUPPORTED PREVIEW in Xen 3.0.4, and should not be relied
# upon).
#
# This value configures the ports, interfaces, and access controls for the
# Xen-API server. Each entry in the list starts with either unix, a port
# number, or an address:port pair. If this is "unix", then a UDP socket is
# opened, and this entry applies to that. If it is a port, then Xend will
# listen on all interfaces on that TCP port, and if it is an address:port
# pair, then Xend will listen on the specified port, using the interface with
# the specified address.
#
# The subsequent string configures the user-based access control for the
# listener in question. This can be one of "none" or "pam", indicating either
# that users should be allowed access unconditionally, or that the local
# Pluggable Authentication Modules configuration should be used. If this
# string is missing or empty, then "pam" is used.
#
# The final string gives the host-based access control for that listener. If
# this is missing or empty, then all connections are accepted. Otherwise,
# this should be a space-separated sequence of regular expressions; any host
# with a fully-qualified domain name or an IP address that matches one of
# these regular expressions will be accepted.
#
# Example: listen on TCP port 9363 on all interfaces, accepting connections
# only from machines in example.com or localhost, and allow access through
# the unix domain socket unconditionally:
#
# (xen-api-server ((9363 pam '^localhost$ example\\.com$')
# (unix none)))
#
# Optionally, the TCP Xen-API server can use SSL by specifying the private
# key and certificate location:
#
# (9367 pam '' /etc/xen/xen-api.key /etc/xen/xen-api.crt)
#
# Default:
# (xen-api-server ((unix)))

#(xend-http-server no)
#(xend-unix-server no)
#(xend-tcp-xmlrpc-server no)
#(xend-unix-xmlrpc-server yes)
#(xend-relocation-server no)

#(xend-unix-path /var/lib/xend/xend-socket)

# Address and port xend should use for the legacy TCP XMLRPC interface,
# if xen-tcp-xmlrpc-server is set.
#(xen-tcp-xmlrpc-server-address 'localhost')
#(xen-tcp-xmlrpc-server-port 8006)

# SSL key and certificate to use for the legacy TCP XMLRPC interface.
# Setting these will mean that this port serves only SSL connections as
# opposed to plaintext ones.
#(xend-tcp-xmlrpc-server-ssl-key-file /etc/xen/xmlrpc.key)
#(xend-tcp-xmlrpc-server-ssl-cert-file /etc/xen/xmlrpc.crt)

# Port xend should use for the HTTP interface, if xend-http-server is set.
#(xend-port 8000)

# Port xend should use for the relocation interface, if xend-relocation-server
# is set.
#(xend-relocation-port 8002)

# Address xend should listen on for HTTP connections, if xend-http-server is
# set.
# Specifying 'localhost' prevents remote connections.
# Specifying the empty string '' (the default) allows all connections.
#(xend-address '')
#(xend-address localhost)

# Address xend should listen on for relocation-socket connections, if
# xend-relocation-server is set.
# Meaning and default as for xend-address above.
#(xend-relocation-address '')

# The hosts allowed to talk to the relocation port. If this is empty (the
# default), then all connections are allowed (assuming that the connection
# arrives on a port and interface on which we are listening; see
# xend-relocation-port and xend-relocation-address above). Otherwise, this
# should be a space-separated sequence of regular expressions. Any host with
# a fully-qualified domain name or an IP address that matches one of these
# regular expressions will be accepted.
#
# For example:
# (xend-relocation-hosts-allow '^localhost$ ^.*\\.example\\.org$')
#
#(xend-relocation-hosts-allow '')

# The limit (in kilobytes) on the size of the console buffer
#(console-limit 1024)

##
# To bridge network traffic, like this:
#
# dom0: ----------------- bridge -> real eth0 -> the network
# |
# domU: fake eth0 -> vifN.0 -+
#
# use
#
#(network-script network-custom)
(network-script network-bridge-wrapper)
#
# Your default ethernet device is used as the outgoing interface, by default.
# To use a different one (e.g. eth1) use
#
# (network-script 'network-bridge netdev=eth1')
#
# The bridge is named xenbr0, by default. To rename the bridge, use
#
# (network-script 'network-bridge bridge=<name>')
#
# It is possible to use the network-bridge script in more complicated
# scenarios, such as having two outgoing interfaces, with two bridges, and
# two fake interfaces per guest domain. To do things like this, write
# yourself a wrapper script, and call network-bridge from it, as appropriate.
#
#(network-script network-dummy)

# The script used to control virtual interfaces. This can be overridden on a
# per-vif basis when creating a domain or a configuring a new vif. The
# vif-bridge script is designed for use with the network-bridge script, or
# similar configurations.
#
# If you have overridden the bridge name using
# (network-script 'network-bridge bridge=<name>') then you may wish to do the
# same here. The bridge name can also be set when creating a domain or
# configuring a new vif, but a value specified here would act as a default.
#
# If you are using only one bridge, the vif-bridge script will discover that,
# so there is no need to specify it explicitly.
#
(vif-script vif-bridge)

## Use the following if network traffic is routed, as an alternative to the
# settings for bridged networking given above.
#(network-script network-route)
#(vif-script vif-route)

## Use the following if network traffic is routed with NAT, as an alternative
# to the settings for bridged networking given above.
#(network-script network-nat)
#(vif-script vif-nat)

# Dom0 will balloon out when needed to free memory for domU.
# dom0-min-mem is the lowest memory level (in MB) dom0 will get down to.
# If dom0-min-mem=0, dom0 will never balloon out.
(dom0-min-mem 196)

# In SMP system, dom0 will use dom0-cpus # of CPUS
# If dom0-cpus = 0, dom0 will take all cpus available
(dom0-cpus 0)

# Whether to enable core-dumps when domains crash.
#(enable-dump no)

# The tool used for initiating virtual TPM migration
#(external-migration-tool '')

# The interface for VNC servers to listen on. Defaults
# to 127.0.0.1 To restore old 'listen everywhere' behaviour
# set this to 0.0.0.0
#(vnc-listen '127.0.0.1')

# The default password for VNC console on HVM domain.
# Empty string is no authentication.
(vncpasswd '')

# The VNC server can be told to negotiate a TLS session
# to encryption all traffic, and provide x509 cert to
# clients enalbing them to verify server identity. The
# GTK-VNC widget, virt-viewer, virt-manager and VeNCrypt
# all support the VNC extension for TLS used in QEMU. The
# TightVNC/RealVNC/UltraVNC clients do not.
#
# To enable this create x509 certificates / keys in the
# directory /etc/xen/vnc
#
# ca-cert.pem - The CA certificate
# server-cert.pem - The Server certificate signed by the CA
# server-key.pem - The server private key
#
# and then uncomment this next line
# (vnc-tls 1)

# The certificate dir can be pointed elsewhere..
#
# (vnc-x509-cert-dir /etc/xen/vnc)

# The server can be told to request & validate an x509
# certificate from the client. Only clients with a cert
# signed by the trusted CA will be able to connect. This
# is more secure the password auth alone. Passwd auth can
# used at the same time if desired. To enable client cert
# checking uncomment this:
#
# (vnc-x509-verify 1)

# The default keymap to use for the VM's virtual keyboard
# when not specififed in VM's configuration
#(keymap 'en-us')

# Script to run when the label of a resource has changed.
#(resource-label-change-script '')

Create the file /etc/xen/scripts/network-bridge-wrapper:

#!/bin/sh
/etc/xen/scripts/network-bridge "$@" netdev=eth1
/etc/xen/scripts/network-bridge "$@" netdev=eth2
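
The wrapper must be executable so that xend can run it at start-up (a small but easy-to-miss step; the path is the one created above):

orimat220:~# chmod +x /etc/xen/scripts/network-bridge-wrapper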

Reboot the machine and verify that it comes up with the Xen kernel:

orimat220:~# shutdown -r now
. . .
orimat220:~# uname -r
2.6.26-2-xen-amd64

The operating system kernel is now running in Xen mode with domain support.

4. Creating the Xen Domains

The configuration files are created with xen-tools, for both domains:

orimat220:~# xen-create-image --hostname=zorimat220 --size=129.94Gb --swap=5.86Gb \
    --ip=167.175.214.81 --memory=8Gb --arch=amd64 --role=udev
orimat220:~# xen-create-image --hostname=zorimat221 --size=128.94Gb --swap=1.95Gb \
    --ip=167.175.214.82 --memory=4Gb --arch=amd64 --role=udev

So the file for the load-balancing nodes will look as follows; the vcpus line must be added and the vif line must be modified:
#
# Configuration file for the Xen instance zorimat221, created
# by xen-tools 3.9 on Tue Dec 1 16:24:48 2009.
#

#
# Kernel + memory size
#
kernel = '/boot/vmlinuz-2.6.26-2-xen-amd64'
ramdisk = '/boot/initrd.img-2.6.26-2-xen-amd64'
memory = '4096'
vcpus = '2'

#
# Disk device(s).
#
root = '/dev/xvda2 ro'
disk = [
'phy:/dev/VM_Apache1/zorimat221-swap,xvda1,w',
'phy:/dev/VM_Apache1/zorimat221-disk,xvda2,w',
]

#
# Hostname
#
name = 'zorimat221'

#
# Networking
#
vif = [ 'ip=167.175.214.82,mac=00:16:3E:4B:A5:70,bridge=eth2' ]

#
# Behaviour
#
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'

And for the Apache nodes:

#
# Configuration file for the Xen instance zorimat220, created
# by xen-tools 3.9 on Tue Dec 1 16:23:11 2009.
#

#
# Kernel + memory size
#
kernel = '/boot/vmlinuz-2.6.26-2-xen-amd64'
ramdisk = '/boot/initrd.img-2.6.26-2-xen-amd64'
memory = '7168'
vcpus = '6'

#
# Disk device(s).
#
root = '/dev/xvda2 ro'
disk = [
'phy:/dev/VM_Apache1/zorimat220-swap,xvda1,w',
'phy:/dev/VM_Apache1/zorimat220-disk,xvda2,w',
]

#
# Hostname
#
name = 'zorimat220'

#
# Networking
#
vif = [ 'ip=167.175.214.81,mac=00:16:3E:AB:E3:DE,bridge=eth1' ]

#
# Behaviour
#
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'

Once these files have been modified, we create the domains, whose storage lives on logical volumes inside the configured LVM2 volume group:

orimat220:~# xm create /etc/xen/zorimat220.cfg
Using config file "/etc/xen/zorimat220.cfg".
Started domain zorimat220
orimat220:~# xm create /etc/xen/zorimat221.cfg
Using config file "/etc/xen/zorimat221.cfg".
Started domain zorimat221

We verify these domains:

orimat220:~# xm list
Name ID Mem VCPUs State Time(s)
Domain-0 0 830 8 r----- 925.3
zorimat220 17 7168 6 -b---- 43.4
zorimat221 18 4096 2 -b---- 3.9

Finally, we verify the LVM configuration with the new volumes, created automatically by xen-create-image:

orimat220:~# lvdisplay
--- Logical volume ---
LV Name /dev/VM_Apache1/zorimat220-swap
VG Name VM_Apache1
LV UUID vzj2Q0-lKNz-11hQ-r1Ha-pGk0-Kdvk-JGwZ49
LV Write Access read/write
LV Status available
# open 1
LV Size 5,86 GB
Current LE 1501
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:0

--- Logical volume ---
LV Name /dev/VM_Apache1/zorimat220-disk
VG Name VM_Apache1
LV UUID 8ULncs-hQNg-qecd-8KR2-GASt-rVYp-sl3sYu
LV Write Access read/write
LV Status available
# open 1
LV Size 129,94 GB
Current LE 33265
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:1

--- Logical volume ---
LV Name /dev/VM_Apache1/zorimat221-swap
VG Name VM_Apache1
LV UUID cbpUM9-macw-kfuW-oROJ-XQrL-2UDd-iUMGkR
LV Write Access read/write
LV Status available
# open 1
LV Size 1,95 GB
Current LE 500
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:2

--- Logical volume ---
LV Name /dev/VM_Apache1/zorimat221-disk
VG Name VM_Apache1
LV UUID 2CRAS0-8TZm-3YEm-mjn1-UxB2-8cG8-ehcCit
LV Write Access read/write
LV Status available
# open 1
LV Size 128,94 GB
Current LE 33009
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:3

The domains are now ready with a base operating system configuration, and the services must be configured.
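
To continue with the service configuration inside each domain, one option (a sketch; ssh to the domain's IP works just as well once its networking is up) is to attach to its console from dom0:

orimat220:~# xm console zorimat220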

5. Apache Cluster Configuration

The Apache server is installed with PHP support and several PHP extensions from the Debian repositories:

orimat221:~# aptitude install libapache2-mod-php5 php5 php5-curl php5-ffmpeg php5-gd \
    php5-imagick php5-json php5-ldap php5-mapscript php5-mcrypt php5-memcache php5-mysql \
    php5-pgsql php5-xmlrpc
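
A quick way to confirm that Apache and mod_php work after the installation (a sketch, assuming the default /var/www document root of Debian Lenny; the info.php name is only illustrative) is to drop a phpinfo() page in place and reload Apache:

orimat221:~# echo '<?php phpinfo(); ?>' > /var/www/info.php
orimat221:~# /etc/init.d/apache2 reload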

6. Load Balancer Configuration as an Active-Passive Cluster

For the cluster to work with a virtual IP and load balancing, the heartbeat and ldirectord packages must be configured on zorimat221 and zorimat223. We modify the /etc/modules file, adding the following lines, and then reboot so that these modules are active in the running kernel (a way to load them immediately, without rebooting, is sketched after the list):

ip_vs_dh
ip_vs_ftp
ip_vs
ip_vs_lblc
ip_vs_lblcr
ip_vs_lc
ip_vs_nq
ip_vs_rr
ip_vs_sed
ip_vs_sh
ip_vs_wlc
ip_vs_wrr
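
If a reboot is not convenient at this point, the same modules can be loaded into the running kernel by hand (a sketch; keeping them in /etc/modules still guarantees they are loaded on every boot):

zorimat221:~# for m in ip_vs ip_vs_dh ip_vs_ftp ip_vs_lblc ip_vs_lblcr ip_vs_lc ip_vs_nq \
    ip_vs_rr ip_vs_sed ip_vs_sh ip_vs_wlc ip_vs_wrr; do modprobe $m; done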

Install the packages required for the load-balancer cluster:

zorimat221:~# aptitude install ldirectord heartbeat


We modify /etc/sysctl.conf, uncommenting the line that enables packet forwarding:

# Enables packet forwarding
net.ipv4.ip_forward = 1

Run sysctl so that the changes take effect immediately:

zorimat221:~# sysctl -p
net.ipv4.ip_forward = 1

Create the /etc/ha.d/ha.cf file:

logfacility local0
bcast eth0
mcast eth0 225.0.0.1 694 1 0
auto_failback off
node zorimat221
node zorimat223
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster

Create the /etc/ha.d/haresources file:

zorimat221 \
ldirectord::ldirectord.cf \
LVSSyncDaemonSwap::master \
IPaddr2::167.175.214.80/24/eth0/167.175.214.255

Create the /etc/ha.d/authkeys file:

auth 3
3 md5 apacherandom

Change the permissions of the authkeys file so that only root has read-write access:

zorimat221:~# chmod 600 /etc/ha.d/authkeys

Create the /etc/ha.d/ldirectord.cf file:

# Global Directives
checktimeout=10
checkinterval=2
autoreload=no
logfile="local0"
quiescent=yes

virtual = orimat222:80
real = zorimat220:80 gate
real = zorimat222:80 gate
fallback = 127.0.0.1:80 gate
service = http
request = "ldirector.html"
receive = "Test Page"
scheduler = rr
protocol = tcp
checktype = negotiate
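
For the negotiate check defined above to succeed, each real server (zorimat220 and zorimat222) must serve the ldirector.html page with the expected content; a minimal sketch, assuming the default /var/www document root:

zorimat220:~# echo "Test Page" > /var/www/ldirector.html
zorimat222:~# echo "Test Page" > /var/www/ldirector.html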

The boot-time links for the services are adjusted (ldirectord is removed from the boot sequence because heartbeat starts it through the haresources entry):
zorimat221:~# update-rc.d -f heartbeat remove
zorimat221:~# update-rc.d heartbeat start 75 2 3 4 5 . stop 05 0 1 6 .
zorimat221:~# update-rc.d -f ldirectord remove

We prepare the Apache cluster for load balancing with the iproute package; this applies to both machines (zorimat220 and zorimat222):

zorimat220:~# aptitude install iproute

Modify /etc/sysctl.conf, adding the following lines:

###############################################################
# Apache Cluster Configs
###############################################################
# Enable configuration of arp_ignore option
net.ipv4.conf.all.arp_ignore = 1

# When an arp request is received on eth0, only respond if that address is
# configured on eth0. In particular, do not respond if the address is
# configured on lo
net.ipv4.conf.eth0.arp_ignore = 1

# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_ignore = 1

# Enable configuration of arp_announce option
net.ipv4.conf.all.arp_announce = 2

# When making an ARP request sent through eth0 Always use an address that
# is configured on eth0 as the source address of the ARP request. If this
# is not set, and packets are being sent out eth0 for an address that is on
# lo, and an arp request is required, then the address on lo will be used.
# As the source IP address of arp requests is entered into the ARP cache on
# the destination, it has the effect of announcing this address. This is
# not desirable in this case as adresses on lo on the real-servers should
# be announced only by the linux-director.
net.ipv4.conf.eth0.arp_announce = 2

# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_announce = 2
###############################################################
# Apache Cluster Configs End
###############################################################

Run sysctl so that the changes take effect immediately:

zorimat220:~# sysctl -p
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2

We modify /etc/network/interfaces and add the lo:0 interface, which will receive the requests for the IP 167.175.214.80, on both Apache nodes:

auto lo:0
iface lo:0 inet static
address 167.175.214.80
netmask 255.255.255.255
pre-up sysctl -p > /dev/null

Then bring up the lo:0 interface:

zorimat220:~# ifup lo:0

We start the balancer on zorimat221 and zorimat223 for the first time, to begin testing:

zorimat221:~# /etc/init.d/heartbeat stop
zorimat221:~# /etc/init.d/ldirectord stop
zorimat221:~# /etc/init.d/heartbeat start

If everything works correctly, both machines are rebooted:

zorimat221:~# shutdown -r now

To verify that both load balancers are working, we test as follows; the active node should show the following:

zorimat223:/etc# ip addr sh eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
link/ether 00:16:3e:95:fa:9a brd ff:ff:ff:ff:ff:ff
inet 167.175.202.60/24 brd 167.175.202.255 scope global eth0
inet 167.175.202.24/24 brd 167.175.202.255 scope global secondary eth0
inet6 fe80::216:3eff:fe95:fa9a/64 scope link
valid_lft forever preferred_lft forever
zorimat223:/etc# ldirectord ldirectord.cf status
ldirectord for /etc/ha.d/ldirectord.cf is running with pid: 1771
zorimat223:/etc# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 167.175.202.24:3306 wrr
-> 167.175.202.18:3306 Route 1 0 0
-> 167.175.202.80:3306 Route 1 0 0
zorimat223:/etc# /etc/ha.d/resource.d/LVSSyncDaemonSwap master status
master running
(ipvs_syncmaster pid: 1854)

On the other node:

zorimat221:~# ip addr sh eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
link/ether 00:16:3e:4b:a5:70 brd ff:ff:ff:ff:ff:ff
inet 167.175.202.139/24 brd 167.175.202.255 scope global eth0
inet6 fe80::216:3eff:fe4b:a570/64 scope link
valid_lft forever preferred_lft forever
zorimat221:~# ldirectord ldirectord.cf status
ldirectord is stopped for /etc/ha.d/ldirectord.cf
zorimat221:~# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
zorimat221:~# /etc/ha.d/resource.d/LVSSyncDaemonSwap master status
master stopped
(ipvs_syncbackup pid: 2684)

From another machine, connect with a browser to run the tests. If the connection is served, the cluster is working correctly (a command-line check is sketched below):
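
A command-line check from a client machine can also be used (a sketch; the clientpc prompt is only illustrative and assumes curl is installed there). The request goes through the virtual IP and should return the Test Page content defined in ldirector.html:

clientpc:~$ curl http://167.175.214.80/ldirector.html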

REVIEWED BY: ELVIS FERNÁNDEZ (FERNANDEZEQ@PDVSA.COM), DATE: 02/12/2009
APPROVED BY: OCTAVIO ALFONZO (ALFONZOO@PDVSA.COM), DATE: 02/12/2009
