
MySQL Cluster part 1: Installing Cluster Management Node in Ubuntu Server v.12.04


There are lots of tutorials on the internet on how to set up MySQL Cluster in Ubuntu Server v.12.04, but none of them is suitable for a rookie like me, in both Linux and MySQL.
Important
1. The OS I'm using for the management node is: Ubuntu Desktop v.12.04,
2. The MySQL binary I use is: mysql-cluster-gpl-7.2.6-linux2.6-x86_64.tar.gz
Why do I mention the exact MySQL binary I use? It's because I found that:
1. Most tutorials don't mention which binary release they use,
2. Or, mostly, the links to download it are gone,
3. Some settings used in the tutorials (even in books) are no longer supported by the current MySQL Cluster release,
4. And if you are reading this tutorial a year after the date I created it, some of the settings might already be obsolete, so beware!
So after struggling for five straight days, I finally made it work! yesss
Before we get our hands dirty, it is good to understand the logical architecture of the cluster environment we are about to build. From the diagram below, our cluster will consist of 3 VMs. Feel free to examine it before we go further.

A. Preparations:
1. Install VMware Player for Windows (I'm using Windows 7, by the way),
2. Create a VM with an Ubuntu Desktop instance,
3. Disable IP filtering in this instance (read my post here)

B. Steps:
1. Log in as root
2. Create a new directory under /root/Downloads

cd ~/Downloads
mkdir mysql
cd mysql
3. Download the MySQL Cluster binary from the web

wget http://cdn.mysql.com/Downloads/MySQL-Cluster-7.2/mysql-cluster-gpl-7.2.6-linux2.6-x86_64.tar.gz

4. Extract the downloaded file into the /usr/local/ directory

tar -C /usr/local -xzvf mysql-cluster-gpl-7.2.6-linux2.6-x86_64.tar.gz

5. Create a symbolic link

ln -s /usr/local/mysql-cluster-gpl-7.2.6-linux2.6-x86_64 /usr/local/mysql
6. Open the mysql link directory, and create another directory called mysql-cluster in it

cd /usr/local/mysql/
mkdir mysql-cluster

7. Create a new config file called config.ini inside the mysql link directory

cd /usr/local/mysql/
vi config.ini

8. Type the following into the config.ini file

# config.ini content
[ndb_mgmd]
NodeId=1
HostName=192.168.1.132 # This is the IP address of the cluster management node
DataDir=/var/lib/mysql-cluster # This is the data directory for the cluster management node

[ndbd default]
DataDir=/var/lib/mysql-cluster
NoOfReplicas=2
MaxNoOfConcurrentOperations=32000
MaxNoOfAttributes=10000
MaxNoOfOrderedIndexes=512
DataMemory=1G
IndexMemory=500M

[ndbd]
NodeId=3
HostName=192.168.45.131

[ndbd]
NodeId=4
HostName=192.168.45.137

[mysqld]
HostName=192.168.45.131

[mysqld]
HostName=192.168.45.137
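Before starting ndb_mgmd, it can help to sanity-check the layout that config.ini declares. The helper below is my own sketch, not part of the tutorial: it just counts the node sections with grep, so a missing [ndbd] or [mysqld] block shows up immediately.

```shell
#!/bin/sh
# Sketch: count the node sections declared in a config.ini.
# Assumes section headers start at column 1, as in the file above.
summarize_cluster_config() {
    cfg="$1"
    mgm=$(grep -c '^\[ndb_mgmd\]' "$cfg")
    ndb=$(grep -c '^\[ndbd\]' "$cfg")    # [ndbd default] is intentionally not counted
    sql=$(grep -c '^\[mysqld\]' "$cfg")
    echo "management nodes: $mgm, data nodes: $ndb, sql nodes: $sql"
}
```

For the config.ini above, this should report 1 management node, 2 data nodes and 2 sql nodes.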
9. Create a new directory to store data, as set inside the config.ini

mkdir /var/lib/mysql-cluster

10. Execute ndb_mgmd from /usr/local/mysql/bin/ to run this management node

cd /usr/local/mysql
./bin/ndb_mgmd --config-file=config.ini
11. To check, execute ndb_mgm from /usr/local/mysql/bin

./bin/ndb_mgm

12. After the ndb_mgm application is running, type SHOW

ndb_mgm> SHOW

13. If we succeeded in installing the manager node, our ndb_mgm console screen will show something like below.
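The original screenshot is not reproduced here. Purely as an illustration (the exact version string, node IDs, and addresses depend on your own config.ini), a freshly started management node with no other nodes connected yet prints output along these lines:

```
ndb_mgm> SHOW
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=3 (not connected, accepting connect from 192.168.45.131)
id=4 (not connected, accepting connect from 192.168.45.137)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.1.132  (mysql-5.5.22 ndb-7.2.6)

[mysqld(API)]   2 node(s)
id=5 (not connected, accepting connect from 192.168.45.131)
id=6 (not connected, accepting connect from 192.168.45.137)
```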

That's it, guys,

MySQL Cluster part 2: Installing NDB Node in Ubuntu Server v.12.04

So we are going to install the NDB node now,

A. Preparations:
1. Turn on the Cluster Manager service, so that when the NDB node is running, it will register itself automatically,
2. Create a VM with an Ubuntu Server instance (I'm using v.12.04),
3. Disable IP filtering in this Ubuntu instance (read my post here).

B. Steps:

1. Log in as root
2. Open the /var/tmp/ directory, and download MySQL Cluster from the web

cd /var/tmp/
wget http://cdn.mysql.com/Downloads/MySQL-Cluster-7.2/mysql-cluster-gpl-7.2.6-linux2.6-x86_64.tar.gz

3. Extract the downloaded file into the /usr/local/ directory

tar -C /usr/local/ -xzvf mysql-cluster-gpl-7.2.6-linux2.6-x86_64.tar.gz

4. Create a symbolic link

ln -s /usr/local/mysql-cluster-gpl-7.2.6-linux2.6-x86_64 /usr/local/mysql

5. Create a new directory called mysql under the /etc/ directory

mkdir /etc/mysql

6. Create a new file called my.cnf in /etc/mysql/

cd /etc/mysql/
vi my.cnf

7. Type the following into the my.cnf file

# my.cnf content
[client]
port=3306
socket=/tmp/mysql-cluster

[mysqld]
port=3306
socket=/tmp/mysql-cluster
ndbcluster
ndb-connectstring=192.168.45.133 # IP address of your management node

[mysql_cluster]
ndb-connectstring=192.168.45.133 # IP address of your management node

Note: the value of ndb-nodeid=n must match the one registered in the ndb_mgm console at the management node after you type the SHOW command. In this case the value is 6, because this server, which has an IP of 192.168.45.137, is registered as node number 6 in the ndb_mgm console.
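The connectstring above points at a single management node. If you later run two management nodes, NDB accepts a comma-separated list of host[:port] entries instead. The tiny helper below is my own sketch for assembling one from a list of addresses:

```shell
#!/bin/sh
# Sketch: join management-node addresses into the comma-separated
# host list that ndb-connectstring accepts.
make_connectstring() {
    out=""
    for host in "$@"; do
        out="${out:+$out,}$host"
    done
    echo "$out"
}

# Example: make_connectstring 192.168.45.133 192.168.45.134
# prints "192.168.45.133,192.168.45.134"
```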
8. Create a new directory for the socket, as it is set in the my.cnf
9. Create a new directory for data, as it is set in the config.ini (refer to my post here)

mkdir /var/lib/mysql-cluster

10. Execute ndbd

cd /usr/local/mysql
./bin/ndbd

11. If everything is OK, our screen will show something like below,

12. To further check whether this NDB node has already registered with our Management node

ndb_mgm> SHOW

13. If our NDB node is registered, our ndb_mgm console screen will show something like below

That's it, guys,

MySQL Cluster part 3: Installing SQL Node in Ubuntu Server v.12.04

We are going to install the last piece of the first half of this cluster environment now, which is: an SQL node.
In this exercise, I deploy both SQL and NDB in a single VM instance.
So instead of creating a new VM, I'm using the same VM where I deployed the NDB node we created before.
Our my.cnf file, which we created in the 7th step of the previous tutorial, contains a code block like the one below.

......
[mysqld]
port=3306
socket=/tmp/mysql-cluster
ndbcluster
ndb-connectstring=192.168.45.133
......

This particular code block is actually the minimum set of settings for creating an SQL node.
So, what we need to do next is:
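Since that [mysqld] block is the minimum, a quick way to catch a missing directive before starting the server is a crude grep check. This is my own sketch, not from the tutorial, and it matches whole lines rather than truly parsing config sections:

```shell
#!/bin/sh
# Sketch: succeed only if my.cnf contains the two directives an SQL
# node minimally needs: "ndbcluster" and an "ndb-connectstring".
has_sql_node_minimum() {
    grep -q '^ndbcluster' "$1" && grep -q '^ndb-connectstring=' "$1"
}

# Example: has_sql_node_minimum /etc/mysql/my.cnf && echo "looks OK"
```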

A. Prerequisites:
1. Keep the Cluster Manager service ON
2. Keep the NDB service ON

B. Steps:
1. Create a new user called mysql

groupadd mysql
useradd -g mysql mysql

2. Download and install the libaio package from the internet

apt-get install libaio1 libaio-dev

3. After the libaio package is fully installed, execute the script for installing the initial MySQL system tables

cd /usr/local/mysql/
./scripts/mysql_install_db --user=mysql
4. If everything runs well, our screen will show something like below.

5. Set new access permissions on both the MySQL server and its data

chown -R root .
chown -R mysql data
chgrp -R mysql .

6. Execute mysql.server to start the SQL instance

cd /usr/local/mysql/
./support-files/mysql.server start

7. Execute the SHOW command in the ndb_mgm console on the Cluster Management node; and if everything runs well, our screen will show something like below.

From the above pic, we can see that we already have the 1st half of the cluster we want to make (1 ndbd node and 1 mysqld).

HOWTO: MySQL Cluster on Ubuntu 11.04


May 11, 2011 | MySQL, Networking

There are a few guides out already for setting up a MySQL Cluster; unfortunately the large majority of them aren't geared towards the beginner, and those that are generally involve a single-server setup. This guide will aim to explain the purpose of each choice and will get you up and running with a basic 3-server setup with a single load-balancing server.

Preliminary Requirements
There are a few things you will need first; 3 servers (or VMs) for the cluster and 1 server for
load-balancing. They should all be running Ubuntu 11.04 Server Edition.
Each node should have a minimum of 1GB of RAM and 16GB of hard drive space. The management node can work with 512MB of RAM and the default 8GB of hard drive space that VMware allocates.
Package Installation

These packages will be installed on all 3 servers (management node, data node 1 and data node 2). There is a bug in the installation of the MySQL Cluster packages on Ubuntu: you will need to install MySQL Server first, then install the Cluster packages, like so:
sudo apt-get install mysql-server
The mysql-server package installs some crucial configuration files and scripts that are otherwise skipped and cause dpkg to get hung up during configuration. The root password you select here WILL NOT be overwritten by the mysql-cluster-server installation; remember this password.
sudo apt-get install mysql-cluster-server
Accept any extra packages these installations need; it will take about 200MB total for both.
Configuration

This is the bread and butter of a MySQL Cluster installation. In a production environment you would run with more than 2 data nodes across more than one physical machine on the same network. There should always be as little latency between nodes as possible. If you do choose to run VMs on a physical host, you should never overallocate RAM to the nodes: the database is mainly stored in RAM, and overallocation means some data will be placed into the host's swap space, which increases latency by factors of 10, 100 or even 1000 in the worst cases.
The Management Node

This node is the brain of the cluster. Without it, the data nodes can lose sync and cause all sorts of inconsistencies. In a production environment you would have at least 2 management nodes (the configuration changes slightly and is noted in the files here). Here is the configuration file (/etc/mysql/ndb_mgmd.cnf):
[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=256M # How much memory to allocate for data storage
IndexMemory=18M # How much memory to allocate for index storage
# For DataMemory and IndexMemory, we have used the
# default values. Since the world database takes up
# only about 500KB, this should be more than enough for
# this example Cluster setup.
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Section for the cluster management node
[NDB_MGMD]
# IP address of the management node (this system)
HostName=172.16.59.134
#For multiple management nodes we just create more [NDB_MGMD] sections for each node
# Section for the storage nodes
[NDBD]
# IP address of the first storage node

HostName=172.16.59.132
DataDir=/var/lib/mysql-cluster
BackupDataDir=/var/lib/mysql-cluster/backup
DataMemory=512M
[NDBD]
# IP address of the second storage node
HostName=172.16.59.133
DataDir=/var/lib/mysql-cluster
BackupDataDir=/var/lib/mysql-cluster/backup
DataMemory=512M
# one [MYSQLD] per storage node
#These do not require any configuration, they are the front-end access to our data
#Their addresses can be assigned when they connect
[MYSQLD]
[MYSQLD]
This configuration assumes 2 things; first that your nodes are isolated on their own network
and all the machines on it are trusted (VLAN them onto their own network damnit). Second,
it assumes you are going to run two mysqld instances (I run them on the data nodes
themselves and balance the load with a 4th server using mysql-proxy).
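One consequence of NoOfReplicas=2 with the two [NDBD] sections above: the data nodes pair up into a single node group that holds the full data set twice. The arithmetic is simple enough to sketch (my own illustration, not part of the guide):

```shell
#!/bin/sh
# Sketch: number of node groups = data nodes / NoOfReplicas.
# Each group stores one partition of the data, replicated NoOfReplicas
# times, so the cluster survives losing any single node of a group.
node_groups() {
    data_nodes="$1"
    replicas="$2"
    echo $(( data_nodes / replicas ))
}

# Example: node_groups 2 2 prints 1; with 4 data nodes, node_groups 4 2 prints 2.
```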
The Data Nodes

The data nodes are much easier to configure; we can use the configuration that was installed and add 4 lines. Here are the changes that need to be made (/etc/mysql/my.cnf):
We add this to the existing section:
[mysqld]
ndbcluster
ndb-connectstring=172.16.59.134 # This is the management node
#ndb-connectstring=host=172.16.59.134,host=172.16.59.135 # This is if we had TWO management nodes, one on 172.16.59.134 and one on 172.16.59.135
We add this section at the bottom of the file:
[mysql_cluster]
ndb-connectstring=172.16.59.134
#ndb-connectstring=host=172.16.59.134,host=172.16.59.135 # If we had two management nodes
One thing missing on the data nodes is the backup directory I referenced in the ndb_mgmd.cnf file. The following commands will create it and set its permissions (do this on each data node):

sudo mkdir /var/lib/mysql-cluster/backup


sudo chown mysql:mysql -R /var/lib/mysql-cluster
Bringing it Online

Bringing the whole arrangement online involves a very specific ordering:


On the management node:
sudo /etc/init.d/mysql-ndb-mgm restart
On the data nodes (do this on all of the data nodes first):
sudo /etc/init.d/mysql-ndb restart
Once all the data nodes have their ndb daemons restarted:
sudo /etc/init.d/mysql restart
This last one will start the mysql daemons and assumes you are running them on the data
nodes.
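The ordering matters enough to be worth writing down once. A trivial sketch that just emits the sequence described above (service names as on Ubuntu 11.04; adjust if yours differ):

```shell
#!/bin/sh
# Sketch: emit the cluster start order described above, one service per line.
restart_order() {
    echo "mysql-ndb-mgm"   # 1. management node(s) first
    echo "mysql-ndb"       # 2. then ndbd on every data node
    echo "mysql"           # 3. finally the mysqld daemons
}
```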
Testing the Cluster
The Nodes are Connected

First off we want to verify that the cluster is running properly; run the following on the
management node:
sudo ndb_mgm
mgm> show
You should see at least 5 separate nodes: the first two are the data nodes, the middle one is the management node, and lastly you will see the mysql daemons. If the data nodes are stuck in the starting state, a quick restart should fix them. DO NOT JUST TYPE REBOOT.
From ndb_mgm
mgm> shutdown
Issuing the shutdown command from within ndb_mgm will bring the cluster down. You can then safely reboot the data nodes; however, make sure to restart the management node first, as the data nodes will otherwise come up without it (you should probably just reboot the management node(s) and then the data nodes for good measure). If everything goes well you should be set.
Test Databases

Connect to the first data node and run the following commands:
mysql -u root -p
mysql> show databases;
You should see a few databases; let's create a test database and add a table to it:
mysql> create database test_db;
mysql> use test_db;
mysql> create table test_table (ival int(1)) engine=ndbcluster;
mysql> insert into test_table values(1);
mysql> select * from test_table;
You should see a single value 1 in the table. Now connect to one of the other data nodes and you should be able to do the following:
sudo mysql -u root -p
mysql> show databases;
This should show the database we created on the first node, test_db.
mysql> use test_db;
mysql> select * from test_table;
If all is well, this should show the same value as we had before. Congratulations, your cluster is working.
Advanced Setup: A Load Balancer

This is actually the easier part of the guide. On a 4th server, install mysql-proxy:
sudo apt-get install mysql-proxy
Next, let's start the proxy and have it connect to our two data nodes:
screen -S proxy
mysql-proxy --proxy-backend-addresses=172.16.59.132:3306 --proxy-backend-addresses=172.16.59.133:3306

CTRL+A D
This starts the proxy and allows it to balance across the two nodes specified. If you want to specify a node as read-only, substitute --proxy-backend-addresses= with --proxy-read-only-backend-addresses=
Let's connect to the proxy and see if it works:
mysql -u root -p -h 127.0.0.1 -P 4040
mysql> use test_db;
mysql> select * from test_table;
If all is working well, you will see the same things as if you were connected to your actual mysql instances.
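To make the balancing concrete, here is a toy round-robin sketch of how successive connections could be spread across the two backends configured above. It is purely illustrative, my own helper names throughout; mysql-proxy's actual backend-selection logic is internal and differs in detail:

```shell
#!/bin/sh
# Toy sketch: pick a backend for connection number N, round-robin.
# Usage: pick_backend N backend1 backend2 ...
pick_backend() {
    n="$1"; shift
    idx=$(( n % $# + 1 ))      # cycle 1..count as N grows
    eval "echo \"\$$idx\""
}

# Example: pick_backend 0 172.16.59.132:3306 172.16.59.133:3306
# prints 172.16.59.132:3306; connection 1 then gets 172.16.59.133:3306.
```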
