
Getting Started with Percona XtraDB Cluster


As a DevOps activist I am exploring Percona XtraDB, and in a series of blogs I will share my learnings. This blog captures the step-by-step details of installing Percona XtraDB in cluster mode.

Introduction: Why Cluster Mode?

Percona XtraDB Cluster is a high-availability and scalability solution for MySQL users which provides:
          Synchronous replication : a transaction is either committed on all nodes or on none.
          Multi-master replication : you can write to any node.
          Parallel replication : replicated events are applied in parallel on all slave nodes.
          Automatic node provisioning
          Data consistency

Straight into the Act: Installing Percona XtraDB Cluster

Pre-requisites/Assumptions
  1. OS - Ubuntu
  2. 3 Ubuntu nodes are available
For the sake of this discussion let's name the nodes as follows:
node 1
hostname: percona_xtradb_cluster1
IP: 192.168.1.2

node 2
hostname: percona_xtradb_cluster2
IP: 192.168.1.3

node 3
hostname: percona_xtradb_cluster3
IP: 192.168.1.4
Repeat the steps below on all three nodes.

STEP 1 : Add the Percona repository
$ echo "deb http://repo.percona.com/apt precise main" >> /etc/apt/sources.list.d/percona.list
$ echo "deb-src http://repo.percona.com/apt precise main" >> /etc/apt/sources.list.d/percona.list
$ apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
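Note : If you are running these commands as a non-root user, the shell redirections above will fail even with sudo, because the redirection happens in your own (unprivileged) shell. A common workaround, equivalent to the commands above, is to pipe through tee:
$ echo "deb http://repo.percona.com/apt precise main" | sudo tee -a /etc/apt/sources.list.d/percona.list
$ echo "deb-src http://repo.percona.com/apt precise main" | sudo tee -a /etc/apt/sources.list.d/percona.list
$ sudo apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A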
STEP 2 : After adding the Percona repository, update apt so that the new packages are included in the apt cache:
$ apt-get update
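If you want to confirm that the Percona repository is now visible to apt, you can check the candidate version of the cluster package (the exact version string will vary):
$ apt-cache policy percona-xtradb-cluster-56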
STEP 3 : Install Percona XtraDB Cluster :
$ apt-get install -y percona-xtradb-cluster-56 qpress xtrabackup
STEP 4 : Install additional packages for editing files, downloading, etc. :
$ apt-get install -y python-software-properties vim wget curl netcat

With the above steps we have installed Percona XtraDB Cluster on every node. Now we'll configure each node so that a cluster of three nodes can be formed.

Node Configuration:

Add/Modify the file /etc/mysql/my.cnf on the first node :
[mysqld]  # This section is for MySQL configuration
user = mysql
default_storage_engine = InnoDB
basedir = /usr
datadir = /var/lib/mysql
socket = /var/run/mysqld/mysqld.sock
port = 3306
innodb_autoinc_lock_mode = 2
log_queries_not_using_indexes = 1
max_allowed_packet = 128M
binlog_format = ROW
wsrep_provider = /usr/lib/libgalera_smm.so
wsrep_node_address = 192.168.1.2
wsrep_cluster_name="newcluster"
wsrep_cluster_address = gcomm://192.168.1.2,192.168.1.3,192.168.1.4
wsrep_node_name = cluster1
wsrep_slave_threads = 4
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = sst:secret

[sst]  # This section is for SST (state snapshot transfer) configuration
streamfmt = xbstream

[xtrabackup]  # This section defines tuning configuration for xtrabackup
compress
compact
parallel = 2
compress_threads = 2
rebuild_threads = 2
Note :
         wsrep_node_address = {IP of the current node}
         wsrep_cluster_name = {Name of the cluster}
         wsrep_cluster_address = gcomm://{Comma-separated IP addresses of the nodes in the cluster}
         wsrep_node_name = {Name of the current node, used to identify it within the cluster}
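It is also worth confirming that the Galera provider library referenced by wsrep_provider really exists at the configured path (assuming the packages above installed it under /usr/lib):
$ ls -l /usr/lib/libgalera_smm.so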

Now that the first node is configured, start its services.
Start the node :
$ service mysql bootstrap-pxc
Create the sst user for authentication of cluster nodes :
$ mysql -e "GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sst'@'localhost' IDENTIFIED BY 'secret';"
Check cluster status :
$ mysql -e "show global status like 'wsrep%';"

Configuration file for second node:
[mysqld]
user = mysql
default_storage_engine = InnoDB
basedir = /usr
datadir = /var/lib/mysql
socket = /var/run/mysqld/mysqld.sock
port = 3306
innodb_autoinc_lock_mode = 2
log_queries_not_using_indexes = 1
max_allowed_packet = 128M
binlog_format = ROW
wsrep_provider = /usr/lib/libgalera_smm.so
wsrep_node_address = 192.168.1.3
wsrep_cluster_name="newcluster"
wsrep_cluster_address = gcomm://192.168.1.2,192.168.1.3,192.168.1.4
wsrep_node_name = cluster2
wsrep_slave_threads = 4
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = sst:secret

[sst]
streamfmt = xbstream

[xtrabackup]
compress
compact
parallel = 2
After configuring node 2, start its services.
Start node 2 :
$ service mysql start
Check cluster status :
$ mysql -e "show global status like 'wsrep%';"
Now configure node 3 in the same way. The only changes are listed below.
Changes in configuration for node 3 :
      wsrep_node_address = 192.168.1.4
      wsrep_node_name = cluster3
Start node 3 :
$ service mysql start
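Once node 3 is up, a final status check on any node should show all three members: wsrep_cluster_size should be 3 and wsrep_incoming_addresses should list the three IPs used above.
$ mysql -e "show global status like 'wsrep_cluster_size';"
$ mysql -e "show global status like 'wsrep_incoming_addresses';"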

Test the Percona XtraDB Cluster:

Log in with the mysql client on any node:
mysql>create database opstree;
mysql>use opstree;
mysql>create table nu113r(name varchar(50));
mysql>insert into nu113r values("zukin");
mysql>select * from nu113r;
Check the database from the mysql client on another node:
mysql>show databases;
Note : There should be a database named “opstree”.
mysql>use opstree;
mysql>select * from nu113r; 
Note : The data will be the same as on the previous node.
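To also exercise the multi-master side of the cluster, you can insert a row from this second node and read it back on the first one; a small follow-up test using the same table could look like this:
mysql>insert into nu113r values("percona");
mysql>select * from nu113r;
Note : The new row should appear on every node, confirming that writes are accepted and replicated from any member.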

Let me know if you have any suggestions. You can also contact me at belwal.mohit@gmail.com.

