Tuesday, October 27, 2015

Marrying Nginx with ELB

A few weeks back I got a requirement to set up a highly available API server. I said, not a big deal! I'll have Nginx as a reverse proxy (why we didn't expose the API directly via ELB is a different story), my auto-scaled API setup will sit behind an internal ELB, and things will be in place. TA DA.

Things worked perfectly fine for a few days, but one day the API consumer reported that they were not getting responses back. What? When I checked, the API URL was indeed returning a 502 error code. It was really strange for Nginx to be sending a 502 response; did that mean the highly scalable setup was down? Well, I was proven wrong: the ELB was working perfectly fine, as a curl request to the internal ELB returned a proper response, so yes, the highly available API setup was in place. What next? Yes, the Nginx error logs. I did see Nginx reporting connection timeouts with 502 error codes. The interesting thing to note was that the upstream was an IP (one of the random IPs assigned to the ELB), and when I tried a curl hit on that IP for the API request, it did fail. EUREKA, EUREKA!! I had reproduced the problem.

Well, now I had to collect all this information and infer the logical cause of the problem, and yes, there are a lot of smart people out there who would already have found the solution to this problem, so I just had to ask the right question on Google :). The question was "Nginx using IP instead of domain name" and the answer was: Nginx resolves the name and caches the IP at startup, and since an ELB is elastic in nature, its IPs change over time. That was the reason Nginx was trying to talk to the older, no-longer-associated IPs of the internal ELB.

Finding the solution was not a big task, as it was just about making sure Nginx talks to the ELB by name and not to the IPs associated with it; that's why I said marrying Nginx with ELB :).

I'll not go into the actual solution in detail as there are already solutions available on the web. I referred to this really good blog as a solution.
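For completeness, the gist of the fix is to stop Nginx from resolving the ELB's DNS name only once at startup: define a resolver and put the upstream hostname in a variable so that proxy_pass re-resolves it regularly. A minimal sketch (the ELB hostname, resolver IP and location path below are placeholders, not my actual setup):

server {
    listen 80;

    location /api/ {
        # Re-resolve the name every 30s instead of caching it at startup.
        # 10.0.0.2 is assumed to be the VPC DNS resolver; use your own.
        resolver 10.0.0.2 valid=30s;

        # When proxy_pass uses a variable, Nginx resolves the hostname at
        # request time through the resolver above.
        set $api_backend "http://internal-api-elb.example.com";
        proxy_pass $api_backend;
    }
}

With this in place Nginx keeps following the ELB even as its underlying IPs change.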


Monday, February 23, 2015


As a DevOps activist I am exploring Percona XtraDB. In a series of blogs I will share my learnings. This blog intends to capture the step-by-step details of installing Percona XtraDB in standalone mode.


Percona Server is an enhanced drop-in replacement for MySQL. It offers breakthrough performance, scalability, features, and instrumentation.
Percona focuses on providing a solution for the most demanding applications, empowering users to get the best performance and the lowest downtime possible.

The Percona XtraDB Storage Engine:

  • Percona XtraDB is an enhanced version of the InnoDB storage engine, designed to better scale on modern hardware, and including a variety of other features useful in high performance environments. It is fully backwards compatible, and so can be used as a drop-in replacement for standard InnoDB. 
  • Percona XtraDB includes all of InnoDB's robust, reliable, ACID-compliant design and advanced MVCC architecture, and builds on that solid foundation with more features, more tunability, more metrics, and more scalability.
  • It is designed to scale better on many cores, to use memory more efficiently, and to be more convenient and useful.

Installation on Ubuntu:

STEP 1: Add Percona Software Repositories
$ apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
STEP 2: Add this to /etc/apt/sources.list:
deb http://repo.percona.com/apt precise main
deb-src http://repo.percona.com/apt precise main
STEP 3: Update the local cache
$ apt-get update
STEP 4: Install the server and client packages
$ apt-get install percona-server-server-5.6 percona-server-client-5.6
STEP 5: Start Percona Server
$ service mysql start
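As a quick sanity check (assuming the default setup; you may need -u root -p depending on the root password you chose during installation), you can verify that the Percona build is the one answering:

$ service mysql status
$ mysql -e "SHOW VARIABLES LIKE 'version%';"

The version_comment in the output should mention Percona Server.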
Let me know if you have any suggestions. You can also contact me at belwal.mohit@gmail.com.

Understanding Percona XtraDB cluster

As a DevOps activist I am exploring Percona XtraDB. In a series of blogs I will share my learnings. This blog intends to capture theoretical knowledge of Percona XtraDB Cluster.


Prerequisites:

  1. You should have basic knowledge of MySQL.
  2. OS - Ubuntu

What is Percona XtraDB Cluster?

Percona XtraDB Cluster is an open source, free MySQL high availability and scalability solution.
It provides:
  1. Synchronous Replication: A transaction is either committed on all nodes or on none.
  2. Multi-Master Replication: You can write to any node.
  3. Parallel applying of events on slaves, i.e. real "parallel replication".
  4. Automatic node provisioning.
  5. Data consistency: no more unsynchronized slaves.


  1. The cluster consists of nodes. The recommended configuration is 3 nodes, however 2 nodes can be used as well.
  2. Every node is a regular MySQL / Percona Server setup. You can convert your existing MySQL / Percona Server into a node and build the cluster using it as a base, or you can detach a node from the cluster and use it as a regular server.
  3. Each node contains a full copy of the data.


Benefits of this approach:

  • Whenever you execute a query, it is executed locally. All data is available locally, so no remote access is required.
  • No central management. You can lose any node at any time, and the cluster will continue functioning.
  • It is a good solution for scaling a read workload. You can put read queries to any of the nodes.


Drawbacks of this approach:

  • Overhead of joining a new node. A new node will copy all data from an existing node; if the data set is 100 GB, it copies 100 GB.
  • Not an effective write scaling solution. All writes have to go to all nodes.
  • Duplication of data. If you have 3 nodes, there will be 3 copies of the data.

Difference between Percona XtraDB Cluster and MySQL Replication

For this we will have to look at the well-known CAP theorem for distributed systems. According to this theorem, the characteristics of a distributed system are:
C - Consistency (all your data is consistent on all nodes),
A - Availability (your system is AVAILABLE to handle requests in case of failure of one or several nodes),
P - Partitioning tolerance (in case of inter-node connection failure, each node is still available to handle requests).
The CAP theorem says that any distributed system can provide only two out of these three.
  • MySQL replication has: Availability and Partitioning tolerance.
  • Percona XtraDB Cluster has: Consistency and Availability.
So, MySQL replication does not guarantee consistency of data, while Percona XtraDB Cluster provides consistency but loses partitioning tolerance.


Percona XtraDB Cluster is based on:
  • Percona Server with XtraDB and includes Write Set Replication patches.
It uses:
  • Galera Library: A generic synchronous Multi-Master replication plugin for transactional applications.
  • Galera supports:
    • Incremental State Transfer (IST), useful in WAN deployments.
    • RSU, Rolling Schema Upgrade: a schema change does not block operations against the table.

Percona XtraDB cluster limitations

  • Currently replication works only with the InnoDB storage engine.
That means writes to tables of other types, including the system (mysql.*) tables, are not replicated.
DDL statements are replicated at statement level, and changes to mysql.* tables get replicated that way.
So you can issue: CREATE USER …. ; this will be replicated,
but issuing: INSERT INTO mysql.user …. will not be replicated.
You can also enable experimental MyISAM replication support with wsrep_replicate_myisam.
  • Unsupported queries:
    • LOCK/UNLOCK TABLES
    • locking functions (GET_LOCK(), RELEASE_LOCK(), etc.)
  • Due to cluster-level concurrency control, a transaction issuing COMMIT may still be aborted at that stage.
There can be two transactions writing to the same rows and committing on separate Percona XtraDB Cluster nodes, and only one of them can successfully commit. The failing one will be aborted. For cluster-level aborts, Percona returns a deadlock error code (an example is sketched in the Multi-Master Replication section below).
  • The write throughput of the whole cluster is limited by the weakest node. If one node becomes slow, the whole cluster becomes slow.


High Availability

In a basic setup with 3 nodes, the Percona XtraDB Cluster will continue to function if you take any one of the nodes down. Even in the event of a node crash, or if a node becomes unavailable over the network, the cluster will continue to work, and queries can be issued on the working nodes.
In case there were changes to the data while a node was down, there are two options the node may use when it rejoins the cluster:
  1. State Snapshot Transfer (SST): The SST method performs a full copy of data from one node to another. It is used when a new node joins the cluster; one of the existing nodes transfers the data to it.
     There are three available methods of SST:
    • mysqldump
    • rsync
    • xtrabackup
The downside of "mysqldump" and "rsync" is that your cluster becomes READ-ONLY while data is copied from one node to another.
An xtrabackup SST does not require this for the entire syncing process.
  2. Incremental State Transfer (IST): If a node has been down for a short period of time and then starts up, it is able to fetch only those changes made during the period it was down.
This is done using a caching mechanism on the nodes. Each node contains a cache, a ring-buffer of the last N changes, and the node is able to transfer part of this cache. IST can be done only if the amount of changes needed to transfer is less than N; if it exceeds N, the joining node has to perform SST.
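The size N of that ring-buffer is governed by Galera's gcache. If you want nodes that were briefly down to rejoin via IST rather than a full SST, you can enlarge it in my.cnf (the 1G value below is only an illustration; size it according to your write volume and expected downtime):

[mysqld]
# Galera stores recent write-sets in an on-disk ring-buffer (the gcache).
# A larger gcache lets a node stay down longer and still catch up via IST.
wsrep_provider_options = "gcache.size=1G"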

Multi-Master Replication

  • Multi-Master replication stands for the ability to write to any node in the cluster without worrying that it will get into an out-of-sync situation, as regularly happens with regular MySQL replication if you imprudently write to the wrong server.
  • With Percona XtraDB Cluster you can write to any node, and the cluster guarantees consistency of writes. That is, a write is either committed on all the nodes or not committed at all.
All queries are executed locally on the node, and there is special handling only on COMMIT. When the COMMIT is issued, the transaction has to pass certification on all the nodes. If it does not pass, you will receive an ERROR as a response to that query; if it passes, the transaction is then applied on the local node.
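To illustrate such a conflict (the table and the exact error text below are just an example and can vary by version), imagine two sessions updating the same row on different nodes at nearly the same time:

# on node 1
mysql> BEGIN; UPDATE t SET val = 1 WHERE id = 10; COMMIT;   -- succeeds

# on node 2, updating the same row concurrently
mysql> BEGIN; UPDATE t SET val = 2 WHERE id = 10; COMMIT;
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction

The losing transaction simply has to be retried by the application.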

Let me know if you have any suggestions. You can also contact me at belwal.mohit@gmail.com.

Tuesday, February 17, 2015

Getting Started with Percona XtraDB Cluster

Percona XtraDB Cluster

As a DevOps activist I am exploring Percona XtraDB. In a series of blogs I will share my learnings. This blog intends to capture the step-by-step details of installing Percona XtraDB in cluster mode.

Why Cluster Mode: An Introduction

Percona XtraDB Cluster is a high availability and scalability solution for MySQL users which provides:
          Synchronous replication : A transaction is either committed on all nodes or on none.
          Multi-master replication : You can write to any node.
          Parallel applying of events on slaves : parallel event application on all slave nodes.
          Automatic node provisioning
          Data consistency

Straight into the Act: Installing Percona XtraDB Cluster

  1. OS - Ubuntu
  2. 3 Ubuntu nodes are available
For the sake of this discussion let's name the nodes as follows:
node 1
hostname: percona_xtradb_cluster1

node 2
hostname: percona_xtradb_cluster2

node 3
hostname: percona_xtradb_cluster3
Repeat the below steps on all nodes

STEP 1 : Add the Percona repository
$ echo "deb http://repo.percona.com/apt precise main" >> /etc/apt/sources.list.d/percona.list
$ echo "deb-src http://repo.percona.com/apt precise main" >> /etc/apt/sources.list.d/percona.list
$ apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
STEP 2 : After adding the Percona repository, update the apt cache so that the new packages are visible:
$ apt-get update
STEP 3 : Install Percona XtraDB Cluster :
$ apt-get install -y percona-xtradb-cluster-56 qpress xtrabackup
STEP 4 : Install additional packages for editing files, downloading, etc. :
$ apt-get install -y python-software-properties vim wget curl netcat

With the above steps we have installed Percona XtraDB Cluster on every node. Now we'll configure each node so that a cluster of three nodes can be formed.

Node Configuration:

Add/modify the file /etc/mysql/my.cnf on the first node:
[MYSQLD] #This section is for mysql configuration
user = mysql                                                   
default_storage_engine = InnoDB                  
basedir = /usr
datadir = /var/lib/mysql
socket = /var/run/mysqld/mysqld.sock
port = 3306
innodb_autoinc_lock_mode = 2
log_queries_not_using_indexes = 1
max_allowed_packet = 128M
binlog_format = ROW
wsrep_provider = /usr/lib/libgalera_smm.so
wsrep_node_address =
wsrep_cluster_address = gcomm://,,
wsrep_node_name = cluster1
wsrep_slave_threads = 4
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = sst:secret

[sst] #This section is for sst(state snapshot transfer) configuration 
streamfmt = xbstream

[xtrabackup] #This section defines tuning configuration for xtrabackup
parallel = 2
compress_threads = 2
rebuild_threads = 2
Note :
         wsrep_node_address = {IP of current node}
         wsrep_cluster_name= {Name of cluster}
         wsrep_cluster_address = gcomm://{Comma separated IP address’s which are in cluster}
         wsrep_node_name = {This is name of current node which is used to identify it in cluster}

Now that the node configuration is done, start the services on the first node.
Start the node :
$ service mysql bootstrap-pxc
Create the sst user for authentication between cluster nodes :
$ mysql -e "GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sst'@'localhost' IDENTIFIED BY 'secret';"
Check cluster status :
$ mysql -e "show global status like 'wsrep%';"

Configuration file for second node:
user = mysql
default_storage_engine = InnoDB
basedir = /usr
datadir = /var/lib/mysql
socket = /var/run/mysqld/mysqld.sock
port = 3306
innodb_autoinc_lock_mode = 2
log_queries_not_using_indexes = 1
max_allowed_packet = 128M
binlog_format = ROW
wsrep_provider = /usr/lib/libgalera_smm.so
wsrep_node_address =
wsrep_cluster_address = gcomm://,,
wsrep_node_name = cluster2
wsrep_slave_threads = 4
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = sst:secret

[sst]
streamfmt = xbstream

[xtrabackup]
parallel = 2
After doing the configuration, start the services on node 2.
Start node 2 :
$ service mysql start
Check cluster status :
$ mysql -e "show global status like 'wsrep%';"
Now configure node 3 similarly. The changes are listed below.
Changes in configuration for node 3 :
      wsrep_node_address =
      wsrep_node_name = cluster3
Start node 3 :
$ service mysql start
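Once node 3 has joined, every node should report the full membership; a quick check from any node (the cluster size should now be 3):

$ mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"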

Test the Percona XtraDB Cluster:

Log in with the mysql client on any node:
mysql>create database opstree;
mysql>use opstree;
mysql>create table nu113r(name varchar(50));
mysql>insert into nu113r values("zukin");
mysql>select * from nu113r;
Check the database on another node using the mysql client:
mysql>show databases;
Note : There should be a database named “opstree”.
mysql>use opstree;
mysql>select * from nu113r; 
Note : The data will be the same as on the previous node.

Let me know if you have any suggestions. You can also contact me at belwal.mohit@gmail.com.

Wednesday, January 14, 2015

Kernel-based Virtual Machine

What is virtualization?
"Virtualization is a technology that combines or divides computing resources to present one or many operating environments using methodologies like hardware and software partitioning or aggregation, partial or complete machine simulation, emulation, time-sharing, and many others."

This means that virtualization uses technology to abstract from the real hardware and provide isolated environments, so-called Virtual Machines. They are capable of running various applications or even a whole operating system. A goal not mentioned in the definition is to have near-native performance for the running VMs. This is a very important point, because users always want to get the most out of their hardware, and most of them are not willing to introduce virtualization technology if a huge amount of CPU power is wasted by managing VMs.

Kernel-based Virtual Machine :
KVM is the first virtualization solution that has been integrated into the vanilla Linux kernel. KVM was initially developed by Qumranet, a small company located in Israel. Red Hat acquired Qumranet in September 2008, as KVM was becoming more production-ready. They see KVM as the next generation of virtualization technology.

KVM Architecture:
Linux has all the mechanisms a VMM needs to operate several VMs. KVM is implemented as a kernel module that can be loaded to extend Linux with these capabilities.
In a normal Linux environment each process runs either in user-mode or in kernel-mode. KVM introduces a third mode, the guest-mode. It therefore relies on a virtualization-capable CPU with either the Intel VT or the AMD-SVM extensions. A process in guest-mode has its own kernel-mode and user-mode. Thus, it is able to run an operating system. Such processes represent the VMs running on a KVM host. From the host's point of view, the modes are used as follows:
  • user-mode: I/O when guest needs to access devices.
  • kernel-mode: switch into guest-mode and handle exits due to I/O operations
  • guest-mode: execute guest code, which is the guest OS except I/O
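Before going further, a quick way to confirm from the shell that the CPU exposes these extensions and that the KVM module and its device node are in place (the module name depends on whether the host has an Intel or an AMD CPU):

$ egrep -c '(vmx|svm)' /proc/cpuinfo      # greater than 0 means VT-x / AMD-V is present
$ sudo modprobe kvm_intel                 # or: sudo modprobe kvm_amd
$ lsmod | grep kvm                        # kvm plus kvm_intel / kvm_amd should be loaded
$ ls -l /dev/kvm                          # the control interface discussed below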
Resource Management:
To reuse existing code as much as possible, the KVM developers mainly modified the Linux memory management to allow mapping physical memory into a VM's address space. For this they added shadow page tables, which were needed in the early days of x86 virtualization.
The scheduler of an operating system computes an order in which each process is assigned to one of the available CPUs; in this way, all running processes share the computing time. Since the KVM developers wanted to reuse most of the mechanisms of Linux, they simply implemented each VM as a process, relying on the Linux scheduler to assign computing power to the VMs.

The KVM control interface: 
Once the KVM kernel module has been loaded, the /dev/kvm device node appears in the file-system; it represents the interface of KVM. It allows the hypervisor to be controlled through a set of ioctls, which are commonly used in operating systems as an interface for processes running in user-mode to communicate with a driver. The ioctl() system call allows executing several operations to create new virtual machines, assign memory to a virtual machine, and assign and start virtual CPUs.

Emulation of Hardware:
To provide hardware such as hard disks, CD drives or network cards to the VMs, KVM uses a highly modified QEMU. QEMU is a so-called platform virtualization tool, which allows emulating an entire PC platform including graphics, networking, disk drives and much more.

The execution model of KVM is a loop of the actions used to operate the VMs; these actions are separated by the three modes.
Let's see which tasks are done in which mode:
  • user-mode: The KVM module is called using ioctl() to execute guest code until I/O operations initiated by the guest or an external event occur. Such an event may be the arrival of a network packet, which could be the reply to a network packet sent by the host earlier. Such events are expressed as signals that lead to an interruption of guest code execution.
  • kernel-mode: The kernel causes the hardware to execute guest code natively. If the processor exits the guest due to pending memory or I/O operations, the kernel performs the necessary tasks and resumes the flow of execution. If external events such as signals or I/O operations initiated by the guest exist, it exits to user-mode.
  • guest-mode: This is on the hardware level, where the extended instruction set of a virtualization-capable CPU is used to execute the native code, until an instruction is encountered that needs assistance from KVM, a fault occurs, or an external interrupt arrives.
While a VM runs, there are plenty of switches between these modes.

Friday, November 14, 2014

Chef Journey

I'm starting a blog series on Chef where I will take you on a journey of managing my current infrastructure using Chef. To start with, these are the high-level tasks that I have in mind:
  • User Management : User creation and deletion in an environment (Dev/QA/Staging/Production) should be managed by Chef, along with the kind of access on the environment, i.e. read-only access, root access, or adding a user to some groups.
  • VPN Setup : Currently we are using openvpnas (OpenVPN Access Server) for managing secured access to our environments; it is manual right now, so the VPN setup will also be done by Chef.
  • Apache Setup : We are using Apache as the web server that sits in front of our app servers and also provides SSL.
  • Jar App : We have an SOA-based setup in which we have multiple micro Java services, so we will use Chef to manage those jar apps, i.e. deploying them and starting/stopping them.
  • Tomcat : Another major component type in our application is the web apps hosted on Tomcat; the Tomcat server is not managed as a service, instead we create tomcat as an app user along with Tomcat management scripts.
  • Mongo : We use replicated MongoDB as the NoSQL database in our application.
  • Logstash : For managing logs we are using Logstash in a clustered setup where all the log agents publish the logs to a central server, which are then served by Kibana, so this complete setup should also be managed by Chef.
  • ActiveMQ : We are using ActiveMQ for our queuing purposes.

Surely this list is not complete; I'll be adding many more tasks to it as I proceed with setting up my environment using Chef, since this is the first time I'll be doing a setup using Chef, but this list is a good starting point.

Before jumping into creating the Chef cookbooks, run-lists or data bags, I have to set up the base infrastructure of Chef: a Chef server that all Chef agents talk to, a Chef workstation which will update the server with the configurations, and a git repo to keep track of all my configuration, as shown in the image given below.
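To give a feel for how these pieces hang together, the workstation talks to the Chef server through knife, whose configuration points at both the server and the git-tracked chef-repo. A minimal sketch (the user name, key path, server URL and repo path are placeholders for illustration only):

# ~/.chef/knife.rb
node_name        "workstation-user"                          # your Chef user name
client_key       "#{ENV['HOME']}/.chef/workstation-user.pem" # its private key
chef_server_url  "https://chef-server.example.com"
cookbook_path    ["#{ENV['HOME']}/chef-repo/cookbooks"]

With this in place, "knife client list" is a quick way to confirm that the workstation can reach the Chef server.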

In the next blog I'll talk about how I'll set up a Chef server. Let me know if you have any inputs or suggestions on how I should proceed with the Chef setup.

Monday, October 27, 2014

VPC per environment versus single VPC for all environments

This blog talks about two possible ways of hosting your infrastructure in the cloud. It stays close to hosting on AWS, as it is based on a real-life example, but the problem applies to any cloud infrastructure setup. I'm just sharing my thoughts and the pros & cons of both approaches, and I would love to hear from the people reading this blog what their take is as well.

Before jumping right into the real talk, I would like to give a bit of background on how I came up with this blog. I was working with a client, managing his cloud infrastructure, where we had 4 environments: dev, QA, pre-production and production, and each environment had close to 20 instances. Apart from application instances there were some admin instances as well, such as Icinga for monitoring, Logstash for consolidating logs, a Graphite server to view the logs, and a VPN server to manage people's access.

At this point we got into a discussion about whether the current infrastructure setup, with a separate VPC per environment, is the right one, or whether the ideal setup would have been a single VPC with the environments separated by subnets, i.e. a pair of subnets (public/private) for each environment.

Both approaches have some pros & cons associated with them.

Single VPC set-up


Pros:

  1. You only have a single VPC to manage.
  2. You can consolidate your admin apps such as Icinga and the VPN server, which saves some cost.


Cons:

  1. As you are separating your environments through subnets, you need granular access control at the subnet level, i.e. instances in the staging environment should not be allowed to talk to dev environment instances. Similarly, you have to control people's access at a granular level as well.
  2. The scope for human error is high, as all the instances are in the same VPC.

VPC per environment setup


Pros:

  1. You have a clear separation between your environments due to the separate VPCs.
  2. You have finer access control over your environments, as the access rules of a VPC are effectively the access rules of that environment.
  3. As an admin it gives you a clear picture of your environments, and you have the option to clone a complete environment very easily.


Cons:

  1. As mentioned in the pros of the single VPC setup, you are at some financial loss, as you will be duplicating admin applications across environments.

In my opinion, the decision of choosing a specific setup largely depends on the scale of your environment. If you have a small or even medium-sized environment, you can have your infrastructure set up as "all environments in a single VPC"; in the case of a large setup, I strongly believe that the VPC-per-environment setup is the way to go.

Let me know your thoughts, and also the points in favour of or against both of these approaches.