Redis Best Practices and Performance Tuning

One of the things that I love about my organization is that you don’t have to do the same repetitive work; you always get the chance to explore new technologies. That chance came to me a few days back when one of our clients was facing an issue with Redis.
They were using Redis Cluster with Sentinel and were facing performance issues: whenever the number of connection requests was high, the Redis Cluster was not able to bear the load.
They were using a decent server configuration in terms of CPU and memory, but the result was the same. So now what?
The answer was to tune the performance.

There are plenty of Redis performance articles out there, but I wanted to share my experience as a DevOps engineer with Redis by writing an article that covers the most essential and important things a Developer or a DevOps Engineer needs.

So let’s get started.

TCP-KeepAlive

Keepalive is a method of keeping a TCP connection open and reusing it for multiple requests instead of opening a new connection for each request.

In simple words, if keepalive is off, Redis opens a new connection for every request, which slows down its performance. If keepalive is on, Redis reuses the same TCP connection for subsequent requests.

The graph shows the difference: the red bar shows the throughput with keepalive on and the blue bar with keepalive off.

To enable TCP keepalive, edit the Redis configuration and set a non-zero value (in seconds); note that a value of 0 disables keepalive.

vim /etc/redis/redis.conf
# A value of 0 disables keepalive; set a non-zero period in seconds, e.g. 300
tcp-keepalive 300
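
If you don’t want to restart Redis, the same setting can also be applied at runtime; a quick sketch, assuming redis-cli can reach the instance:

# Check the current value and change it on the fly
redis-cli CONFIG GET tcp-keepalive
redis-cli CONFIG SET tcp-keepalive 300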

Pipelining

This feature could be your lifesaver in terms of Redis performance. Pipelining allows a client to send multiple requests to the server without waiting for the replies at all, and then read all the replies in a single step.

For example, a client can send several commands in one round trip and read all the replies together, as in the sketch below.
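
Here is a minimal sketch against a local Redis instance (netcat is used only to show the raw protocol, and redis-benchmark’s -P option pipelines that many commands per request):

# Three commands sent in one round trip; the three +PONG replies come back together
(printf "PING\r\nPING\r\nPING\r\n"; sleep 1) | nc localhost 6379

# Compare throughput with and without pipelining
redis-benchmark -q -n 100000
redis-benchmark -q -n 100000 -P 16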

As the graph also shows, pipelining will increase the performance of Redis drastically.

Max-Connection

Max-connection is the parameter used to define the maximum number of connections the Redis server can accept. At the kernel level this is governed by the socket listen backlog (net.core.somaxconn); you can raise that value (considering your server specification) with the following steps.

sudo vim /etc/rc.local

# Make sure this line is just before exit 0.
sysctl -w net.core.somaxconn=65365

This step requires a reboot. If you don’t want to reboot the server, execute the same sysctl command on the terminal itself, as shown below.
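
For example (assuming you have sudo access on the box):

# Apply the same value to the running kernel without a reboot, then verify it
sudo sysctl -w net.core.somaxconn=65365
cat /proc/sys/net/core/somaxconn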

Overcommit Memory

Overcommit memory is a kernel parameter that controls whether the kernel allows processes to allocate more memory than is physically available. If vm.overcommit_memory is set to 0, there is a chance that Redis background saves will fail with an OOM (Out of Memory) error under memory pressure. So do me a favor and change its value to 1 using the following steps.

echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
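
To make the change take effect without a reboot, reload the sysctl settings or set the value directly, for example:

# Reload /etc/sysctl.conf, or set the value for the running kernel
sudo sysctl -p
sudo sysctl -w vm.overcommit_memory=1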

RDB Persistence and Append Only File

RDB persistence and Append Only File (AOF) are the options used to persist data on disk. If you are using the cluster mode of Redis and can rely on replicas (or can afford to rebuild the data), RDB persistence and AOF are not required, so you can simply disable them in redis.conf:

sudo vim /etc/redis/redis.conf

# Comment out the save lines to disable RDB snapshots
# save 900 1
# save 300 10
# save 60 10000

rdbcompression no
rdbchecksum no

# Disable the append-only file
appendonly no
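
If you prefer not to restart Redis, the same effect can be achieved at runtime (a sketch; remember to also update redis.conf so the change survives a restart):

# Disable RDB snapshots and AOF on a running instance
redis-cli CONFIG SET save ""
redis-cli CONFIG SET appendonly no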

Transparent Huge Pages (THP)

Most people are not aware of this term. The kernel uses paging to translate between virtual and physical memory, and Transparent Huge Pages were introduced to speed up that memory mapping by using larger pages. Unfortunately, this feature tends to slow down memory-based databases such as Redis. To avoid the issue, you can disable THP.

sudo vim /etc/rc.local

# Add this line before exit 0
echo never > /sys/kernel/mm/transparent_hugepage/enabled
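
You can also check the current THP state and disable it on the running system without a reboot, for example:

# See which mode is currently active (the value in brackets)
cat /sys/kernel/mm/transparent_hugepage/enabled
# Disable it immediately
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled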

The graph also shows the difference in performance between THP disabled and THP enabled.

Some Other Basic Measures in Redis Configuration

Config Option: maxmemory
Value: 70% of system memory
Description: maxmemory should be about 70 percent of the system memory so that Redis does not consume all the resources of the server.

Config Option: maxmemory-policy
Value: volatile-lru
Description: When maxmemory is reached, evict the least recently used keys among those that have an expiry set.

Config Option: loglevel
Value: notice
Description: loglevel should be “notice” so that logging does not consume too many resources.

Config Option: timeout
Value: 300
Description: There should be a timeout value in the Redis configuration so that Redis does not keep idle connections open forever; it closes a client connection that has been idle for more than 300 seconds.
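
Putting these together, the relevant part of redis.conf could look like the following (the 4gb value is only an assumption for a machine with roughly 6 GB of RAM; set it to about 70% of yours):

maxmemory 4gb
maxmemory-policy volatile-lru
loglevel notice
timeout 300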

So now your Redis is ready to give a killer performance. In this blog, we have discussed Redis best practices and performance tuning.
There are multiple factors yet to be explored to enhance the performance of Redis; if you find them before I do, please let me know so I can improve this blog.

In my next blog, I will discuss how we can do Redis performance testing and how we are doing it in our organisation.

Migrate your data between various Databases

Data Migration Service

 
Have you ever thought about migrating your production database from one platform to another, and then dropped the idea because it was too risky or you were not ready to bear the downtime?
If yes, then please pay attention, because this is what we are going to perform in this article.
A few days back we were trying to migrate our production MySQL database from AWS RDS to GCP Cloud SQL, and we had to migrate the data without downtime, accurately and in real time, and without the help of any Database Administrator.
 
After doing a bit of research and evaluating a few services, we finally started working with AWS DMS (Database Migration Service) and figured out that it is a great service for migrating different kinds of data.

You can migrate your data to and from the most widely used commercial and open-source databases and database platforms, such as Oracle, Microsoft SQL Server, PostgreSQL, and MongoDB.
The source database remains fully operational during the migration.
The service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms.
 

Let’s discuss some important features of AWS DMS:

 
  • Migrates the database securely, quickly, and accurately.
  • No downtime required; works as a schema converter as well.
  • Supports various types of databases like MySQL, MongoDB, PostgreSQL, etc.
  • Migrates data in real time and synchronizes ongoing changes.
  • Data validation is available to verify the migrated database.
  • Compatible with a wide range of database platforms like RDS, Google Cloud SQL, on-premises, etc.
  • Inexpensive (pricing is based on the compute resources used during the migration process).
This is a typical migration scenario.
Let’s perform the migration step by step:

Note: We’ve performed the migration from AWS RDS to GCP Cloud SQL; you can choose the database source and destination as per your requirement.

  1. Create a replication instance:
    A replication instance initiates the connection between the source and target databases, transfers the data, and caches any changes that occur on the source database during the initial data load.
    Use the fields below to configure the parameters of your new replication instance, including network and security information and encryption details, and select an instance class as per your requirement.

    After completing all mandatory fields, click Next and you will be redirected to the Replication Instances tab.
    Grab a coffee quickly while the instance is getting ready.

    Hope you are ready with your coffee, because the instance is ready now.


  2. Now we will create two endpoints, a “Source” and a “Target”.
    2.1 Create the Source Endpoint:

    Click on the “Run test” tab after completing all fields, and make sure your replication instance IP is whitelisted under the security group.
    2.2 Create the Target Endpoint:

    Click on the “Run test” tab again after completing all fields, and make sure your replication instance IP is whitelisted under the target DB authorization.
    Now we have the Replication Instance, Source Endpoint, and Target Endpoint ready.
  3. Finally, we’ll create a “Replication Task” to start the replication.
    Fill in fields like:
  • Task Name: any name
  • Replication Instance: the instance we created above
  • Source Endpoint: the source database
  • Target Endpoint: the target database
  • Migration Type: here I chose “Migrate existing data and replicate ongoing changes” because we needed ongoing changes.
 
4. Verify the task status.
Once all the fields are completed, click on “Create task” and you will be redirected to the “Tasks” tab.
Check your task status.

The task has now completed successfully; you can verify the inserts and validation tabs.
The migration has been performed successfully if the Validation State is “Validated”.
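
If you prefer scripting these steps over clicking through the console, the AWS CLI exposes the same operations. A rough sketch (all identifiers, the instance class, endpoint details, and the mappings file are placeholders/assumptions, not values from our migration):

# 1. Replication instance
aws dms create-replication-instance \
  --replication-instance-identifier my-dms-instance \
  --replication-instance-class dms.t3.medium \
  --allocated-storage 50

# 2. Source and target endpoints (repeat with --endpoint-type target for the target DB)
aws dms create-endpoint \
  --endpoint-identifier source-mysql \
  --endpoint-type source \
  --engine-name mysql \
  --server-name <source-host> --port 3306 \
  --username <user> --password <password> \
  --database-name <db>

# 3. Replication task (full load plus ongoing replication)
aws dms create-replication-task \
  --replication-task-identifier my-migration-task \
  --source-endpoint-arn <source-endpoint-arn> \
  --target-endpoint-arn <target-endpoint-arn> \
  --replication-instance-arn <replication-instance-arn> \
  --migration-type full-load-and-cdc \
  --table-mappings file://table-mappings.json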

Automated DB Updater (ADU): First Release

Initial version of Automated DB Updater (ADU)

With this blog I’m releasing the initial version of a Python utility that provides automated DB updates across various environments for different components.

The code for this utility is hosted on GitHub:
https://github.com/sandy724/ADU

You can clone a read-only copy of this codebase from the URL given below:
https://github.com/sandy724/ADU.git

To understand the basic idea behind this utility, go through this blog:
http://sandy4blogs.blogspot.in/2013/07/automated-db-updater.html

How to use this utility
Check out the code in some directory and add the path of this directory to the PYTHONPATH environment variable.
Create a database with a script metadata table using the DDL given below:

CREATE TABLE `script_metadata` (
  `name` varchar(100) NOT NULL,
  `version` int(11) NOT NULL,
  `executed` tinyint(1) NOT NULL DEFAULT '0',
  `env` varchar(30) NOT NULL,
  `releas` varchar(30) NOT NULL,
  `component` varchar(30) NOT NULL
);
Create a database.properties file containing the connection properties of each environment's database:

[common_db]
dbHost=localhost
dbPort=3306
dbUser=root
dbPwd=root
db=test
 
 
[env1]
dbHost=localhost
dbPort=3306
dbUser=root
dbPwd=root
db=test

Here common_db represents the connection to the database that will contain the metadata of the scripts for monitoring.

Now execute the Python utility.
Copy the client (updateDB.py) to a directory of your choice, and make sure the property configuration file is also in this directory.
python updateDB.py -f -r --env
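
Once the utility has run, you can check which scripts have been recorded as executed by querying the metadata table directly; a hypothetical check (host, credentials, and filter values are assumptions):

mysql -h localhost -u root -proot test \
  -e "SELECT name, version, executed FROM script_metadata WHERE env='env1' AND component='web';"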

Build & Release Challenges: Manual DB Updates Part 2

Previous

This blog was supposed to be about the new system I thought of building to solve the problem that I discussed in my previous blog. Well, to your disappointment, this blog will not be about that; the reason is that the scope of the problem changed. In this blog I’ll be discussing the new scope, how the discussion about it moved forward, and what the current state is, which means that I’m still not able to solve this problem & suggestions are welcome :).

I’ll again state the problem, which is simple enough: “database updates were not automated in non-prod environments, as the same db scripts were modified during development”. You can refer to the previous blog for more details about this problem. To solve it I came up with an incremental db update approach: all new modifications are done as a new sql update, which means that if you had a file 1.sql and need to make any modification, a new file 1′.sql should be committed. In this way the system doesn’t have to track changes to files; it just has to maintain the list of files that have already been executed, find the new files that need to be executed, and execute only those. This solution can work very well in a normal setup; in fact, in my last assignment I was using exactly this approach to have automated db updates across all environments.

The incremental db updates can’t be run in the current setup, though, because we have a very huge database, of the order of 100 GB. You can easily imagine that we can’t afford to run the same script with slight modifications, i.e. a first script adding a column of size 20, then another script changing its size to 40, and finally one renaming it to some other name. Instead, a single script should be created after consolidating all these scripts.

The first solution that came to my mind after this new issue emerged was that for non-prod deployments we should already have a database dump of the previous release, preferably a cold dump. During deployment three steps would be performed: first load the previous release db dump, then run all the consolidated scripts, and finally do the code deployment. Initially this solution looked fine, but the QA team raised a concern: loading the previous release dump meant that all the test data they had created on the QA server would be lost, and I was back at square one :).

Another solution that could be implemented was to have a rollback script for each & every script committed. This convention has the advantage of supporting incremental updates, i.e. whenever a script is updated, first its corresponding rollback script is executed & then the new script is executed. This solution has its own challenges: the first is that it’s really difficult to write a rollback script for each & every script; another issue is that you have to carefully manage the script files so that there is no tight coupling between them, as executing the rollback of one script can impact another script. A third, though less significant, issue is that you have to deal with data loss.

We could also have used a hybrid approach, a combination of incremental & full db updates. Until the QA phase we can use the incremental db update mechanism, in which all new script modifications are done as new scripts and executed incrementally, but for staging & production deployments the db update is done as a full update, which requires human intervention, i.e. consolidation of the scripts. This approach has two challenges: the first & foremost is that it involves manual intervention, & the second major issue is that we are duplicating the db scripts.

So these were the few approaches that we thought of, & none was able to solve our problem completely, so we are still struggling with fully automating the db update process. Again, any suggestions are most welcome 🙂

Automated Database Update Or Rollback

One of the important steps during a release is doing the database update, and the rollback in case something goes wrong; usually people perform this operation manually. In this blog I’ll talk about how we can automate this process by following some conventions.

Here I’m taking a MySQL database as an example; we can have the same conventions for other databases also.

Convention to manage rollback/updates of a release

  • Each project codebase at its root will have a folder database_scripts.
  • The database_scripts folder will contain a folder for each release, i.e. Release1_1, Release2_0, …
  • Each release folder will in turn contain two folders, update & rollback, which will contain the update & rollback scripts for that release.

Automating the rollback/update

  • The update folder will have a source input file, FileSequencer.txt. This file will point to all the update scripts, in the correct order in which they need to be executed for the release.
  • In a similar manner, the rollback folder will have a source input file, FileSequencer.txt. This file will point to all the rollback scripts, in the correct order in which they need to be executed for the release.
  • At last we will have a utility shell script; this script will take the DB details and execute all the scripts referred to in FileSequencer.txt using the mysql command. A minimal sketch is given below.
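
Here is a minimal sketch of such a utility script (the script name, argument order, and paths are assumptions, not the final implementation):

#!/bin/bash
# Usage: ./run_db_scripts.sh <db_host> <db_user> <db_password> <db_name> <release_folder> <update|rollback>
set -e

DB_HOST=$1
DB_USER=$2
DB_PASS=$3
DB_NAME=$4
RELEASE_DIR=$5        # e.g. database_scripts/Release1_1
ACTION=$6             # update or rollback

SEQ_FILE="$RELEASE_DIR/$ACTION/FileSequencer.txt"

# Execute every script listed in FileSequencer.txt, in the order given
while read -r script; do
  [ -z "$script" ] && continue    # skip blank lines
  echo "Executing $script"
  mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" < "$RELEASE_DIR/$ACTION/$script"
done < "$SEQ_FILE"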