Redis is a popular, open-source in-memory database that supports multiple data structures such as strings, hashes, lists, and sets. But like any other tool, a standalone Redis can only be scaled to a certain extent. That's why we have cluster mode, in which we can scale Redis nodes horizontally and distribute data among those nodes.
Kubernetes has become the go-to technology for managing applications, databases, and middleware in a single place. So in this blog, we will see how we can deploy a Redis cluster in production mode on a Kubernetes cluster and test failover.
Speed fascinates everyone, but only if it's under control.
It is well said and a proven fact that everyone needs to implement a cache at some point in their application lifecycle, and this has become our requirement too.
During the initial phase we ran Redis in master-slave mode, with the next phase involving a Sentinel setup to withstand master failover. I would like to throw some light on their architectures, along with pros and cons, to emphasize why I finally migrated to Redis Cluster.
Redis replication is a simple-to-use and easy-to-configure master-slave setup that allows slave Redis servers to be exact copies of their master servers.
What forced me to look for Redis Sentinel
When using Master-Slave architecture
There will be only one master, with multiple slaves for replication.
All writes go to the master, which creates more load on the master node.
If the master goes down, the whole architecture is prone to SPOF (single point of failure).
M-S architecture does not help with scaling when your user base grows.
So we need a process to monitor the master in case of failure or shutdown. That process is Sentinel.
Even with Sentinel, I was still concerned about one thing: sharding the data for best performance.
Concept of Redis Cluster
“A query that used to take an hour can run in seconds on cache”.
Redis Cluster is an active-passive cluster implementation that consists of master and slave nodes. The cluster uses hash partitioning to split the key space into 16,384 key slots, with each master responsible for a subset of those slots.
Each slave replicates a specific master and can be reassigned to replicate another master or be elected to a master node as needed.
Each node in a cluster requires two TCP ports.
One port is used for client connections and communications. This is the port you would configure into client applications or command line tools.
The second required port is reserved for node-to-node communication, which occurs over a binary protocol and allows the nodes to exchange information about configuration and node availability.
When a master fails or is found unreachable by the majority of the cluster, as determined by the nodes' communication over the gossip port, the remaining masters hold a vote and elect one of the failed master's slaves to take its place.
Rejoining The Cluster
When the failing master eventually rejoins the cluster, it will join as a slave and begin to replicate another master.
Redis shards data across the servers automatically. Redis uses the concept of hash slots to split the data: all data is divided into 16,384 slots, and these slots are distributed across the servers.
If there are 3 servers, A, B, and C, then
Server A contains hash slots from 0 to 5500.
Server B contains hash slots from 5501 to 11000.
Server C contains hash slots from 11001 to 16383.
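The mapping from key to slot is simply CRC16(key) mod 16384, using the XMODEM variant of CRC16 that the Redis Cluster specification documents. A small Python sketch of the calculation (the function names are mine, not from any Redis library):

```python
def crc16(data: bytes) -> int:
    """CRC16-XMODEM, the variant Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def keyslot(key: str) -> int:
    """Map a key to one of the 16384 hash slots, honouring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1 : end]  # only the tag between braces is hashed
    return crc16(key.encode()) % 16384
```

For example, `keyslot("foo")` comes out to 12182, so in the three-server layout above it would live on Server C; and any keys sharing the same `{tag}` always land in the same slot, which is how multi-key operations stay possible in a cluster.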
6 Node M/S Cluster
In a 6-node cluster, 3 nodes serve as masters and the other 3 nodes act as their respective slaves.
Here, the Redis service runs on port 6379 on all servers in the cluster. Each master replicates its keys to the respective slave node assigned during the cluster creation process.
3 Node M/S Cluster
In a 3-node cluster, two Redis services run on each server on different ports. All 3 nodes serve as masters, with the slaves placed on other nodes.
Here, each master replicates its keys to its respective slave running on another node.
WHAT IF Redis Goes Down
1 node goes down in a 6 node Redis Cluster
If one of the nodes goes down in a 6-node Redis Cluster, its respective slave is promoted to master.
In the above example, master Server3 goes down and its slave Server6 is promoted to master.
1 node goes down in a 3 node Redis Cluster
If one of the nodes goes down in a 3-node Redis Cluster, its respective slave running on a separate node is promoted to master.
In the above example, Server3 goes down and its slave running on Server1 is promoted to master.
Redis service goes down on one of the 3 node Redis Cluster
If the Redis service goes down on one of the nodes in a 3-node Redis Cluster, its respective slave is promoted to master.
Note, though, that this methodology protects the Redis Cluster only in partial failover scenarios; if we want to survive a full failover, we need to look at disaster recovery techniques as well.
Well, this implementation helped me get a sound sleep when thinking about Redis availability, sharding, and performance.
A few days back I came across the problem of migrating a Redis master-slave setup to a Redis Cluster. Initially I thought it would be a piece of cake, since I had already been working with Redis, but there was a hitch: "Zero Downtime Migration". Also, Redis was being used as a database, not as a caching server. So I started to think of different ways of migrating a Redis master-slave setup to a Redis Cluster, and finally I came up with a migration plan.
Before we jump to migration, I want to give an overview regarding when we can use Redis as a database, and how to choose which setup we should go with Master-Slave or Cluster mode.
Redis as a Database
Sometimes getting data from disk can be time-consuming. To increase performance, we can keep the data that needs to be served first, or most rapidly, in Redis memory, while the rest of the data stays in the main database. So the whole architecture will look like this:
Redis Master-Slave Replication
Let's begin with Redis master-slave replication. With this mechanism, Redis can replicate data to any number of nodes, i.e. it lets each slave hold an exact copy of its master. This helps with performance optimization.
I bet now you can use Redis as a Database.
A Redis cluster is simply a data sharding strategy. It automatically partitions data across multiple Redis nodes. It is an advanced feature of Redis which achieves distributed storage and prevents a single point of failure.
Replication vs Sharding
Replication is also known as mirroring of data. In replication, all the data get copied from the master node to the slave node.
Sharding is also known as partitioning. It splits up the data by the key to multiple nodes.
As shown in the above figure, all the keys 1, 2, 3, 4 are stored on both machine A and machine B.
In sharding, the keys are distributed across machines A and B; that is, machine A holds keys 1 and 3, while machine B holds keys 2 and 4.
I guess now everyone has a good idea of how Redis works. So let's start discussing the migration of Redis.
Unfortunately, Redis doesn't have a direct way of migrating data from master-slave to cluster. Let me explain why.
We can start the Redis service in either cluster mode or standalone mode. Now your suggestion might be that we can change Redis configuration values on the fly (that is, without restarting the Redis service) with redis-cli. Yes, you are absolutely correct that we can change the Redis configuration on the fly, but unfortunately the Redis mode (cluster or standalone) can't be switched on the fly; for that, we have to restart the service.
I guess now you guys will understand my situation :).
For migration, there are multiple ways of doing it. However, we needed to migrate the data without downtime or any interruptions to the service.
We decided the best course of action was a step-by-step process:
Firstly, we needed to create a separate Redis Cluster environment. The architecture of the cluster environment looked something like this:
The next step was to update all the services (applications) to send all write operations to both setups (cluster and master-slave). The read commands (GET) would still go to the old setup.
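One way to picture this dual-write step is a thin wrapper in the application layer. The sketch below is illustrative only (the class and client names are my own, not from the source), assuming two redis-py-style clients:

```python
class DualWriter:
    """Send writes to both setups; keep reads on the old one."""

    def __init__(self, old_client, new_client):
        self.old = old_client  # existing master-slave setup (source of truth)
        self.new = new_client  # new Redis Cluster

    def set(self, key, value):
        # Writes go to both setups so the cluster stays in sync.
        self.old.set(key, value)
        self.new.set(key, value)

    def get(self, key):
        # Reads still come from the old master-slave setup.
        return self.old.get(key)
```

Once the cut-over happens, the wrapper is simply pointed solely at the new client.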
But we still don't have a guarantee that all non-expirable data will make it over. So we can run a step that iterates through all of the keys and DUMP/RESTOREs them into the new setup.
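That key-copying pass can be sketched with the SCAN, DUMP, and RESTORE commands via redis-py. This is a minimal illustration under stated assumptions, not the actual utility; the `src` and `dst` objects are assumed to be redis-py clients for the old and new setups:

```python
def migrate_keys(src, dst, batch=1000):
    """Copy every key from `src` to `dst` via DUMP/RESTORE,
    preserving any remaining TTL."""
    moved = 0
    for key in src.scan_iter(count=batch):
        payload = src.dump(key)
        if payload is None:          # key expired or was deleted mid-scan
            continue
        ttl = src.pttl(key)          # remaining TTL in ms; negative = no expiry
        dst.restore(key, ttl if ttl > 0 else 0, payload, replace=True)
        moved += 1
    return moved

# Usage (illustrative host names):
#   import redis
#   old = redis.Redis(host="old-master", port=6379)
#   new = redis.RedisCluster(host="cluster-node", port=6379)
#   migrate_keys(old, new)
```

With dual writes already in place, running this pass with `replace=True` is safe to repeat until the cut-over.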
Once the new Redis server looked good, we could make the appropriate changes in the application to point solely to the new Redis server.
I know all the steps are easy except the second one. Fortunately, Redis provides a method of key scanning through which we can scan all the keys, take a dump of them, and then restore them on the new Redis server.
To achieve this, I have created a Python utility in which you define the connection details of your old and new Redis servers.
I have provided detailed information on using this utility in the README file itself. I hope my experience helps you with your Redis migration.
Replication or Clustering?
I know most people have this query: when should we use replication, and when clustering? :)
If you have more data than the RAM of a single machine, use Redis Cluster to shard the data across multiple nodes.
If you have less data than the RAM of a machine, set up master-slave replication with Sentinel in front to handle the failover.
The main idea behind writing this blog was to spread information about the replication and sharding mechanisms, how to choose the right one, and, if you have mistakenly chosen the wrong one, how to migrate away from it :).
There are multiple factors yet to be explored to enhance the migration flow; if you find them before I do, please let me know so I can improve this blog.
I hope I explained everything clearly enough to understand.
Thanks for reading. I'd really appreciate any and all feedback; please leave a comment below if you have some.
As I mentioned in my previous blog on Redis Best Practices, in an upcoming post I would discuss load testing on Redis. So here I am, ready with a blog in which I will explain how we can measure Redis performance. Although there are plenty of articles out there on this topic, I want to share my experience as a DevOps engineer, along with the methods we are implementing at our organization.
So, Load Testing: What and Why?
The first thought that comes to mind is: why do we need load testing at all, when our environment is already working fine, and has been since its first launch?
But hey, let me tell you something: it's not as simple as that. As we know, everything has its limits, and knowing your limits is always helpful. When we are not ready to face increasing load, our environment can easily collapse. There is a saying as well:
Prevention is better than cure
In simple words, it means it’s easier to stop something bad happening in the first place than to repair the damage after it happened.
So when I decided to test Redis performance, it was not easy to get started: plenty of load testing frameworks exist, but which one to choose? So I compared various load testing frameworks. JMeter, a popular load testing tool, was already available, but I found it a bit too complex to pick up quickly, and it requires a lot of resources. So I chose an awesome Python load testing framework, Locust, which is lightweight and easy to set up.
The golden rule before starting load testing is that you should have a metric or a number in mind that you want to achieve.
So I will be using our organization's load testing utility, which we created for Redis load and performance testing.
As Locust is a Python-based project, it doesn't have a long dependency list, but it surely has some dependencies. So after cloning the repo, we have to install them:
pip3 install -r requirements.txt
Once the dependency hurdle is crossed, we can move to the step in which we connect our utility to Redis. To achieve this, there is a file called redis.json in the Scripts folder of the repo; you just have to update the Redis details in that file. For example:
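A redis.json along these lines is the idea; the exact field names below are assumptions on my part, so check the repo's README for the real keys:

```json
{
  "redis_host": "127.0.0.1",
  "redis_port": 6379,
  "redis_password": ""
}
```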
Once you are done with the connection details, kaboom, you are all set to use the performance testing utility. Just go to the terminal and run this command.
locust -f redis_get_set.py
The output will be something like this
Now open the URL http://your_ip:8089 in your browser. The UI page will look like this
You will have two empty fields there:
Number of users to simulate: the total number of user connection requests you want to make.
Hatch rate: how quickly you want to spawn users.
After filling in these details you can simply start swarming and wait until the execution completes. Once the execution is complete, you will see details like this.
This page presents the data as statistics, but you can also see it as beautiful graphs in the same UI. For example:
I have also provided detailed information in the repo's README file.
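To give a feel for what a Locust script like redis_get_set.py can look like, here is a hypothetical minimal sketch. This is not the actual repo file; it assumes the Locust `User`/`task` API and a redis-py client on localhost:

```python
import random

def get_set_cycle(client, i):
    """One unit of load: write a key, then read it back."""
    client.set(f"load:key:{i}", i)
    return client.get(f"load:key:{i}")

try:
    from locust import User, task, between

    class RedisUser(User):
        wait_time = between(0.1, 1)  # pause between tasks per simulated user

        def on_start(self):
            import redis
            self.client = redis.Redis(host="localhost", port=6379)

        @task
        def get_set(self):
            get_set_cycle(self.client, random.randint(0, 10_000))
except ImportError:
    pass  # locust not installed; get_set_cycle is still usable on its own
```

A production locustfile would additionally report each operation's timing through Locust's request event so the statistics show up in the UI.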
One of the benefits I find most important in load testing is that you can establish a performance baseline for your environment. And yes, if you are not getting the desired Redis performance, you can check out our blog on Redis Best Practices and Performance Tuning here.
The main idea of writing this blog was to encourage people to know the limitations of their environment and to make it ready for any kind of challenge.
I hope I explained everything clearly enough to understand. If you do have any questions or suggestions, please feel free to ask.
One of the things that I love about my organization is that you don't have to do the same repetitive work; you always get the chance to explore new technologies. Such a chance came my way a few days back, when one of our clients was facing an issue with Redis.
They were using a Redis Cluster with Sentinel and facing performance issues: whenever the number of connection requests was high, the Redis Cluster could not bear the load.
They were using servers with a decent CPU and memory configuration, yet the result was the same. So now what?
The Answer was to tune the performance.
There are plenty of Redis performance articles out there, but I wanted to share my experience as a DevOps engineer by creating an article that includes the most essential and important things a developer or a DevOps engineer needs.
So let’s get started.
Keepalive is a method of reusing the same TCP connection for a whole conversation instead of opening a new one for each request.
In simple words, if keepalive is off, Redis has to handle a new connection for every request, which slows down its performance. If keepalive is on, Redis keeps using the same TCP connection for subsequent requests.
Let's see the graph for more details. The red bar shows the output when keepalive is on, and the blue bar shows the output when keepalive is off.
To enable TCP keepalive, edit the Redis configuration and set tcp-keepalive to a non-zero interval (a value of 0 disables it).
# Send keepalive probes to idle clients every 300 seconds
tcp-keepalive 300
This feature could be a lifesaver for Redis performance. Pipelining allows a client to send multiple requests to the server without waiting for each reply, and then read all the replies in a single step.
You can also see in the graph as well.
Pipelining will increase the performance of redis drastically.
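As a hedged redis-py sketch of the idea (the client construction and key names are illustrative): batching many SETs into one pipeline turns n network round-trips into one.

```python
def set_keys_pipelined(client, n=1000):
    """Queue n SET commands client-side and send them in one round-trip."""
    pipe = client.pipeline(transaction=False)  # no MULTI/EXEC, pure batching
    for i in range(n):
        pipe.set(f"key:{i}", i)
    return pipe.execute()  # one network hop, n replies

# Usage (illustrative):
#   import redis
#   set_keys_pipelined(redis.Redis(host="localhost", port=6379))
```

The speed-up comes almost entirely from removing per-command network latency, which is why the gain in the graph is so large.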
Max-connection is the parameter used to define the maximum connection limit of the Redis server. You can set that value according to your server specification with the following steps.
sudo vim /etc/rc.local
# Make sure this line is just before exit 0.
sysctl -w net.core.somaxconn=65535
This step requires a reboot; if you don't want to reboot the server, execute the same sysctl command on the terminal itself.
Overcommit memory is a kernel parameter that controls whether the kernel checks for available memory before granting allocations. If overcommit_memory is 0, there is a chance that Redis will fail with an OOM (Out of Memory) error. So do yourself a favor and change its value to 1 using the following steps.
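A typical way to do this is the standard sysctl workflow; the exact commands below are my assumption, since the original steps were omitted:

```
sudo sysctl vm.overcommit_memory=1                                # apply immediately
echo "vm.overcommit_memory = 1" | sudo tee -a /etc/sysctl.conf    # persist across reboots
```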
RDB persistence and the Append Only File (AOF) are options used to persist data on disk. If you are using Redis in cluster mode, RDB persistence and AOF are not required, so simply comment out these lines in redis.conf.
sudo vim /etc/redis/redis.conf
# Comment out these lines
save 900 1
save 300 10
save 60 10000
Transparent Huge Page(THP)
Most people are not aware of this term. Basically, the kernel uses the concept of paging to translate between physical and virtual memory. Transparent Huge Pages were introduced to speed up memory mapping, but in practice the feature slows down memory-based databases such as Redis. To overcome this issue, you can disable THP.
sudo vim /etc/rc.local
# Add this line before exit 0
echo never > /sys/kernel/mm/transparent_hugepage/enabled
The graph also shows the difference in performance: the red bar shows performance with THP disabled, and the blue bar with THP enabled.
Some Other Basic Measures in Redis Configuration
maxmemory should be about 70 percent of the system memory, so that Redis will not take all the resources of the server.
The eviction policy (maxmemory-policy) decides what happens when maxmemory is reached; the variant described here evicts a random key that has an expiry time set.
Loglevel should be "notice", so that logging will not take too many resources.
There should also be a timeout value in the Redis configuration, which prevents Redis from spending too much time on a connection: it closes a client's connection if it stays idle for more than 300 seconds.
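Put together, these settings would look something like this in redis.conf. The maxmemory figure assumes a 10 GB host, and the eviction policy name is my reading of the description above, so adjust both to your environment:

```
maxmemory 7gb                      # ~70% of a 10 GB server
maxmemory-policy volatile-random   # evicts a random key that has an expiry set
loglevel notice
timeout 300                        # drop clients idle for more than 300 seconds
```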
So now your Redis is ready to deliver killer performance. In this blog, we have discussed Redis best practices and performance tuning.
There are multiple factors yet to be explored to enhance Redis performance; if you find them before I do, please let me know so I can improve this blog.
In my next blog, I will discuss how we can do Redis performance testing, and how we are doing it in our organization.