Redis Cluster: Setup, Sharding and Failover Testing

Watching cluster sharding and failover management in action is as gripping as watching robotic machinery at work.

My last blog on Redis Cluster focused primarily on the related concepts and requirements. I would highly recommend going through those concepts first to get a better understanding.

Here, I will move straight to the setup, along with the behaviour of the cluster when I intentionally bring down one Redis service on one of the nodes.
Let's start from scratch.

Redis Setup

Here, I will follow the approach of a 3-node Redis Cluster with Redis v5.0 on all three CentOS 7.x nodes.

Setup Epel Repo

wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -ivh epel-release-latest-7.noarch.rpm

Setup Remi Repo

yum install http://rpms.remirepo.net/enterprise/remi-release-7.rpm
yum --enablerepo=remi install redis

redis-server --version
Redis server v=5.0.5 sha=00000000:0 malloc=jemalloc-5.1.0 bits=64 build=619d60bfb0a92c36

3-Node Cluster Prerequisites

While setting up the Redis Cluster on 3 nodes, I will follow the strategy of having 3 master and 3 slave processes, with one master and one slave running on each node and serving Redis on different ports. As shown in the diagram, the Redis service is running on port 7000 and port 7001.

  • Port 7000 will serve the Redis master
  • Port 7001 will serve the Redis slave

Directory Structure

We need to design the directory structure to serve both Redis configurations.

tree /etc/redis
/etc/redis
`-- cluster
    |-- 7000
    |   `-- redis_7000.conf
    `-- 7001
        `-- redis_7001.conf
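
A minimal sketch of the commands to create this layout (assuming the redis user already exists from the package installation); the /var/lib/redis data directories referenced in the configuration files below are created here as well:

mkdir -p /etc/redis/cluster/7000 /etc/redis/cluster/7001
mkdir -p /var/lib/redis/7000 /var/lib/redis/7001
chown -R redis:redis /etc/redis/cluster /var/lib/redis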

Redis Configuration

The cluster-enabled, cluster-config-file and cluster-node-timeout directives are what turn a standalone Redis instance into a cluster node; note that the nodes_*.conf file is created and maintained by Redis itself and should not be edited by hand.

Configuration file for Redis service 1

cat /etc/redis/cluster/7000/redis_7000.conf
port 7000
dir /var/lib/redis/7000/
appendonly yes
protected-mode no
cluster-enabled yes
cluster-node-timeout 5000
cluster-config-file /etc/redis/cluster/7000/nodes_7000.conf
pidfile /var/run/redis_7000.pid

Configuration file for Redis service 2

cat /etc/redis/cluster/7001/redis_7001.conf
port 7001
dir /var/lib/redis/7001
appendonly yes
protected-mode no
cluster-enabled yes
cluster-node-timeout 5000
cluster-config-file /etc/redis/cluster/7001/nodes_7001.conf
pidfile /var/run/redis_7001.pid

Redis Service File

As we are managing multiple services on a single instance, we need separate unit files for easier management of the Redis services.

Service management file for Redis service 1

cat /etc/systemd/system/redis_7000.service
[Unit]
Description=Redis persistent key-value database
After=network.target

[Service]
ExecStart=/usr/bin/redis-server /etc/redis/cluster/7000/redis_7000.conf --supervised systemd
ExecStop=/bin/redis-cli -h 127.0.0.1 -p 7000 shutdown
Type=notify
User=redis
Group=redis
RuntimeDirectory=redis
RuntimeDirectoryMode=0755
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

Service management file for Redis service 2

cat /etc/systemd/system/redis_7001.service
[Unit]
Description=Redis persistent key-value database
After=network.target

[Service]
ExecStart=/usr/bin/redis-server /etc/redis/cluster/7001/redis_7001.conf --supervised systemd
ExecStop=/bin/redis-cli -h 127.0.0.1 -p 7001 shutdown
Type=notify
User=redis
Group=redis
RuntimeDirectory=redis
RuntimeDirectoryMode=0755
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
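
Once both unit files are in place on a node, reload systemd and enable the services so they start on boot; something along these lines should work on each node:

systemctl daemon-reload
systemctl enable redis_7000.service redis_7001.service
systemctl start redis_7000.service redis_7001.service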

Redis Service Status

Master Service
systemctl status redis_7000.service 
● redis_7000.service - Redis persistent key-value database
   Loaded: loaded (/etc/systemd/system/redis_7000.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-09-25 08:14:15 UTC; 30min ago
  Process: 2902 ExecStop=/bin/redis-cli -h 127.0.0.1 -p 7000 shutdown (code=exited, status=0/SUCCESS)
 Main PID: 2917 (redis-server)
   CGroup: /system.slice/redis_7000.service
           └─2917 /usr/bin/redis-server *:7000 [cluster]
systemd[1]: Starting Redis persistent key-value database...
redis-server[2917]: 2917:C 25 Sep 2019 08:14:15.752 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis-server[2917]: 2917:C 25 Sep 2019 08:14:15.752 # Redis version=5.0.5, bits=64, commit=00000000, modified=0, pid=2917, just started
redis-server[2917]: 2917:C 25 Sep 2019 08:14:15.752 # Configuration loaded
redis-server[2917]: 2917:C 25 Sep 2019 08:14:15.752 * supervised by systemd, will signal readiness
systemd[1]: Started Redis persistent key-value database.
redis-server[2917]: 2917:M 25 Sep 2019 08:14:15.754 * No cluster configuration found, I'm ff3e4300bec02ed4bd1be9af5d83a5b44249c2b2
redis-server[2917]: 2917:M 25 Sep 2019 08:14:15.756 * Running mode=cluster, port=7000.
redis-server[2917]: 2917:M 25 Sep 2019 08:14:15.756 # Server initialized
redis-server[2917]: 2917:M 25 Sep 2019 08:14:15.756 * Ready to accept connections
Slave Service
systemctl status redis_7001.service
● redis_7001.service - Redis persistent key-value database
   Loaded: loaded (/etc/systemd/system/redis_7001.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-09-25 08:14:15 UTC; 30min ago
  Process: 2902 ExecStop=/bin/redis-cli -h 127.0.0.1 -p 7001 shutdown (code=exited, status=0/SUCCESS)
 Main PID: 2919 (redis-server)
   CGroup: /system.slice/redis_7001.service
           └─2919 /usr/bin/redis-server *:7001 [cluster]
systemd[1]: Starting Redis persistent key-value database...
redis-server[2919]: 2917:C 25 Sep 2019 08:14:15.752 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis-server[2919]: 2917:C 25 Sep 2019 08:14:15.752 # Redis version=5.0.5, bits=64, commit=00000000, modified=0, pid=2917, just started
redis-server[2919]: 2917:C 25 Sep 2019 08:14:15.752 # Configuration loaded
redis-server[2919]: 2917:C 25 Sep 2019 08:14:15.752 * supervised by systemd, will signal readiness
systemd[1]: Started Redis persistent key-value database.
redis-server[2919]: 2917:M 25 Sep 2019 08:14:15.754 * No cluster configuration found, I'm ff3e4300bec02ed4bd1be9af5d83a5b44249c2b2
redis-server[2919]: 2917:M 25 Sep 2019 08:14:15.756 * Running mode=cluster, port=7001.
redis-server[2919]: 2917:M 25 Sep 2019 08:14:15.756 # Server initialized
redis-server[2919]: 2917:M 25 Sep 2019 08:14:15.756 * Ready to accept connections

Redis Cluster Setup

Redis itself provides a CLI tool to set up the cluster.
In the current 3-node scenario, I opted for port 7000 on all nodes to serve the Redis masters and port 7001 to serve the Redis slaves.

redis-cli --cluster create 172.19.33.7:7000 172.19.42.44:7000 172.19.45.201:7000 172.19.33.7:7001 172.19.42.44:7001 172.19.45.201:7001 --cluster-replicas 1

The first 3 addresses will be the masters and the next 3 addresses will be the slaves. It will be cross-node replication, i.e. the slave of any master will reside on a different node, and --cluster-replicas defines the replication factor: each master will have 1 slave.

>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.19.42.44:7001 to 172.19.33.7:7000
Adding replica 172.19.45.201:7001 to 172.19.42.44:7000
Adding replica 172.19.33.7:7001 to 172.19.45.201:7000
M: ff3e4300bec02ed4bd1be9af5d83a5b44249c2b2 172.19.33.7:7000
   slots:[0-5460] (5461 slots) master
M: 314038a48bda3224bad21c3357dbff8305735d72 172.19.42.44:7000
   slots:[5461-10922] (5462 slots) master
M: 19a2c81b7f489bec35eed474ae8e1ad787327db6 172.19.45.201:7000
   slots:[10923-16383] (5461 slots) master
S: 896b2a7195455787b5d8a50966f1034c269c0259 172.19.33.7:7001
   replicates 19a2c81b7f489bec35eed474ae8e1ad787327db6
S: 89206df4f41465bce81f44e25e5fdfa8566424b8 172.19.42.44:7001
   replicates ff3e4300bec02ed4bd1be9af5d83a5b44249c2b2
S: 20ab4b30f3d6d25045909c6c33ab70feb635061c 172.19.45.201:7001
   replicates 314038a48bda3224bad21c3357dbff8305735d72
Can I set the above configuration? (type 'yes' to accept):

A dry run will showcase the cluster setup and ask for confirmation.

Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..
>>> Performing Cluster Check (using node 172.19.33.7:7000)
M: ff3e4300bec02ed4bd1be9af5d83a5b44249c2b2 172.19.33.7:7000
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 20ab4b30f3d6d25045909c6c33ab70feb635061c 172.19.45.201:7001
   slots: (0 slots) slave
   replicates 314038a48bda3224bad21c3357dbff8305735d72
M: 314038a48bda3224bad21c3357dbff8305735d72 172.19.42.44:7000
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 19a2c81b7f489bec35eed474ae8e1ad787327db6 172.19.45.201:7000
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 89206df4f41465bce81f44e25e5fdfa8566424b8 172.19.42.44:7001
   slots: (0 slots) slave
   replicates ff3e4300bec02ed4bd1be9af5d83a5b44249c2b2
S: 896b2a7195455787b5d8a50966f1034c269c0259 172.19.33.7:7001
   slots: (0 slots) slave
   replicates 19a2c81b7f489bec35eed474ae8e1ad787327db6
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Check Cluster Status

Connect to any of the cluster nodes to check the status of the cluster.

redis-cli -c -h 172.19.33.7 -p 7000
172.19.33.7:7000> cluster nodes
20ab4b30f3d6d25045909c6c33ab70feb635061c 172.19.45.201:7001@17001 slave 314038a48bda3224bad21c3357dbff8305735d72 0 1569402961000 6 connected
314038a48bda3224bad21c3357dbff8305735d72 172.19.42.44:7000@17000 master - 0 1569402961543 2 connected 5461-10922
19a2c81b7f489bec35eed474ae8e1ad787327db6 172.19.45.201:7000@17000 master - 0 1569402960538 3 connected 10923-16383
ff3e4300bec02ed4bd1be9af5d83a5b44249c2b2 172.19.33.7:7000@17000 myself,master - 0 1569402959000 1 connected 0-5460
89206df4f41465bce81f44e25e5fdfa8566424b8 172.19.42.44:7001@17001 slave ff3e4300bec02ed4bd1be9af5d83a5b44249c2b2 0 1569402960000 5 connected
896b2a7195455787b5d8a50966f1034c269c0259 172.19.33.7:7001@17001 slave 19a2c81b7f489bec35eed474ae8e1ad787327db6 0 1569402959936 4 connected

The Redis Cluster itself manages the cross-node replication; as seen in the output above, the 172.19.42.44:7000 master is associated with the 172.19.45.201:7001 slave.
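
Besides CLUSTER NODES, a quick health summary is available via CLUSTER INFO; for a healthy cluster it should report cluster_state:ok, cluster_slots_assigned:16384 and, in this setup, cluster_known_nodes:6.

redis-cli -c -h 172.19.33.7 -p 7000 cluster info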

Data Sharding

There are 16384 hash slots, which are divided among the servers.
If there are 3 servers, 1, 2 and 3, then (approximately):

  • Server 1 contains hash slots from 0 to 5500.
  • Server 2 contains hash slots from 5501 to 11000.
  • Server 3 contains hash slots from 11001 to 16383.
redis-cli -c -h 172.19.33.7 -p 7000
172.19.33.7:7000> set a 1
-> Redirected to slot [15495] located at 172.19.45.201:7000
OK
172.19.45.201:7000> set b 2
-> Redirected to slot [3300] located at 172.19.33.7:7000
OK
172.19.33.7:7000> set c 3
-> Redirected to slot [7365] located at 172.19.42.44:7000
OK
172.19.42.44:7000> set d 4
-> Redirected to slot [11298] located at 172.19.45.201:7000
OK
172.19.45.201:7000> get b
-> Redirected to slot [3300] located at 172.19.33.7:7000
"2"
172.19.33.7:7000> get a
-> Redirected to slot [15495] located at 172.19.45.201:7000
"1"
172.19.45.201:7000> get c
-> Redirected to slot [7365] located at 172.19.42.44:7000
"3"
172.19.42.44:7000> get d
-> Redirected to slot [11298] located at 172.19.45.201:7000
"4"
172.19.45.201:7000>
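
If you want to check where a key will land without actually writing it, CLUSTER KEYSLOT returns the slot for a given key; the values below match the redirections seen above:

172.19.45.201:7000> cluster keyslot a
(integer) 15495
172.19.45.201:7000> cluster keyslot b
(integer) 3300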

Redis Cluster Failover

Stop Master Service

Let's stop the Redis master service (port 7000) running on one of the nodes, 172.19.42.44.

systemctl stop redis_7000.service
systemctl status redis_7000.service
● redis_7000.service - Redis persistent key-value database
   Loaded: loaded (/etc/systemd/system/redis_7000.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Wed 2019-09-25 09:32:37 UTC; 23s ago
  Process: 3232 ExecStop=/bin/redis-cli -h 127.0.0.1 -p 7000 shutdown (code=exited, status=0/SUCCESS)
  Process: 2892 ExecStart=/usr/bin/redis-server /etc/redis/cluster/7000/redis_7000.conf --supervised systemd (code=exited, status=0/SUCCESS)
 Main PID: 2892 (code=exited, status=0/SUCCESS)

Cluster State (Failover)

While checking the cluster status, the Redis master service running on 172.19.42.44 at port 7000 is shown as failed and disconnected.

At the same moment its respective slave, running on port 7001 on 172.19.45.201, gets promoted to master.

redis-cli -c -h 172.19.33.7 -p 7000
172.19.45.201:7000> CLUSTER NODES
314038a48bda3224bad21c3357dbff8305735d72 172.19.42.44:7000@17000 master,fail - 1569403957138 1569403956000 2 disconnected
ff3e4300bec02ed4bd1be9af5d83a5b44249c2b2 172.19.33.7:7000@17000 master - 0 1569404037252 1 connected 0-5460
896b2a7195455787b5d8a50966f1034c269c0259 172.19.33.7:7001@17001 slave 19a2c81b7f489bec35eed474ae8e1ad787327db6 0 1569404036248 4 connected
89206df4f41465bce81f44e25e5fdfa8566424b8 172.19.42.44:7001@17001 slave ff3e4300bec02ed4bd1be9af5d83a5b44249c2b2 0 1569404036752 5 connected
20ab4b30f3d6d25045909c6c33ab70feb635061c 172.19.45.201:7001@17001 master - 0 1569404036000 7 connected 5461-10922
19a2c81b7f489bec35eed474ae8e1ad787327db6 172.19.45.201:7000@17000 myself,master - 0 1569404035000 3 connected 10923-16383
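
To watch the election as it happens, you can tail the journal of the surviving replica (here the 7001 service on 172.19.45.201) while checking the failure flag from any other node; a rough sketch:

journalctl -u redis_7001.service -f
redis-cli -c -h 172.19.33.7 -p 7000 cluster nodes | grep fail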

Restarting Stopped Redis

Now we will check the behaviour of the cluster once we fix or restart the Redis service that we intentionally brought down earlier.

systemctl start redis_7000.service
systemctl status redis_7000.service
● redis_7000.service - Redis persistent key-value database
   Loaded: loaded (/etc/systemd/system/redis_7000.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-09-25 09:35:12 UTC; 8s ago
  Process: 3232 ExecStop=/bin/redis-cli -h 127.0.0.1 -p 7000 shutdown (code=exited, status=0/SUCCESS)
 Main PID: 3241 (redis-server)
   CGroup: /system.slice/redis_7000.service
           └─3241 /usr/bin/redis-server *:7000 [cluster]

Cluster State (Recovery)

Finally, all Redis services are back in the running state. The master that we turned down and restarted has now rejoined as a slave of its promoted master.

redis-cli -c -h 172.19.33.7 -p 7000
172.19.45.201:7000> CLUSTER NODES
314038a48bda3224bad21c3357dbff8305735d72 172.19.42.44:7000@17000 slave 20ab4b30f3d6d25045909c6c33ab70feb635061c 0 1569404162565 7 connected
ff3e4300bec02ed4bd1be9af5d83a5b44249c2b2 172.19.33.7:7000@17000 master - 0 1569404162000 1 connected 0-5460
896b2a7195455787b5d8a50966f1034c269c0259 172.19.33.7:7001@17001 slave 19a2c81b7f489bec35eed474ae8e1ad787327db6 0 1569404163567 4 connected
89206df4f41465bce81f44e25e5fdfa8566424b8 172.19.42.44:7001@17001 slave ff3e4300bec02ed4bd1be9af5d83a5b44249c2b2 0 1569404163000 5 connected
20ab4b30f3d6d25045909c6c33ab70feb635061c 172.19.45.201:7001@17001 master - 0 1569404162000 7 connected 5461-10922
19a2c81b7f489bec35eed474ae8e1ad787327db6 172.19.45.201:7000@17000 myself,master - 0 1569404161000 3 connected 10923-16383

We are not done yet; next, we can explore having a single endpoint for the application to point to. I am currently working on that and will soon come up with a solution.
Apart from this, monitoring the Redis Cluster will also be a major aspect to look into.
Till then, get your hands dirty playing around with the Redis Cluster setup and failover.


Redis Cluster: Architecture, Replication, Sharding and Failover

Speed fascinates everyone, but only if it's under control.

It is often said, and has proven true for us, that every application needs a cache at some point in its lifecycle, and this became our requirement too.

During the initial phase we placed Redis in Master-Slave mode, with the next phase involving a Sentinel setup to withstand master failover. I would like to throw some light on these architectures along with their pros and cons, so I can emphasize why I finally migrated to Redis Cluster.

Redis Master/Slave

Redis replication is a very simple-to-use-and-configure master-slave replication that allows slave Redis servers to be exact copies of master servers.
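
For reference, a plain master-slave pair is simple to set up: on the slave you either add a replicaof directive to redis.conf or issue REPLICAOF at runtime (REPLICAOF is the Redis 5 alias of the older SLAVEOF; the master address below is a placeholder):

replicaof <master-ip> 6379

or, from redis-cli on the slave:

127.0.0.1:6379> replicaof <master-ip> 6379
OK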

What forced me to look for Redis Sentinel

When using the Master-Slave architecture:

  • There will be only one master with multiple slaves for replication.
  • All writes go to the master, which creates more load on the master node.
  • If the master goes down, the whole architecture is prone to SPOF (single point of failure).
  • The M-S architecture does not help with scaling as your user base grows.
  • So we need a process to monitor the master in case of failure or shutdown: that is Sentinel.

Redis Sentinel

Initial Setup
Failover Handling

I was still concerned about sharding the data for the best performance.

Concept of Redis Cluster

“A query that used to take an hour can run in seconds on cache”.

Redis Cluster is an active-passive cluster implementation that consists of master and slave nodes. The cluster uses hash partitioning to split the key space into 16,384 key slots, with each master responsible for a subset of those slots. 

Each slave replicates a specific master and can be reassigned to replicate another master or be elected to master as needed.

Ports Communication

Each node in a cluster requires two TCP ports. 

  • One port is used for client connections and communication. This is the port you would configure in client applications or command-line tools.
  • The second required port is reserved for node-to-node communication, which uses a binary protocol and allows the nodes to exchange configuration and node-availability information, as sketched below.
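
In practice the cluster bus port is the client port plus 10000, which is why the CLUSTER NODES output in the setup walkthrough shows addresses like 7000@17000. On CentOS 7 with firewalld, opening both ranges for the 7000/7001 layout used there might look roughly like this:

firewall-cmd --permanent --add-port=7000-7001/tcp
firewall-cmd --permanent --add-port=17000-17001/tcp
firewall-cmd --reload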

Failover

When a master fails or is found to be unreachable by the majority of the cluster, as determined by the nodes' communication via the gossip port, the remaining masters hold a vote and elect one of the failed master's slaves to take its place.

Rejoining The Cluster

When the failing master eventually rejoins the cluster, it will join as a slave and begin to replicate another master.

Sharding

Redis shards data automatically across the servers.
Redis uses the concept of hash slots in order to split the data: every key is mapped to a slot.
There are 16384 slots, which are divided among the servers; the mapping rule is shown after the example below.

If there are 3 servers, A, B and C, then (approximately):

  • Server A contains hash slots from 0 to 5500.
  • Server B contains hash slots from 5501 to 11000.
  • Server C contains hash slots from 11001 to 16383.
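
The mapping itself is deterministic: the slot for a key is computed as

HASH_SLOT = CRC16(key) mod 16384

so any client (or the CLUSTER KEYSLOT command) can work out which server owns a given key. Keys containing a hash tag, i.e. a substring wrapped in {braces}, are hashed only on that substring, which is how related keys can be forced onto the same slot.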

6 Node M/S Cluster

In a 6-node cluster, 3 nodes will serve as masters and the other 3 nodes will be their respective slaves.

Here, the Redis service will be running on port 6379 on all servers in the cluster. Each master replicates its keys to the respective Redis slave node assigned during the cluster creation process.

3 Node M/S Cluster

In a 3-node cluster, there will be 2 Redis services running on each server on different ports. All 3 nodes will serve as masters, with the Redis slaves placed on cross nodes.

Here, two Redis services will be running on each server on two different ports, and each master replicates its keys to its respective Redis slave running on another node.

WHAT IF Redis Goes Down

1 node goes down in a 6 node Redis Cluster

If one of the nodes goes down in a 6-node Redis Cluster setup, its respective slave will be promoted to master.

In the above example, master Server 3 goes down and its slave Server 6 is promoted to master.

1 node goes down in a 3 node Redis Cluster

If one of the nodes goes down in a 3-node Redis Cluster setup, its respective slave running on a separate node will be promoted to master.

In the above example, Server 3 goes down and the slave running on Server 1 is promoted to master.

Redis service goes down on one node of the 3-node Redis Cluster

If a Redis service goes down on one of the nodes in a 3-node Redis Cluster setup, its respective slave will be promoted to master.

Conclusion

Although this methodology protects the Redis Cluster only in partial failover scenarios, if we want to survive a full failure we need to look at disaster recovery techniques as well.

Well, this implementation helped me sleep soundly while thinking about Redis availability, sharding and performance.

Enough reading; eager to know how all this works when it comes to implementation? Don't worry, my next blog, Redis Cluster: Setup, Sharding and Failover Testing, will guide you through the process.

Enjoy a happy and safe Diwali!

Stay Away, Replication Lag!

Recently, I got a requirement to facilitate backup for the data and a way to analyze it without using the main database. MySQL replication is a process that allows you to easily maintain multiple copies of MySQL data by having them copied automatically from a master to a slave database. 

Panic Starts

Everything was running smoothly on the night I configured it. But the joy didn't last long: as the traffic hit in the morning, the slave started falling behind the master by a few seconds, a gap which grew with the activity on the application. At peak time, it was in the thousands of seconds.

What now? I had to dig deep into MySQL Replication:

  • How it works 
  • What can probably cause the lag
  • An approach that minimizes or eliminates it

How MySQL Replication Works

On the master

First of all, the master writes replication events to a special log called the binary log. This is usually a very lightweight activity because the writes are buffered and sequential. The binary log file stores the data that the replication slave will be reading later.

On the replica

When you start replication, two threads are started on the slave:
1. IO thread
This thread connects to the master, reads binary log events from the master as they come in, and copies them over to a local log file called the relay log.
Even though there is only one thread reading the binary log on the master and one writing the relay log on the slave, the copying of replication events is very rarely the slower part of replication. There could be a network delay, causing a steady lag of a few hundred milliseconds.
If you want to see where the IO thread currently is, check the following in "show slave status \G":
Master_Log_File – the last file copied from the master (most of the time it is the same as the last binary log written by the master)
Read_Master_Log_Pos – the binary log from the master has been copied over to the relay log on the slave up to this position.
You can then compare it to the output of "show master status\G" on the master.

mysql> show master status\G;
*************************** 1. row ***************************
             File: db01-binary-log.000032
         Position: 1008761891
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.00 sec)

2. SQL thread
The second thread – the SQL thread – reads events from the relay log stored locally on the replication slave (the file written by the IO thread) and then applies them as fast as possible.
Going back to "show slave status \G", you can get the current status of the SQL thread from the following variables:
Relay_Master_Log_File – the binary log from the master that the SQL thread is "working on" (in reality it is working on the relay log, so this is just a convenient way to display the information)
Exec_Master_Log_Pos – the position in the master's binary log currently being executed by the SQL thread.

mysql> show slave status\G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: <master_ip>
                  Master_User: <replication_user>
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: db01-binary-log.000032
          Read_Master_Log_Pos: 1008768810
               Relay_Log_File: relay-bin.000093
                Relay_Log_Pos: 1008769035
        Relay_Master_Log_File: db01-binary-log.000032
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
.
.
          Exec_Master_Log_Pos: 1008768810
              Relay_Log_Space: 1008769305
.
.
        Seconds_Behind_Master: 0
.
.
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
.
.
1 row in set (0.00 sec)

Why Replication Lag Occurred

Replication lag occurs when the slaves cannot keep up with the updates occurring on the master. Unapplied changes accumulate in the slave’s relay logs and the version of the database on the slaves becomes increasingly different from that of the master.
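
The simplest number to watch for this is Seconds_Behind_Master from SHOW SLAVE STATUS; a quick way to keep an eye on it from the shell (assuming credentials are available via ~/.my.cnf) could be:

watch -n 5 'mysql -e "SHOW SLAVE STATUS\G" | grep -E "Seconds_Behind_Master|Slave_IO_Running|Slave_SQL_Running"'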

Caught The Culprit

Let me take you through my journey of how I crossed the river.
First, I went through multiple blogs and started filling my mind with possible reasons, such as:

  • Hardware Faults (getting RAID in degraded mode)
  • MySQL Config Updates 
    • setting sync_binlog=1
    • enabling log_slave_updates
    • setting innodb_flush_log_at_trx_commit=1
    • updating slave_parallel_workers to a higher value
    • changing slave_parallel_type to support more parallel workers
  • Restarting Replication 

But unfortunately, or perhaps because of my benightedness towards database administration, I was still searching for the twig that could keep me from drowning.
And finally, I found one: my DBA friend, who suggested I look at the binary log format I was using. Let's see what it is.

Binary Logging Formats

The server uses several logging formats to record information in the binary log. The exact format employed depends on the version of MySQL being used. There are three logging formats:
STATEMENT: With statement-based replication, every SQL statement that could modify data is logged on the master. Those SQL statements are then replayed on the slaves against the same dataset and in the same context. There is always less data to be transferred between the master and the slave, but data inconsistency issues between the master and the slave can creep in due to the way this kind of replication works.
ROW: With row-based replication, every "row modification" is logged on the master and is then applied to the slave. Since each and every change is replicated, this is the safest form of replication. On a system that frequently UPDATEs a large number of rows, however, it produces very large update logs and generates a lot of network traffic between the master and the slave.
MIXED: A third option is also available: mixed logging. With mixed logging, statement-based logging is used by default, but the logging mode switches automatically to row-based in certain cases.
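
Before changing anything, it is worth confirming which format the master is currently writing; in the situation described below this would have shown STATEMENT:

mysql> SHOW VARIABLES LIKE 'binlog_format';
+---------------+-----------+
| Variable_name | Value     |
+---------------+-----------+
| binlog_format | STATEMENT |
+---------------+-----------+
1 row in set (0.00 sec)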

Changing Binary Log Format

The binary log format is changed on the master MySQL server. It can be set persistently in the MySQL configuration file (which requires a service restart to take effect) or changed at runtime at the global or session level.

  • set at startup with --binlog-format=format
  • set the global value of the binlog_format server variable (requires the SUPER privilege)
  • set the session value of the binlog_format server variable
mysql> SET GLOBAL binlog_format=MIXED;

mysql> SET SESSION binlog_format=ROW;

mysql> SET binlog_format=STATEMENT; 
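
A SET GLOBAL change lasts only until the next MySQL restart; to make it permanent, the same setting also needs to go into the MySQL configuration file on the master (path may vary, e.g. /etc/my.cnf):

[mysqld]
binlog_format = MIXED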

So, earlier I was using the STATEMENT binlog format, which was the default one. Since I switched to the MIXED binlog format, I am very delighted to share the stats below.
The current status of the difference between the master read and slave execute positions, and the slave lag (in seconds): both are ZERO.

Replication Lag (in Seconds) graph for a month, powered by Prometheus-Grafana.

Now, what's next?