Tuesday, April 16, 2019

Redis Best Practices and Performance Tuning

One of the things I love about my organization is that you don't have to do the same repetitive work; you always get the chance to explore new technologies. That chance came my way a few days back when one of our clients was facing issues with Redis.
They were running a Redis Cluster with Sentinel and were facing performance problems: whenever the number of connection requests was high, the cluster was not able to bear the load.
They were using a server with a decent CPU and memory configuration, yet the result was the same. So now what?
The answer was to tune the performance.

There are plenty of Redis performance articles out there, but I wanted to share my experience as a DevOps engineer with Redis by writing an article that covers the most essential and important things a developer or a DevOps engineer needs.

So let's get started.


TCP-KeepAlive

TCP keepalive is a mechanism that keeps established TCP connections alive and detects dead peers, so that clients can keep reusing the same connection for their requests instead of opening a new one each time.
In simple words, if keepalive is off, broken connections from crashed or unreachable clients linger around and clients end up opening fresh connections for their requests, which slows Redis down. If keepalive is on, Redis periodically probes idle connections, healthy connections stay open and reusable, and dead ones get cleaned up.

Let's look at the graph for more details: the red bars show the output when keepalive is on and the blue bars show the output when it is off.

To tune TCP keepalive, edit the Redis configuration and set the keepalive interval in seconds. Note that a value of 0 actually disables keepalive, so use a positive value such as 300 (the default since Redis 3.2.1).

vim /etc/redis/redis.conf
# Keepalive interval in seconds (0 disables keepalive)
tcp-keepalive 300
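If you do not want to restart Redis, you can also inspect and, if your Redis version allows it at runtime, change the value with redis-cli (a quick sketch, assuming Redis is listening on the default port):

redis-cli CONFIG GET tcp-keepalive
redis-cli CONFIG SET tcp-keepalive 300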

Pipelining

This feature can be a lifesaver for Redis performance. Pipelining allows a client to send multiple commands to the server without waiting for each reply, and then read all the replies back in a single step.

Pipelining can improve Redis throughput dramatically, because the cost of each network round trip is shared across many commands instead of being paid once per command.
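For example, here is a minimal sketch using the raw protocol (assuming Redis is listening on localhost:6379 and netcat is installed): three PING commands are written in one go, and the three replies come back together.

(printf "PING\r\nPING\r\nPING\r\n"; sleep 1) | nc localhost 6379
# Expected output: the three replies, read in a single step
+PONG
+PONG
+PONG

You can also get a feel for the gain with redis-benchmark, whose -P option sends that many commands per pipeline (for example, redis-benchmark -q -P 16).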

Max-Connection

Max connection here refers to the kernel parameter net.core.somaxconn, which caps the TCP listen backlog, i.e. how many incoming connections can be queued for the Redis server at a time. You can raise that value (considering your server specification) with the following steps.

sudo vim /etc/rc.local

# Make sure this line is placed just before exit 0.
sysctl -w net.core.somaxconn=65535

This change takes effect on the next reboot; if you don't want to reboot the server, run the same sysctl command on the terminal right away.
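Note that somaxconn only controls the connection queue. Redis's own hard limit on concurrent clients is the maxclients directive (10000 by default), and the backlog it requests is tcp-backlog. A minimal sketch of the matching redis.conf settings (the values are illustrative, not mandatory):

# /etc/redis/redis.conf
maxclients 10000
tcp-backlog 65535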

Overcommit Memory

Overcommit memory is a kernel parameter that controls how the kernel grants virtual memory to processes. If it is left at the default value of 0, Redis background saves (which fork the whole process) can fail with out-of-memory errors even though enough RAM is effectively available. So do me a favor and change its value to 1 using the following step:

echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
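The sysctl.conf entry is picked up on the next boot; to apply the setting immediately as well, you can run:

sudo sysctl vm.overcommit_memory=1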

RDB Persistence and Append Only File

RDB persistence and the Append Only File (AOF) are the two options used to persist data on disk. If you are using Redis purely as a cache, or you run it in cluster mode with replicas and can afford to lose writes that have not been replicated when a node restarts, you can disable them and trade durability for performance. Simply adjust these lines in redis.conf:

sudo vim /etc/redis/redis.conf

# Comment out the snapshot rules to disable RDB snapshots
# save 900 1
# save 300 10
# save 60 10000

# These two only matter if RDB snapshots are still being written
rdbcompression no
rdbchecksum no

# Keep the append only file disabled
appendonly no
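If you prefer not to restart Redis, the same can be done at runtime (a sketch, assuming redis-cli can reach your instance; add -a if authentication is enabled):

redis-cli CONFIG SET save ""
redis-cli CONFIG SET appendonly no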

Transparent Huge Pages (THP)

Most people are not aware of this one. The kernel translates virtual memory to physical memory using pages, and Transparent Huge Pages makes it use large pages to speed up that mapping. This helps many workloads, but it tends to slow down memory-based databases such as Redis, because fork-based persistence then has to copy much larger pages on write, increasing latency and memory usage. To overcome this issue you can disable THP.

sudo vim /etc/rc.local

# Add this line before exit 0
echo never > /sys/kernel/mm/transparent_hugepage/enabled
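To turn THP off immediately without waiting for a reboot, you can run the same command directly (note that plain echo with sudo will not work, because the redirection happens in your own shell):

echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled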

The graph also shows the difference in performance: the red bars show the result with THP disabled and the blue bars with THP enabled.


Some Other Basic Measures in Redis Configuration

maxmemory (recommended value: ~70% of system RAM): cap Redis at roughly 70 percent of the machine's memory so it does not take all the resources of the server.

maxmemory-policy (recommended value: volatile-lru): when the memory limit is reached, evict the least recently used keys among those that have an expiry set.

loglevel (recommended value: notice): keep the log level at "notice" so that logging does not consume too many resources.

timeout (recommended value: 300): close a client connection that has been idle for more than 300 seconds, so Redis does not hold on to dead connections.
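Putting it together, the relevant redis.conf lines might look like this (a sketch; the maxmemory figure assumes a server with about 10 GB of RAM, so adjust it to roughly 70% of yours):

maxmemory 7gb
maxmemory-policy volatile-lru
loglevel notice
timeout 300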

So now your Redis is ready to deliver killer performance. In this blog we have discussed Redis best practices and performance tuning.
There are more factors that can be explored to enhance the performance of Redis; if you find one before I do, please let me know so I can improve this blog.

In my next blog, I will discuss how to do Redis performance testing and how we do it in our organisation.

Tuesday, April 9, 2019

Jenkins authorization using "Role-Based Strategy"



Jenkins is an open source automation server written in Java.
Jenkins helps to automate the non-human part of the software
development process, with continuous integration and by facilitating
technical aspects of continuous delivery. It is a server-based system
that runs in servlet containers such as Apache Tomcat.

The Security Realm, or authentication, indicates who can access the
Jenkins environment. The other piece of the puzzle is Authorization,
which indicates what they can access in the Jenkins environment. 

By default Jenkins supports a few different Authorization options: 

Anyone can do anything 

Everyone gets full control of Jenkins, including anonymous users who
haven’t logged in.

 Do not use this setting for anything other than local test Jenkins masters.

Legacy mode 

Behaves exactly the same as Jenkins <1.164. Namely, if a user has the
"admin" role, they will be granted full control over the system; otherwise
(including anonymous users) they will only have read access.
Do not use this setting for anything other than local test Jenkins masters.
  
Logged-in users can do anything  

In this mode, every logged-in user gets full control of Jenkins. Depending
on an advanced option, anonymous users get read access to Jenkins, or
no access at all. This mode is useful to force users to log in before taking
actions, so that there is an audit trail of users' actions.

Matrix-based security  

This authorization scheme allows for granular control over which users
and groups are able to perform which actions in the Jenkins environment.

Project-based Matrix Authorization Strategy  

This authorization scheme is an extension to Matrix-based security which
allows additional access control lists (ACLs) to be defined for each project
separately in the Project configuration screen. This allows granting specific
users or groups access only to specified projects, instead of all projects in
the Jenkins environment.

Role-Based Strategy

The Role Strategy plugin adds a role-based mechanism for managing users'
permissions in Jenkins. It uses regular expressions to match project names,
which is why it is handy when there is a large number of projects.
In this blog I am going to walk you through the steps to enable the
"Role-Based Strategy", so you can use it to give different users permission
to different jobs.

Install Role-based Authorization Plugin


To use the "Role-Based Strategy", you first need to install the plugin by
performing the following steps.
  • Login to Jenkins with your admin account. 
  • Select “Manage Jenkins”. 
  • Select “Manage Plugins”. 
  • Select “Available” tab. 
  • Search for “role” in the Filter text box.
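If you prefer the command line, the same plugin can be installed with the Jenkins CLI (a sketch; it assumes the plugin's short name is role-strategy and that you have credentials with admin rights):

java -jar jenkins-cli.jar -s http://localhost:8080/ -auth admin:API_TOKEN install-plugin role-strategy
java -jar jenkins-cli.jar -s http://localhost:8080/ -auth admin:API_TOKEN safe-restart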



Enable Role-based Authorization

To enable this authorization method you need to follow these steps -
  • Click on “Manage Jenkins”. 
  • Choose “Configure Global Security”. 
  • Select “Role-Based Strategy” present under “Access Control” section.


Manage and Assign Roles

To manage Jenkins roles you have to follow these steps -
  • Click on “Manage Jenkins”.
  • And then select “Manage and Assign Roles”.



After selecting “Manage and Assign Roles”, you will have the following
choices.



Create a New Global Role

Select "Manage Roles"; here you can create global roles, which will be
applicable to all the objects in Jenkins.
After creating the global role you have to create the project roles as shown
below. Don't forget to click on "Save".


In this case the user will be able to see the jobs whose names end with the
words given above (e.g. Dev, QA, etc.).

Assign Users to the Groups


After creating the roles with the required permissions, you need to assign
users to them.
Now, go back to the previous page and click on "Assign Roles".
First assign one global role to all the users, and then assign project roles
to users as per the requirement.



After this, click on "Save", and the users will now be able to see only those
projects for which they have permission.

References

https://www.thegeekstuff.com/2017/03/jenkins-users-groups-roles/
https://jenkins.io/doc/book/managing/security/#authorization



Tuesday, April 2, 2019

Resolving Segmentation Fault (“Core dumped”) in Ubuntu

This error may strike your Ubuntu system at any moment.


A few days ago, while doing my routine work on my Ubuntu laptop, I suddenly
ran into the error "Segmentation fault (core dumped)". That is when I learnt
that this error can hit Ubuntu, or any other operating system, at any moment,
because we have no control over when a binary crashes. A segmentation fault
happens when a process tries to access memory it is not allowed to touch, for
example a page that does not exist, or a read-only or already freed location.
"Core dumped" means the kernel wrote an image of the crashed process's memory
(a core file) to disk for later debugging. In my case, the segfaults showed up
during a package upgrade.
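If you want to investigate a crash rather than just recover from it, you can enable core dumps and look at the backtrace with gdb (a rough sketch; on Ubuntu, apport may intercept core files, so the exact core file location can differ):

ulimit -c unlimited                # allow core files in the current shell
./the-crashing-program             # reproduce the crash
gdb ./the-crashing-program core    # load the binary together with its core file
# then type "bt" at the (gdb) prompt to print the backtrace of the crash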


While running commands in this core-dump situation you may also encounter
"Unable to open lock file". This usually means a package-manager lock file is
still held, or was left behind, because the binaries of some specific programs
crashed.
You could backtrack or debug further, but the practical solution is to repair
the broken packages, which you can do with the steps below:

Command line:
Step 1: Remove the lock files present at different locations, then restart your system.
sudo rm -rf /var/lib/apt/lists/lock /var/cache/apt/archives/lock /var/lib/dpkg/lock
Step 2: Clean the repository cache.
sudo apt-get clean
Step 3: Update and upgrade your repository cache and packages.
sudo apt-get update && sudo apt-get upgrade
Step 4: Now upgrade your distribution; it will update the remaining packages.
sudo apt-get dist-upgrade
Step 5: Find the packages marked as broken (reinstall required) and purge them.
sudo dpkg -l | grep '^..r' | awk '{print $2}' | xargs sudo apt-get -y purge
Apart from the command line, the method that will almost always work is recovery mode:
Step 1: Boot into the GRUB menu by pressing the Esc key while Ubuntu starts.
Step 2: Select "Advanced options for Ubuntu".

Step 3: Boot Ubuntu in recovery mode, and you will be presented with a list of options.


Step 4: First select “Repair broken packages”

Step 5: Then select “Resume normal boot”

So, we have two methods of resolving a segmentation fault: the CLI and the GUI.
Sometimes the "apt" command itself is broken because of the segfault, so the
CLI method will not work; don't worry, in that case the recovery-mode (GUI)
method will still work for you.

Sunday, March 31, 2019

The closer you think you are, the less you'll actually see

I hope you have seen the movie Now You See Me; it has a famous quote: "The closer you think you are, the less you'll actually see." Well, this blog is not about the movie, but about how I got stuck on an issue because I was not paying attention, not looking at things closely, and therefore seeing less, and so could not resolve it.

There is a lot happening in today's DevOps world, and HashiCorp has emerged as a big player in this game. Terraform is one of its open source tools for managing infrastructure as code, and it plays well with most cloud providers. But with all these continuous improvements and enhancements comes the possibility of issues as well. This article is about one such scenario, and in case you have found yourself in the same trouble, you are lucky to have reached the right page.

I was learning Terraform and performing a simple task: launching an Ubuntu EC2 instance in the us-east-1 region. For that I needed the AMI ID, which I copied from the AWS console as shown in the screenshot below.


Once I had the AMI ID, I tried to create the instance using Terraform; below is the code.

provider "aws" {
  region     = "us-east-1"
  access_key = "XXXXXXXXXXXXXXXXXX"
  secret_key = "XXXXXXXXXXXXXXXXXXX"
}

resource "aws_instance" "sandy" {
        ami = "ami-036ede09922dadc9b"
        instance_type = "t2.micro"
        subnet_id = "subnet-0bf4261d26b8dc3fc"
}

I was expecting to see the magic of Terraform, but what I got was the ugly error below.


Terraform was not allowing me to spin up the instance. I tried a couple of things which didn't work, and as you can see the error message didn't give much information. Finally, I thought of trying the same task via the AWS web console. I searched for the same Ubuntu AMI and selected the image as shown below, keeping everything else at the defaults. And well, this time it launched.



That confused me even more. Through the console it was working fine, but through Terraform it said "not allowed". After a lot of hair pulling I finally found the culprit, which is a perfect example of how overlooking small things can lead to a blunder.

Culprit

While copying the AMI ID from the AWS console, I had copied the 64-bit (ARM) AMI ID. Look carefully at the screenshot below.


But while creating it through the console I was selecting the default configuration, which is 64-bit (x86). Look at the screenshot below.


To confirm it, I tried to launch the VM manually with the 64-bit (ARM) image, selecting 64-bit (ARM) while choosing the AMI.


And here is the culprit: the 64-bit (ARM) image only supports the a1 instance types, not t2.micro.


Conclusion

While launching the instance with Terraform, I had mistakenly used the 64-bit (ARM) AMI ID, primarily because the same AMI is listed with two AMI IDs and the difference is not very visible unless you pay special attention.
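One quick way to protect yourself (a sketch, assuming you have the AWS CLI configured for the right region) is to check the architecture of an AMI before pasting its ID into Terraform:

aws ec2 describe-images --image-ids ami-036ede09922dadc9b \
    --query 'Images[0].[Name,Architecture]' --output text

If the output says arm64 instead of x86_64, that AMI will not work with t2.micro.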

So folks, next time you choose an AMI ID, keep in mind what type of AMI you are selecting. It will save you a lot of time.
