Linux Namespaces – Part 2

Before diving into the types of namespaces, we assume you have gone through the first part of our Linux Namespaces series; if not, you can check it here.

 

Types of Namespaces

So basically we have seven types of Linux namespaces:
  1. Cgroup: This namespace virtualizes the view of a process's cgroups, as seen in /proc/[pid]/cgroup. When a new cgroup namespace is created, the process's current cgroup directories become the cgroup root directories of the new namespace. In short, it isolates the cgroup root directory.
  2. IPC (Inter-Process Communication): This namespace isolates inter-process communication resources. For example, Linux has System V IPC objects and POSIX message queues, which allow processes to exchange data. In simple words, the IPC namespace isolates communication between processes.
  3. Network: This namespace isolates network-related resources: network devices, IP addresses and routing tables, port numbers, firewall rules, and so on. Since each network namespace has its own port space, the same port can be reused by a different service in each namespace.
  4. Mount: This namespace isolates the set of mount points that can be seen by the processes in each namespace, so processes in different mount namespaces can each have their own view of the filesystem hierarchy.
  5. PID: This namespace isolates process ID number spaces. Processes in a child PID namespace cannot see or trace processes in the parent namespace, but the parent process can see and trace the processes of the child namespace. Processes in different PID namespaces can have the same PID.
  6. User: This namespace isolates security-related identifiers such as user IDs and group IDs. In simple words, a process can have full (root) privileges inside its user namespace while remaining unprivileged outside it.
  7. UTS: This namespace provides isolation of the hostname and the NIS domain name. Each namespace gets a separate copy of these identifiers, so changing the hostname or domain name inside a namespace does not affect the rest of the system (see the quick demo after this list).
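A quick way to see this isolation in action is the unshare(1) utility from util-linux, a wrapper over the system calls covered in the next section. A minimal UTS sketch (the hostname used here is made up):

# needs root: start a shell inside a new UTS namespace
sudo unshare --uts /bin/bash
hostname demo-inside-ns   # change the hostname inside the namespace
hostname                  # prints: demo-inside-ns
exit
hostname                  # outside, the original hostname is unchanged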

Namespace Management

Namespace management is the most advanced part of working with namespaces, since it happens at the kernel level through system calls; for programmatic control you have to write a C program.
For the management of namespaces, Linux makes these functions available:
  • clone(): Used on its own, clone() simply creates a new process; but if we pass one or more CLONE_NEW* flags, a new namespace is created for each flag and the child process becomes a member of it.
  • setns(): This allows the calling process to join an existing namespace. The namespace is specified by a file descriptor that refers to one of the /proc/[pid]/ns/ files.
  • unshare(): This allows the calling process to disassociate parts of its execution context that are currently shared with other processes, for example the mount namespace. A command-line sketch of these calls follows this list.
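If you want to experiment without writing C, util-linux ships unshare(1) and nsenter(1), thin wrappers over unshare(2)/clone(2) and setns(2) respectively. A minimal sketch (the target PID is a placeholder):

# unshare(1): start a shell in new PID and mount namespaces
sudo unshare --pid --fork --mount-proc /bin/bash
ps aux    # inside, the shell is PID 1 and only this namespace's processes are visible
exit

# nsenter(1): join the network namespace of an existing process via setns()
sudo nsenter --target <pid> --net /bin/bash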

Kafka Manager On Kubernetes

Most of us know Kafka as a durable, scalable and fault-tolerant publish-subscribe messaging system. Recently I got a requirement to efficiently monitor and manage our Kafka cluster, and I started looking at different solutions. Kafka Manager is an open-source tool introduced by Yahoo to manage and monitor an Apache Kafka cluster via a UI.


Before I share my experience of configuring Kafka Manager on Kubernetes, let's go through its notable features.

As per the documentation on GitHub, below are the major features:

Clusters:
  • Manage multiple clusters.
  • Easy inspection of the cluster state.

Brokers:

  • Run preferred replica election.
  • Generate partition assignments with the option to select brokers to use
  • Run reassignment of a partition (based on generated assignments)

Topics:

  • Create a topic with optional topic configs (0.8.1.1 has different configs than 0.8.2+)
  • Delete topic (only supported on 0.8.2+; remember to set delete.topic.enable=true in the broker config)
  • The topic list now indicates topics marked for deletion (only supported on 0.8.2+)
  • Batch generate partition assignments for multiple topics with the option to select brokers to use
  • Batch run reassignment of partition for multiple topics
  • Add partitions to an existing topic
  • Update config for an existing topic

Metrics:

  • Optionally filter out consumers that do not have ids/ owners/ & offsets/ directories in zookeeper.
  • Optionally enable JMX polling for broker level and topic level metrics.

Prerequisites of Kafka Manager:

We should have a running Apache Kafka with Apache Zookeeper.

  • Apache Zookeeper
  • Apache Kafka

Deployment on Kubernetes: 

To deploy Kafka Manager on Kubernetes, we need to create a deployment file and a service file, as given below.

You can find these sample files at https://github.com/vishant07/kafka-manager




After deployment, we should be able to access the Kafka Manager service on port 8080 (for example via http://<node-ip>:8080 when using NodePort).

We use two files, kafka-manager.yaml and kafka-manager-service.yaml, to achieve the above-mentioned setup. Let's have a brief description of the different attributes used in these files, followed by minimal sketches of each.

Deployment configuration file: 


namespace: the Kubernetes namespace used to isolate the application.
replicas: the number of pod replicas to spin up.
image: the path of the Docker image to be used.
containerPort: the port on which the application runs inside the container.
env: "ZK_HOSTS" provides the address of the already running Zookeeper.
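As a minimal sketch, assuming placeholder values for the image, namespace and Zookeeper address (the actual files are in the repository linked above), kafka-manager.yaml could look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-manager
  namespace: kafka                        # placeholder namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-manager
  template:
    metadata:
      labels:
        app: kafka-manager
    spec:
      containers:
        - name: kafka-manager
          image: <your-kafka-manager-image>   # placeholder image path
          ports:
            - containerPort: 8080             # port the application runs on
          env:
            - name: ZK_HOSTS
              value: "zookeeper:2181"         # address of the running Zookeeper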

Service configuration file:

This file contains the details to create the Kafka Manager service on Kubernetes. For demo purposes, I have used the NodePort method to expose the service.

As we are using Kubernetes as our underlying deployment platform, it is recommended not to use an external IP to access any service. We should either go with a LoadBalancer or use an ingress (the recommended method) rather than exposing every microservice individually.
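A matching sketch of kafka-manager-service.yaml using the NodePort method from this demo (again, names are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: kafka-manager
  namespace: kafka          # must match the deployment's namespace
spec:
  type: NodePort            # demo only; prefer a LoadBalancer or ingress
  selector:
    app: kafka-manager      # must match the pod labels in the deployment
  ports:
    - port: 8080
      targetPort: 8080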


To configure ingress, please take a look at the Kubernetes Ingress documentation.


Once we are able to access Kafka Manager, we can see screens similar to the ones below.

Cluster Management


Topic List


Major Issues

To get broker-level and topic-level metrics, we have to enable JMX polling.



What we would generally do is set the environment variable in the Kubernetes manifest, but somehow that does not work most of the time.

To resolve this, update the JMX settings while creating your Docker image, as given below.

vim /opt/kafka/bin/kafka-run-class.sh

if [ -z "$KAFKA_JMX_OPTS" ]; then
  # original setting, commented out:
  # KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false "
  KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=$HOSTNAME -Djava.net.preferIPv4Stack=true"
fi

Conclusion

Deploying Kafka Manager on Kubernetes gives us an easy setup, efficient manageability and all-time availability. Managing a Kafka cluster over the CLI becomes a tedious task, and here Kafka Manager helps us focus more on the use of Kafka rather than investing our time in configuring and managing it. It is especially useful at the enterprise level, where system engineers can manage multiple Kafka clusters easily via the UI.









Redis Best Practices and Performance Tuning

One of the things I love about my organization is that you don't have to do the same repetitive work; you always get the chance to explore new technologies. Such a chance came my way a few days back, when one of our clients was facing issues with Redis.
They were using a Redis Cluster with Sentinel and were facing performance problems: whenever the number of connection requests was high, the Redis Cluster was not able to bear the load.
They were using a decent server configuration in terms of CPU and memory, but the result was the same. So now what?
The answer was to tune the performance.

There are plenty of Redis performance articles out there, but I wanted to share my experience as a DevOps engineer by writing an article that includes the most essential things a developer or a DevOps engineer needs.

So let’s get started.

TCP-KeepAlive

TCP keepalive allows the same TCP connection to be reused for many requests instead of opening a new one for each request, and it lets the kernel periodically probe idle connections so that dead peers are detected and healthy connections stay open.

In simple words, if keepalive is off, clients end up opening new connections to Redis far more often, which slows down its performance. If keepalive is on, Redis keeps serving requests over the same TCP connection.

Let's see the graph for more details: the red bar shows the output when keepalive is on, and the blue bar shows the output when keepalive is off.

To enable TCP keepalive, edit the Redis configuration and set a positive interval in seconds (note that a value of 0 disables keepalive):

vim /etc/redis/redis.conf
# 0 disables keepalive; use a positive interval in seconds
tcp-keepalive 300

Pipelining

This feature could be a lifesaver in terms of Redis performance. Pipelining lets a client send multiple requests to the server without waiting for each reply, and then read all the replies back in a single step.

For example, a whole batch of commands can be sent in one round trip and the replies read back together, as in the sketch below. You can see the effect in the graph as well.
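A minimal illustration using redis-cli, whose --pipe mode streams everything from stdin to the server in one go (the keys and values here are just examples):

# three commands sent in a single round trip instead of three
printf 'SET key1 value1\nSET key2 value2\nGET key1\n' | redis-cli --pipe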

Pipelining can increase the performance of Redis drastically.

Max-Connection

Max-connection here refers to the kernel parameter net.core.somaxconn, which caps the connection backlog for the Redis server. You can set its value according to your server specification with the following steps.

sudo vim /etc/rc.local

# make sure this line is just before of exit 0.
sysctl -w net.core.somaxconn=65365

This step requires a reboot to take effect; if you don't want to reboot the server, execute the same sysctl command directly in the terminal.

Overcommit Memory

Overcommit memory is a kernel parameter that controls how the kernel checks whether memory is available before granting an allocation. If vm.overcommit_memory is 0, background saves can fail under low memory and Redis may hit OOM (Out of Memory) errors, so do me a favor and change its value to 1 using the following steps:

echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
sudo sysctl -p    # apply the change without a reboot

RDB Persistence and Append Only File

RDB persistence and the Append Only File (AOF) are the options used to persist data on disk. If you are using the cluster mode of Redis, with replication to fall back on, RDB persistence and AOF may not be required, so you can simply comment out these lines in redis.conf:

sudo vim /etc/redis/redis.conf

# Comment out the RDB snapshot triggers
# save 900 1
# save 300 10
# save 60 10000

rdbcompression no
rdbchecksum no

appendonly no

Transparent Huge Page(THP)

Most people are not aware of this term. To translate between virtual and physical memory, the kernel uses the concept of paging; Transparent Huge Pages were introduced to speed up memory mapping by using larger pages, but in practice the feature slows down memory-based databases such as Redis. To overcome this issue you can disable THP:

sudo vim /etc/rc.local

# Add this line before exit 0
echo never > /sys/kernel/mm/transparent_hugepage/enabled

The graph also shows the difference in performance: the red bar shows performance with THP disabled and the blue bar with THP enabled.

Some Other Basic Measures in Redis Configuration

maxmemory (about 70% of system memory): maxmemory should be roughly 70 percent of the system's RAM so that Redis does not take up all the resources of the server.

maxmemory-policy (volatile-lru): when the maxmemory limit is reached, this policy evicts the least recently used keys among those that have an expiry set.

loglevel (notice): the log level should be "notice" so that logging does not take too many resources.

timeout (300): there should be a timeout value in the Redis configuration as well, to prevent Redis from holding on to dead connections; it closes a client's connection if it stays idle for more than 300 seconds.
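Put together in redis.conf, these settings would look like the following sketch (the maxmemory value is illustrative; size it to roughly 70% of your RAM):

maxmemory 2gb
maxmemory-policy volatile-lru
loglevel notice
timeout 300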

So now your Redis is ready to deliver killer performance. In this blog, we have discussed Redis best practices and performance tuning.
There are more factors, yet to be explored, that can enhance the performance of Redis; if you find them before I do, please let me know so this blog can be improved.

In my next blog, I will discuss how we can do Redis performance testing and how we are doing it in our organisation.

Resolving Segmentation Fault (“Core dumped”) in Ubuntu

This error may strike your Ubuntu at any moment. A few days ago, while doing my routine work on my Ubuntu laptop, I suddenly encountered the error "Segmentation fault (core dumped)", and I got to know that this error can strike Ubuntu, or any other operating system, at any moment, as crashing binaries don't depend on us. A segmentation fault occurs when your system tries to access a page of memory that doesn't exist, or when code tries to perform a read or write operation on a read-only or freed location. "Core dumped" means the kernel has written an image of the crashed process to a file named core, which can be used for debugging. It generally happens during upgrades.

While running some commands during the core-dump situation, you may encounter "Unable to open lock file"; this is because the system is trying to access a lock block that no longer exists, due to the crashed binaries of some specific programs.

You may try backtracing or debugging to investigate, but the practical solution is to repair the broken packages, which we can do by performing the steps below:

Command-line:
Step 1: Remove the lock files present at different locations, and then restart your system.

sudo rm -rf /var/lib/apt/lists/lock /var/cache/apt/archives/lock /var/lib/dpkg/lock

Step 2: Remove repository cache.

sudo apt-get clean

Step 3: Update and upgrade your repository cache.

sudo apt-get update && sudo apt-get upgrade

Step 4: Now upgrade your distribution, it will update your packages.

sudo apt-get dist-upgrade

Step 5: Find the broken packages and purge them forcefully.

# purge packages whose dpkg status marks them as requiring reinstallation
sudo apt-get purge $(dpkg -l | awk '/^..r/ {print $2}')

Apart from the command line, the best way, which will always work, is:

Step 1: Boot Ubuntu into the GRUB menu by pressing the Esc key (or holding Shift) right after restart.

Step 2: Select Advanced options for Ubuntu

 

Step 3: Run Ubuntu in recovery mode and you will be presented with a list of options.

 
 

Step 4: First select “Repair broken packages”

 

Step 5: Then select “Resume normal boot”

So, we have two methods of resolving a segmentation fault: the CLI and the GUI. Sometimes it may also happen that the apt command itself is not working because of the segfault; in that case the CLI method will not work, but don't worry, as the GUI method will always work for us.