Linux OS Hardening: CIS Benchmarks

As we go through a pandemic, the majority of businesses have moved online with options like work from home, and as more and more of our work moves onto the internet, our concerns about cybersecurity become more and more prominent. We start digging for standards to put in place, and terms like Compliance, Hardening, CIS, HIPAA, and PCI-DSS come up. Today we'll discuss why you should have CIS benchmarks in place at the very least, and how we at Opstree have automated this for our clients.

Before moving forward, let's get familiar with some basic terms:

CIS Benchmarks are security best practices created by the Center for Internet Security (CIS) to improve the security configuration of an organization's systems.

Create Your Own Container Using Linux Namespaces Part-1.

In this lockdown everyone has to maintain social distance, and in these trying times we can learn a thing or two from Docker about isolating ourselves. But before that, we need to learn how Docker actually does it.
The best way to learn is to simulate it, so we'll be building our own small container tool that isolates an application.
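Before diving in, here is a quick, hedged taste of the building blocks involved; the unshare(1) utility from util-linux wraps the same namespace system calls that Docker relies on (the exact flags below are only an illustration, not the tool we will build):

# list the namespaces the current shell belongs to
ls -l /proc/self/ns/
# start a shell in new PID and mount namespaces, similar in spirit to a container
sudo unshare --pid --fork --mount-proc bash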


 


Unix File Tree Part-2

For those who have surfed straight to this blog, please check out the previous part of this series, Unix File Tree Part-1; for those who have stayed tuned for this part, welcome back. In the previous part, we discussed the philosophy behind the file tree and the need for it. In this part, we will dive deep into the significance of each directory.


Dayum!! That's a lot of stuff to gulp down at once, so we'll knock things off one after the other.

Major directories

Let’s talk about the crucial directories which play a major role.

  • /bin: When we started crawling on Linux, this is what helped us get on our feet. Yes, you read that right: whether you want to copy a file, move it somewhere, create a directory, or find out the date or the size of a file, all the basic operations without which the OS won't even listen to you happen because of the executables present in this directory. Most of the programs in /bin are in binary format, created by a C compiler, but on modern systems some are shell scripts.
  • /etc: When you want things to behave the way you want, you go to /etc and put all your desired configuration there (imagine if your girlfriend had an /etc, life would have been easier). Whether it is the various services or the daemons running on your OS, this directory makes sure things work the way you want them to.
  • /var: This is the guy who has kept an eye on everything since the time you booted the system (consider him like Heimdall from Thor). It contains files to which the system writes data during the course of its operation. Among the various sub-directories within /var are /var/cache (cached data from application programs), /var/games (variable data relating to games in /usr), /var/lib (dynamic data libraries and files), /var/lock (lock files created by programs to indicate that they are using a particular file or device), /var/log (log files), /var/run (PIDs and other system information that is valid until the system is booted again) and /var/spool (mail, news and printer queues).
  • /proc: You can think of /proc like the thoughts in your brain: illusory and virtual. Being an illusory file system, it does not exist on disk; instead, the kernel creates it in memory. It is used to provide information about the system (originally about processes, hence the name). If you navigate to /proc, the first thing you will notice is some familiar-sounding files, and then a whole bunch of numbered directories. The numbered directories represent processes, better known as PIDs, and within each of them lives everything about the command that owns it. The plain files contain system information such as memory (meminfo), CPU information (cpuinfo), and available filesystems; there is a quick look at this right after the list below.
  • /opt: It is like a guest room in your house where a guest stayed for a prolonged period and became part of your home. This directory is reserved for software and add-on packages that are not part of the default installation.
  • /usr: In the original Unix implementations, /usr was where the home directories of the users were placed (that is to say, /usr/someone was then the directory now known as /home/someone). In current Unixes, /usr is where user-land programs and data (as opposed to 'system land' programs and data) live. The name hasn't changed, but its meaning has narrowed from "everything user related" to "user-usable programs and data". As such, some people now read the name as 'User System Resources' rather than 'user' as was originally intended.
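Since /proc is just files, the quickest way to make it concrete is to poke at it from a shell; a small, hedged example (the exact output obviously depends on your machine):

# the numbered directories in /proc are live processes; $$ is the current shell's PID
ls /proc/$$/
# system information exposed as ordinary files
head -n 3 /proc/meminfo /proc/cpuinfo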

Potato or Potaaato, what is the difference?

We'll be discussing the directories that always confuse us: the ones that have almost the same purpose but still live in separate locations, and about which, when asked, we go ummmm…….

/bin vs /usr/bin vs /sbin vs /usr/local/bin

This should already be mostly clear from the explanation of /usr in the paragraph above. Since the Unix designers planned /usr to hold the local directories of individual users, it grew sub-directories like /usr/bin, /usr/sbin and /usr/local/bin. But the question remains the same: how is their content different?

/usr/bin:

  • /usr/bin is a standard directory on Unix-like operating systems that contains most of the executable files that are not needed for booting or repairing the system. 
  • A few of the most commonly used are awk, clear, diff, du, env, file, find, free, gzip, less, locate, man, sudo, tail, telnet, time, top, vim, wc, which, and zip.

/usr/sbin:

  • The /usr/sbin directory contains non-vital system utilities that are used after booting.
  • This is in contrast to the /sbin directory, whose contents include vital system utilities that are necessary before the /usr directory has been mounted (i.e., attached logically to the main filesystem). 
  • A few of the more familiar programs in /usr/sbin are adduser, chroot, groupadd, and userdel. 
  • It also contains some daemons such as crond and sshd, programs that run silently in the background rather than under the direct control of a user, waiting until they are activated by a particular event or condition (a quick way to check where any given command lives on your own system is shown just after this list).
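If you are ever unsure which of these directories a command actually comes from on your box, the shell can tell you; a tiny sketch (the paths in the comments are typical, not guaranteed):

# show every location a command resolves from on the current PATH
type -a awk                 # usually /usr/bin/awk
command -v adduser sshd     # usually /usr/sbin/... (sbin may need to be on your PATH)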

I hope I have covered most of the directories you will come across frequently and answered your questions.
Now that we know the significance of each UNIX directory, it's time to use them wisely, the way they are meant to be used.
Please feel free to reach out to me with any suggestions.
Goodbye till next time!

References:
https://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/usr.html
https://askubuntu.com/questions/130186/what-is-the-rationale-for-the-usr-directory
https://askubuntu.com/questions/308045/differences-between-bin-sbin-usr-bin-usr-sbin-usr-local-bin-usr-local
http://index-of.es/Varios-2/How%20Linux%20Works%20What%20Every%20Superuser%20Should%20Know.pdf
https://imgflip.com/memegenerator

Where there is a shell, There is a way.

Well, as a DevOps engineer I like to play around with shell scripts and shell commands, especially on a remote system, as it just adds some level of fun to it. But what's more thrilling than running shell scripts and commands on a remote server? Making them return dynamic web pages or JSON from that remote system.

Yes, for most of us it comes as a surprise that, just like PHP, JSP or ASP, shell scripts can also return dynamic web pages. But, as a wise man said a long time ago: "where there is a shell, there is a way".

Isn’t PHP or JSP a better option for web development?

For a web developer … yes, but as a DevOps engineer I want to do as much as possible from a shell script, and having the shell as a server-side language is quite useful, because we all know the power of shell scripts.

Why do we need this exactly?

Isn't 'for fun' an obvious enough reason? But for those who want more than that, I have some points:

  • We can use it as a time-series-based data exporter.
  • We might want an API that returns system info as JSON, and we don't have access to PHP.
  • We might want to see system information as a web page when we hit a URL.
  • It's not limited to system info; you can serve whatever you want with it.
  • With the bare minimum on your machine, you can get the most out of it.

Let’s get started

Now let's get done with the boring part, i.e. configuring Apache.
I am assuming that Apache is already installed on the system, as it is needed to serve your web pages. To let Apache execute your scripts, you need to enable the CGI config with a couple of simple commands.
$ cd /etc/apache2/mods-enabled
$ sudo ln -s ../mods-available/cgi.load
and you are ready to go.
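On Debian/Ubuntu systems the same thing can be done with the helper tools that ship with Apache (a hedged alternative, not required if you made the symlink above):

$ sudo a2enmod cgi
$ sudo systemctl restart apache2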
Now move to the directory where you are going to put your shell scripts.
$ cd /usr/lib/cgi-bin
Once in the directory, create a new file hello.sh
$ vim hello.sh
and write the following script:
#!/bin/bash
# a CGI response starts with a Content-type header followed by a blank line
echo "Content-type: text/html"
echo ""
echo "hello world! from shell script"
Make sure you make that file executable.
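That just means a quick chmod on the script we created above:
$ chmod +x hello.sh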
Now I think you have a pretty good idea of what your webpage is going to display.
So restart the Apache server
$ sudo systemctl restart apache2.service
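Once Apache is back up you can sanity-check the script from the same machine; on Debian/Ubuntu the stock CGI configuration maps /cgi-bin/ URLs to /usr/lib/cgi-bin/ (if your layout differs, adjust the URL accordingly):
$ curl http://localhost/cgi-bin/hello.sh
hello world! from shell script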

Let’s take it to the next level

Now let's see what else we can do. Unlike PHP, Java or Python, we don't have any framework for shell scripts, so we have to do a bit more work ourselves. But that's the fun part, right?
So let’s get started

Now we are simply going to display which users are using the /usr/sbin/nologin shell.
Here are the files I created in the cgi-bin directory in order to display that data as a web page.
Header file

<!doctype html>
<html lang="en">
  <head>
    <!-- Required meta tags -->
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">

    <!-- Bootstrap CSS -->
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">

    <title>Hello, world!</title>
  </head>
  <body>
    <h1>All the users using the /usr/sbin/nologin shell</h1>
 
 <table class="table">
  <thead>
    <tr>
      <th scope="col">Name</th>
      <th scope="col">User Id</th>
      <th scope="col">Group Id</th>
    </tr>
  </thead>
  <tbody>
Footer file

</tbody>
</table>

    <!-- Optional JavaScript -->
    <!-- jQuery first, then Popper.js, then Bootstrap JS -->
    <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js"></script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js"></script>
  </body>
</html>

hello.sh

#!/bin/bash
echo "Content-type: text/html"
echo ""
cat header
# build one table row per user whose login shell is /usr/sbin/nologin
awk -F ':' '{if($7 == "/usr/sbin/nologin"){print "<tr><td>"$1"</td><td>"$3"</td><td>"$4"</td></tr>"}}' /etc/passwd
cat footer 
So let's just see what all those files are.
The header and footer files basically contain the Bootstrap starter template, which gives you a pre-built web page skeleton. In hello.sh we print those files with cat, and in between we run a shell command that extracts the users using the /usr/sbin/nologin shell and turns each of them into a table row with awk.
So now when you hit the same URL, the output will look like this:

Now I guess we have the basic idea of how we can use a shell script to serve the web pages we need. We can also use it as an API, since it can return JSON as well; it's up to you how far you take it.
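As a hedged illustration of the JSON idea (the file name sysinfo.sh and the fields chosen are made up for this example):

#!/bin/bash
# sysinfo.sh - return a few system metrics as JSON
echo "Content-type: application/json"
echo ""
uptime_s=$(cut -d ' ' -f1 /proc/uptime)
memfree_kb=$(awk '/MemFree/ {print $2}' /proc/meminfo)
printf '{"hostname":"%s","uptime_seconds":%s,"memfree_kb":%s}\n' "$(hostname)" "$uptime_s" "$memfree_kb"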

Summary

So, in this blog, we saw how, with the bare minimum, we can get the most out of a system. It is not limited to these use cases: it can be used to build an API that returns valuable information about the system or the services running on it. With some good scripting and some clever HTML templating, we can achieve a lot.

Linux Namespaces – Part 2

Before talking about the types of namespaces, we are assuming that you have gone through the first part of Linux Namespaces; if not, you can check it here.

 

Types of Namespaces

So basically we have seven types of Linux namespaces:
  1. Cgroup:- The cgroup namespace virtualizes the view of a process's cgroups as seen in /proc/[pid]/cgroup. When a process creates a new cgroup namespace, its current cgroup directories become the cgroup root directories of the new namespace. So we can say that it isolates the cgroup root directory.
  2. IPC (Inter-Process Communication):- This namespace isolates inter-process communication resources. For example, in Linux we have System V IPC (a communication mechanism) and POSIX message queues, which allow processes to exchange data. So in simple words, we can say that the IPC namespace isolates communication.
  3. Network:- This namespace isolates network-related resources, for example network devices, IP addresses, routing tables and firewall rules (that's why each container can bind the same port for its own service).
  4. Mount:- This namespace isolates the set of mount points that can be seen by processes in each namespace. In simple words, each mount namespace has its own view of the filesystem hierarchy, so a device or partition mounted in one namespace is not visible in another.
  5. PID:- This namespace isolates process IDs. Child processes cannot see or trace processes in the parent namespace, but the parent namespace can see and trace the child processes, and processes in different PID namespaces can have the same PID.
  6. User:- This namespace isolates security-related identifiers like user IDs and group IDs. In simple words, a process can have full privileges inside the namespace while remaining unprivileged outside it.
  7. UTS:- This namespace isolates the hostname and domain name. Processes get a separate copy of the hostname and domain name, so changing them inside the namespace does not affect the rest of the system (see the quick demo just after this list).
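A hedged way to see one of these in action without writing any code is the unshare(1) wrapper from util-linux; here is a quick UTS namespace demo (the hostname used is made up):

sudo unshare --uts bash      # start a shell in a new UTS namespace
hostname isolated-host       # change the hostname, visible only inside this shell
hostname
exit
hostname                     # back outside, the original hostname is untouched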

Namespace Management

This is the most advanced topic of Linux namespaces, since it works at the system-call level; for namespace management you typically write a C program.
For management of namespaces, we have these functions available in Linux:-
  • clone():- A standalone clone() only creates a new process, but if we pass one or more CLONE_NEW* flags, the corresponding new namespaces are created and the child process becomes a member of them.
  • setns():- This allows a process to join an existing namespace. The namespace is specified by a file descriptor referring to one of the namespace files of a process (under /proc/[pid]/ns).
  • unshare():- This allows the calling process to disassociate parts of its execution context that are currently shared with other processes, for example the mount namespace.
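From the shell you can at least see the handles that setns() consumes, and nsenter(1) wraps setns() for you (the PID 1234 below is a made-up example):

# each entry under /proc/<pid>/ns is a namespace handle; setns() takes a file descriptor opened on one of these
ls -l /proc/$$/ns
# run a command inside the UTS namespace of an existing process
sudo nsenter --target 1234 --uts hostname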

Redis Best Practices and Performance Tuning

One of the things that I love about my organization is that you don't have to do the same repetitive work; you always get the chance to explore new technologies. Such a chance came my way a few days back, when one of our clients was facing issues with Redis.
They were using a Redis Cluster with Sentinel and were facing performance problems: whenever the number of connection requests was high, the Redis Cluster was not able to bear the load.
They were using a decent server configuration in terms of CPU and memory, but the result was the same. So now what?
The answer was to tune the performance.

There are plenty of Redis performance articles out there, but I wanted to share my experience as a DevOps engineer by writing an article that covers the most essential and important things a developer or a DevOps engineer needs.

So let’s get started.

TCP-KeepAlive

Keepalive is a method that lets the client and Redis reuse the same TCP connection for multiple requests instead of opening a new one with each request, and it also lets Redis detect and drop dead connections.

In simple words, if keepalive is off, Redis will open a new connection for every request, which will slow down its performance. If keepalive is on, Redis will use the same TCP connection for requests.

Let's see the graph for more details: the red bar shows the output when keepalive is on, and the blue bar shows the output when keepalive is off.

To enable TCP keepalive, edit the Redis configuration and set tcp-keepalive to a non-zero period in seconds (0 disables it; 300 is the value the Redis documentation suggests).

vim /etc/redis/redis.conf
# set a non-zero keepalive period in seconds (0 disables keepalive)
tcp-keepalive 300

Pipelining

This feature could be your lifesaver in terms of Redis performance. Pipelining allows a client to send multiple requests to the server without waiting for each reply, and then read all the replies in a single step.

For example, instead of issuing commands one at a time and waiting for each reply, the client sends them in a batch and reads all the replies at the end.
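A hedged, minimal way to see the effect from the command line is redis-cli's --pipe mode (the keys and values here are made up):

# send a batch of commands in one round trip instead of one network hop each
printf 'SET key:1 a\nSET key:2 b\nINCR counter\n' | redis-cli --pipe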

You can see the difference in the graph as well.

Pipelining increases the performance of Redis drastically.

Max-Connection

Max-connection is the parameter used to define the maximum connection limit to the Redis server. You can set that value according to your server specification with the following steps.

sudo vim /etc/rc.local

# make sure this line comes just before exit 0
sysctl -w net.core.somaxconn=65535

This change takes effect after a reboot; if you don't want to reboot the server, execute the same sysctl command in the terminal itself.

Overcommit Memory

Overcommit memory is a kernel parameter that controls whether the kernel checks that enough memory is available before granting allocations. If the overcommit memory value is 0, there is a chance that Redis will fail with an OOM (Out of Memory) error. So do me a favour and change its value to 1 using the following step:

echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
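The line above only takes effect on the next boot or sysctl reload; to apply it immediately you can also run the standard sysctl command:

# apply the setting right away without a reboot
sudo sysctl -w vm.overcommit_memory=1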

RDB Persistence and Append Only File

RDB persistence and Append Only File (AOF) are the options used to persist data on disk. If you are using the cluster mode of Redis with replicas and do not need on-disk durability, RDB persistence and AOF may not be required; in that case, simply comment out these lines in redis.conf:

sudo vim /etc/redis/redis.conf

# Comment out these lines
save 900 1
save 300 10
save 60 10000

rdbcompression no
rdbchecksum no

appendonly no

Transparent Huge Page(THP)

Most people are not aware of this term. Basically, to translate between virtual and physical memory the kernel uses paging, and Transparent Huge Pages is a feature meant to make that memory mapping more efficient by using larger pages. In practice, though, it slows down memory-based databases such as Redis. To overcome this issue you can disable THP.

sudo vim /etc/rc.local
# Add this line before exit 0
echo never > /sys/kernel/mm/transparent_hugepage/enabled

The graph also shows the difference in performance: the red bar shows performance with THP disabled and the blue bar with THP enabled.

Some Other Basic Measures in Redis Configuration

  • maxmemory – 70% of the system memory – maxmemory should be about 70 percent of the system memory so that Redis does not take all the resources of the server.
  • maxmemory-policy – volatile-lru – when maxmemory is reached, evicts the least recently used keys among those that have an expiry set.
  • loglevel – notice – the log level should be "notice" so that logging does not consume too many resources.
  • timeout – 300 – there should be a timeout value in the Redis configuration so that Redis does not spend resources on dead connections; it closes a client connection if it stays idle for more than 300 seconds.
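Put together, the corresponding redis.conf lines would look roughly like this (the maxmemory figure is just an example for a 16 GB server; adjust it to roughly 70% of yours):

# /etc/redis/redis.conf
maxmemory 11gb
maxmemory-policy volatile-lru
loglevel notice
timeout 300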

So now your Redis is ready to deliver killer performance. In this blog, we have discussed Redis best practices and performance tuning.
There are more factors yet to be explored to enhance the performance of Redis; if you find them before I do, please let me know so I can improve this blog.

In my next blog, I will discuss how we can do Redis performance testing and how we are doing it in our organisation.

Resolving Segmentation Fault (“Core dumped”) in Ubuntu

This error may strike your Ubuntu system at any moment. A few days ago, while doing my routine work on my Ubuntu laptop, I suddenly encountered the error "Segmentation fault (core dumped)", and I learned that this error can hit Ubuntu, or any other operating system, at any point, since binaries crashing doesn't depend on us. A segmentation fault happens when a program tries to access memory it is not allowed to, for example writing to a read-only or unmapped location; "core dumped" means the kernel wrote an image of the process's memory to a core file for later debugging. Segfaults are generally associated with that file, named core, and in this context they often show up during package upgrades.
 
 

While running some commands during the core-dump situation you may also encounter "Unable to open lock file"; this happens because the system is trying to grab a lock that no longer exists, due to the crashed binaries of some specific programs.

You may do backtracking or debugging to narrow it down, but the practical solution is to repair the broken packages, which we can do by performing the steps below:

Command-line:
Step 1: Remove the lock files present at different locations, then restart your system.

sudo rm -rf /var/lib/apt/lists/lock /var/cache/apt/archives/lock /var/lib/dpkg/lock

Step 2: Remove repository cache.

sudo apt-get clean

Step 3: Update and upgrade your repository cache.

sudo apt-get update && sudo apt-get upgrade

Step 4: Now upgrade your distribution; it will update your packages.

sudo apt-get dist-upgrade

Step 5: Find the broken packages and purge them forcefully.

dpkg -l | grep ^..r | awk '{print $2}' | xargs sudo apt-get purge -y

Apart from the command line the best way which will always work is:

Step 1: Boot Ubuntu into the GRUB menu by pressing the Esc key right after restart.

Step 2: Select Advanced options for Ubuntu

 

Step 3: Run Ubuntu in recovery mode and you will be presented with several options.

 
 

Step 4: First select “Repair broken packages”

 

Step 5: Then select “Resume normal boot”

So, we have two methods of resolving a segmentation fault: the CLI and the GUI. Sometimes the apt command itself is broken because of the segfault, so the CLI method will not work; in that case don't worry, the GUI method will always work for us.