Where there is a shell, There is a way.

Well, as a DevOps engineer, I like to play around with shell scripts and shell commands, especially on remote systems, as it adds some level of fun to the work. But what's more thrilling than running shell scripts and commands on a remote server? Making them return dynamic web pages or JSON from that remote system.

Yes, for most of us it comes as a surprise that, just like PHP, JSP, or ASP, shell scripts can also return dynamic web pages. But, as a wise man said long ago: "where there is a shell, there is a way".

Isn’t PHP or JSP a better option for web development?

For a web developer … yes. But as a DevOps engineer, I want to do everything possible from a shell script, and having shell as a server-side language is quite useful, as we all know the power of shell scripts.

Why do we need this exactly?

Isn't 'for fun' an obvious reason? But for those who want more than that, I have some points:

  • We can use it as a time-series data exporter.
  • We might want an API that returns system info as JSON on a machine where we don't have access to PHP.
  • We might want to see the system information as a web page when we hit a URL.
  • It's not limited to system info; you can serve whatever you want from it.
  • With the bare minimum on your machine, you can get the maximum out of it.

Let’s get started

Now let's get done with the boring part, i.e. configuring Apache.
I am assuming that Apache is already installed on the system, as it is needed in order to serve your web pages. So, in order to let Apache serve your scripts, you need to enable the CGI module with two simple commands:
$ cd /etc/apache2/mods-enabled
$ sudo ln -s ../mods-available/cgi.load
and you are ready to go.
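On Debian/Ubuntu systems the same thing can also be done in one step with the a2enmod helper:
$ sudo a2enmod cgi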
Now move to the directory where you are going to put your shell scripts.
$ cd /usr/lib/cgi-bin
Once in the directory, create a new file, hello.sh:
$ vim hello.sh
and write the following script:
#!/bin/bash
# A CGI script prints the HTTP headers, then a blank line, then the body
echo "Content-type: text/html"
echo ""
echo "hello world! from shell script"
Make sure you make that file executable:
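$ sudo chmod +x hello.sh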
By now, I think you have a pretty good idea of what your webpage is going to display.
So restart the Apache server:
$ sudo systemctl restart apache2.service
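Assuming Apache's default ScriptAlias, which maps /cgi-bin/ to /usr/lib/cgi-bin/, you should now see the script's output at http://<server-ip>/cgi-bin/hello.sh:
$ curl http://localhost/cgi-bin/hello.sh
hello world! from shell script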

Let’s take it to the next level

Now let's see what else we can do. Unlike PHP, Java, or Python, we don't have any framework for shell scripts, so we have to do a bit more work ourselves. But that's the fun part, right?
So let’s get started

Now we are simply going to display which users are using the /usr/sbin/nologin shell.
So here are the files that I created in the cgi-bin directory in order to display that data as a web page.
Header file

<!doctype html>
<html lang="en">
  <head>
    <!-- Required meta tags -->
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">

    <!-- Bootstrap CSS -->
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">

    <title>Hello, world!</title>
  </head>
  <body>
    <h1>All the users using the /usr/sbin/nologin shell</h1>
 
 <table class="table">
  <thead>
    <tr>
      <th scope="col">Name</th>
      <th scope="col">User Id</th>
      <th scope="col">Group Id</th>
    </tr>
  </thead>
  <tbody>
Footer file

</tbody>
</table>

    <!-- Optional JavaScript -->
    <!-- jQuery first, then Popper.js, then Bootstrap JS -->
    <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js"></script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js"></script>
  </body>
</html>

hello.sh

#!/bin/bash
echo "Content-type: text/html"
echo ""
# Static top of the page
cat header
# One table row per user whose login shell is /usr/sbin/nologin:
# $1 = name, $3 = user id, $4 = group id
awk -F ':' '{if($7 == "/usr/sbin/nologin"){print "<tr><td>"$1"</td><td>"$3"</td><td>"$4"</td></tr>"}}' /etc/passwd
# Static bottom of the page
cat footer
So let's just see what all those files are.
The header and footer files contain the Bootstrap starter template, which gives you a pre-built web page skeleton. In hello.sh we emit those files using cat, and in between we run a shell command that finds the users on the /usr/sbin/nologin shell and turns each one into a table row using awk.
So now when you hit the same URL, the users show up as rows of a Bootstrap-styled table.

Now I guess we have the basic idea of how we can use a shell script to display the web pages we need. We can also use it as an API, since it can return JSON as well. It's up to each individual how far to take it.
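As an illustration, here is a minimal sketch of such an API: a hypothetical sysinfo.sh, dropped in the same cgi-bin directory, that returns basic system information as JSON.

#!/bin/bash
# Return basic system information as a JSON document
echo "Content-type: application/json"
echo ""
echo "{"
echo "  \"hostname\": \"$(hostname)\","
echo "  \"kernel\": \"$(uname -r)\","
echo "  \"uptime\": \"$(uptime -p)\""
echo "}"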

Summary

So, in this blog, we saw how with a bare minimum we can get a lot out of a system. It is not limited to these use cases; it can be used to create an API which returns valuable information about the system or the services running on it. With some good scripting and some clever HTML templating, we can achieve a lot.

Let's Get Started With Packer

In this blog post, we will see how to get started with Packer. We will cover installation and writing a template for creating an AWS AMI. To get a basic understanding of how Packer works, you can refer to our previous blog, "Intro To Packer".
Installation 
  1. The official method is to download Packer as a precompiled binary; HashiCorp does not provide system packages, nor do they have any plan to make them available as such:-
     $ curl -LO https://releases.hashicorp.com/packer/1.4.0/packer_1.4.0_linux_amd64.zip
  2. After downloading the binary, unzip it to the location where you want to keep it. If you want it installed so that it can be used by users system-wide, do not unzip it in user space:-
     $ sudo unzip packer_1.4.0_linux_amd64.zip -d /usr/local/packer
  3. After unzipping the package, the directory should contain a single binary program called packer.
  4. The final step of the installation is to make sure the directory you installed Packer to is on the PATH, so that it can be used from the command line. Open /etc/environment and append the below line to the end of the file:-
     export PATH="$PATH:/usr/local/packer"
     After adding the line, source the environment file to let the change reflect:-
     $ source /etc/environment
  5. Verify the installation by firing the packer command, or simply check its version:-
     $ packer --version
     You should see the version of Packer as output.
Once installed, running Packer is as simple as packer build, which takes the build file and runs the steps we provide within it. Let's get started with a simple build file.
 
Setting Up the Stage
 
As we are building an image for the AWS cloud, there are certain prerequisites which need to be taken care of.
You should have an IAM user who has access to create and destroy EC2 instances, create an AMI, create and destroy security groups, etc. You can find a sample IAM policy for the Packer user in the sample minimum IAM user policy for Packer.
 
After setting up your IAM user for Packer, generate the access key ID and secret access key and save them.
Having noted the keys, you can either use them directly in your template (which is not suggested), or you can configure them as environment variables or in the AWS CLI config on the machine where you have Packer installed.
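For reference, the environment-variable approach looks like this (placeholder values shown):

export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx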
 
I have configured it with the AWS CLI config, so I did not have to define the keys in the variables section or in the builders section. You can also pass your access keys as variables while running the packer build command (using the -var flag, provided your template declares matching variables).
Here we will be installing the Apache webserver in the image. I have named the json file httpd.json and used an httpd.sh script under the provisioners section to install the webserver.
 
 
Below is the sample httpd.json file
 
{
  "variables": {
    "ami_id": "ami-0a574895390037a62",
    "app_name": "httpd"
  },

  "builders": [{
    "type": "amazon-ebs",
    "region": "ap-south-1",
    "vpc_id": "vpc-df95d4b7",
    "subnet_id": "subnet-175b2d7f",
    "source_ami": "{{user `ami_id`}}",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "PACKER-DEMO-{{user `app_name`}}",
    "tags": {
      "Name": "PACKER-DEMO-{{user `app_name`}}",
      "Env": "DEMO"
    }
  }],

  "provisioners": [
    {
      "type": "shell",
      "script": "httpd.sh"
    }
  ]
}

 
Below is the simple httpd.sh
 
#!/bin/bash

sudo apt-get update
# On Ubuntu the Apache package is apache2; apt has no installable httpd package
sudo apt-get install -y apache2

 
First, validate your template by firing the below command:-
packer validate httpd.json
 
You should get success as the output, or an error indicating the offending line number.
 
Now run packer build to build your image:-
 
packer build httpd.json
 
After a successful build, you will get the AMI id and a success message as output:
 
==> amazon-ebs: Prevalidating AMI Name: PACKER-DEMO-httpd
    amazon-ebs: Found Image ID: ami-0a574895390037a62
==> amazon-ebs: Creating temporary keypair: packer_5cd559df-84ce-ff8a-fa93-0c4477d988e4
==> amazon-ebs: Creating temporary security group for this instance: packer_5cd559e2-ea81-be15-b94a-c28493c0d3ff
==> amazon-ebs: Authorizing access to port 22 from [0.0.0.0/0] in the temporary security groups...
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Adding tags to source instance
    amazon-ebs: Adding tag: "Name": "Packer Builder"
    amazon-ebs: Instance ID: i-06ed051a3435865c4
==> amazon-ebs: Waiting for instance (i-06ed051a3435865c4) to become ready...
==> amazon-ebs: Using ssh communicator to connect: *.*.*.*
==> amazon-ebs: Waiting for SSH to become available...
==> amazon-ebs: Connected to SSH!
==> amazon-ebs: Stopping the source instance...
    amazon-ebs: Stopping instance
==> amazon-ebs: Waiting for the instance to stop...
==> amazon-ebs: Creating AMI PACKER-DEMO-httpd from instance i-06ed051a3435865c4
    amazon-ebs: AMI: ami-0ce41081a3b649374
==> amazon-ebs: Waiting for AMI to become ready...
==> amazon-ebs: Adding tags to AMI (ami——)...
==> amazon-ebs: Tagging snapshot: snap-0ee3ce80ec289ed24
==> amazon-ebs: Creating AMI tags
    amazon-ebs: Adding tag: "Name": "PACKER-DEMO-httpd"
    amazon-ebs: Adding tag: "Env": "DEMO"
==> amazon-ebs: Creating snapshot tags
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Cleaning up any extra volumes...
==> amazon-ebs: No volumes to clean up, skipping
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
Build 'amazon-ebs' finished.

==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
ap-south-1: ami——————–

 
A few things to keep in mind:-

  • Packer does not create the image of any running instance; instead, it spins up a temporary instance, creates the image from it, and after image creation destroys all the resources it created along the way.
  • Though Packer gives us the ease of taking machine AMIs programmatically, purging of older images should also be kept in mind, because AMIs get stored as snapshots over S3 and might add to your cost.
  • Though rollback becomes a lot easier in immutable infra, it can become a pain in the neck if you frequently make changes in production.
  • We cannot expect it to solve all our problems; its only job is to create an image. You will have to decide when to create an image and what post actions need to be taken or deployed after image creation.
I hope the above setup helps you in getting started with it. Later we will discuss how we can use it along with Ansible and Terraform to achieve immutable infra.
I appreciate any suggestions and comments, or any questions/doubts you face while implementing it.

Intro to Packer

Packer is an open-source tool developed by HashiCorp to create machine images for multiple cloud platforms like AWS, GCP, Azure, or even VMware. As the name suggests, it packs all your software, packages, and configuration while baking your machine images. Packer is perhaps the only tool on the market right now which solely focuses on creating machine images, giving us the ability to automate the machine image creation process.

In this blog post, we will learn what Packer does and how it does things. Sounds interesting!!!!

 
What is Packer and Machine Images
 
"Packer can be used to create identical machine images for multiple platforms from a single source configuration. Packer is lightweight, runs on every major operating system, and is highly performant, creating machine images for multiple platforms in parallel."
https://www.packer.io/intro/ 
It installs and configures software within your Packer-made images using SCM tools such as Ansible, Chef, or Puppet, or plain shell scripts. You can either include your scripts in the json template itself or source them from a file.
 
"A machine image is a single static unit that contains a pre-configured operating system and installed software which is used to quickly create new running machines. Machine image formats change for each platform. Some examples include AMIs for EC2, VMDK/VMX files for VMware, OVF exports for VirtualBox, etc."
https://www.packer.io/intro/ 
 
Why the Heck Should We Consider Learning Packer!!!!!!!!
 
Consider a couple of scenarios mentioned below:-
 
Scenario 1
 
Suppose you want to have an immutable infrastructure in place. The key guideline behind an immutable infrastructure is that you never modify a running server: if a change is required, you instead completely replace the server with a new instance that contains the update or change.
The new server instance is created from an origin image that is built upon, or restored from, a previously defined server state. Version control and tag your images for easy rollback and distribution. The image contains all the application code, runtime dependencies, and configuration: in essence, the state needed for the software to run as expected. You will want to minimize the time required to bake all your required stuff into your image, which can be achieved if you maintain proper tags on your previous release images; these can be used as origin golden images to bake the new image. The entire process of baking and using images becomes outstandingly easy with Packer.
 
Scenario 2
 
If you have autoscaling in place, there must be a requirement to scale up new serviceable VMs as soon as possible, but a few steps spoil your expectations of getting serviceable VMs in the least time:
  • OS boot
  • OS configuration
  • SCM with Ansible or Chef
  • Setting up your application
With a pre-baked image in place, the time to scale up your VMs will drastically decrease.
 
So How Does Packer Work!!!!!!!!
 
Packer uses a JSON file as a template. It takes the template as input, rolls up a temporary VM based on the details provided, does the required configuration, and stops the VM. After stopping the VM, it creates the image and saves it under the name/tags provided in the template.
 
[Diagram: JSON template file → Packer engine → EC2 AMI]
Basic Concepts of Packer
 
There are two things which you will need to know to get started with packer:
  • Templates 
  • Commands
 
Templates
 
There are four sections in a Packer template (a skeleton combining them follows the list):
  • Variables (optional): an object of one or more key/value strings that defines user variables contained in the template. If not specified, no variables are defined.
  • Builders (required): an array of one or more objects that defines the builders that will be used to create machine images for this template, and configures each of those builders.
  • Provisioners (optional): an array of one or more objects that defines the provisioners that will be used to install and configure the software for the machines created by each of the builders.
  • Post-processors (optional): an array of one or more objects that defines the various post-processing steps to take with the built images. If not specified, no post-processing will be done.
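As a rough sketch, the four sections fit together like this (the builder's required fields are elided for brevity, and the manifest post-processor is just one example):

{
  "variables": { "app_name": "demo" },
  "builders": [
    { "type": "amazon-ebs" }
  ],
  "provisioners": [
    { "type": "shell", "script": "setup.sh" }
  ],
  "post-processors": [ "manifest" ]
}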
Sub-Commands
 
Like other Unix tools, packer also takes sub-commands and options. There are three sub-commands (example invocations follow the list):
  • build: The packer build command takes a template and runs all the builds within it in order to generate a set of artefacts. The various builds specified within a template are executed in parallel, unless otherwise specified, and the artefacts that are created are outputted at the end of the build.
  • validate: The packer validate command is used to validate the syntax and configuration of a template. The command returns a zero exit status on success, and a non-zero exit status on failure. Additionally, if a template doesn't validate, any error messages will be outputted.
  • inspect: The packer inspect command takes a template and outputs the various components the template defines. This can help you quickly learn about a template without having to dive into the JSON itself. It will tell you things like what variables the template accepts, the builders it defines, the provisioners it defines and the order they'll run, and more.
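For example, against the httpd.json template from the companion post, the three sub-commands are fired like this:

packer validate httpd.json
packer inspect httpd.json
packer build httpd.json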

Hope this blog helps you understand the basics of Packer. Having covered the basics, we can now "Get Started With Packer".

ANSIBLE DYNAMIC INVENTORY: IS IT SO HARD?

Wondering what dynamic inventory is all about? Once you are done with this blog, you will know exactly what it is. Till one month ago, I was of the opinion that Dynamic Inventory is a cool way of managing your AWS infrastructure: you don't have to track your servers, you just have to apply proper tags and Ansible Dynamic Inventory magically manages the inventory for you. Having said that, I was not really comfortable using dynamic inventory, as it was a black box; I tried going through the Python script, which was very cryptic and difficult to understand. If you are of the same opinion, then this blog is worth reading, as I will try to demystify how things work in Dynamic Inventory and how you can implement your own dynamic inventory using a very simple Python script.

You can refer to the below article if you want to implement dynamic inventory for your AWS infrastructure.

https://aws.amazon.com/blogs/apn/getting-started-with-ansible-and-dynamic-amazon-ec2-inventory-management/

Now coming to what a dynamic inventory is and how you can create one. First, you have to understand what Ansible accepts as an inventory. Ansible expects JSON in the format shown below, which is the bare minimum content required: a dictionary of groups (each group having a list of hosts under group>hosts, and group variables in the group>vars dictionary), and a _meta dictionary that stores host variables for all hosts individually (inside a hostvars dictionary).
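Here is a minimal sketch of that structure (the group name, IPs, and variable values are illustrative):

{
    "webservers": {
        "hosts": ["10.0.0.1", "10.0.0.2"],
        "vars": {"ansible_user": "ubuntu"}
    },
    "_meta": {
        "hostvars": {
            "10.0.0.1": {"ansible_ssh_private_key_file": "~/.ssh/key1.pem"},
            "10.0.0.2": {"ansible_ssh_private_key_file": "~/.ssh/key2.pem"}
        }
    }
}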

So, as long as you can write a script which generates output in the above JSON format, Ansible won't give you any trouble. So let's start creating our own custom inventory.

I have created a python script, customdynamicinventory.py, which reads the data from input.csv and generates the JSON as mentioned above. For simplicity, I have kept my input.csv as simple as possible. You can find the code here:-

https://github.com/SUNIL23891YADAV/dynamicinventory.git

If you want to test it, just clone the code and replace the IP, user, and key details as per your environment in the input.csv file. To make sure that our python script generates the output in the standard JSON format expected by Ansible, you can run ./customdynamicinventory.py --list
It will generate the output in the standard JSON format, just like the structure shown earlier.
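For reference, a minimal version of such a script could look like the sketch below (the ip,user,key column layout is an assumption for illustration; the actual code in the repo may differ):

#!/usr/bin/env python3
# Minimal Ansible dynamic inventory sketch.
# Reads input.csv with rows of the form: ip,user,key
# and prints the JSON structure Ansible expects.
import csv
import json
import sys

def build_inventory(csv_file="input.csv"):
    inventory = {"all": {"hosts": [], "vars": {}}, "_meta": {"hostvars": {}}}
    with open(csv_file) as f:
        for row in csv.reader(f):
            if not row:
                continue  # skip blank lines
            ip, user, key = row[:3]
            inventory["all"]["hosts"].append(ip)
            inventory["_meta"]["hostvars"][ip] = {
                "ansible_user": user,
                "ansible_ssh_private_key_file": key,
            }
    return inventory

if __name__ == "__main__":
    # Ansible calls the script with --list; because _meta is included,
    # it never needs to call --host for individual hosts.
    if "--list" in sys.argv:
        print(json.dumps(build_inventory()))
    else:
        print(json.dumps({}))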




If you want to check how the static inventory file would have looked for the same scenario, see below; it would have served the same purpose as the above dynamic inventory.
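For the same illustrative hosts, the equivalent static inventory would be:

[all]
10.0.0.1 ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/key1.pem
10.0.0.2 ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/key2.pem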

Now, to make sure your custom inventory is working fine, you can run

ansible all -i customdynamicinventory.py -m ping

It will try to ping all the hosts mentioned in the CSV. Let's check it.

If every host comes back with a "pong" response, it is working; that's how easy it is.

Instead of a static CSV file, we can have a database where all the hosts and related details are updated dynamically. The Ansible dynamic inventory script can then use this database as an inventory source, as long as it returns the JSON structure shown at the start.

Redis Load Testing

As I mentioned in my previous blog on Redis Best Practices, in an upcoming blog I would discuss load testing on Redis. So here I am, ready with my blog, in which I will explain how we can measure Redis performance. Although there are plenty of articles out there on this topic, I want to share my experience as a DevOps engineer, as well as the methods we are implementing at our organization.

So, load testing: what and why?

The first thought which comes to mind is why we need load testing at all, as our environment is already working fine, or has been working fine since its first launch.

But hey!!! Let me tell you something: it's not as simple as that. As we know, everything has its own limits, and knowing your limits is always helpful. When we are not ready to face the problem of increasing load, our environment can easily collapse. There is a saying as well:

Prevention is better than cure

In simple words, it means it's easier to stop something bad from happening in the first place than to repair the damage after it has happened.

So when I decided to test Redis performance, it was not quite easy to start: there are plenty of load testing frameworks out there, but which to choose? So I did a comparison between various load testing frameworks. JMeter, a popular load testing tool, was already available, but I found it a bit complex to learn rapidly, and it requires a hell of a lot of resources. So I chose an awesome Python load testing framework, Locust, which is very lightweight and easy to set up.

The golden rule before starting any load testing is that you should have a metric or a number in mind which you want to achieve.

So I will be using our own organization's load testing utility, which we created for Redis load and performance testing.

You can find the code at- https://github.com/opstree/redis-load-test

So you can simply clone the git repo like this:-

git clone https://github.com/opstree/redis-load-test.git

As Locust is a Python-based project, it doesn't have a long dependency list, but it surely does have some dependencies. So after cloning the repo, we have to install them:-

cd Scripts
pip3 install -r requirments.txt

Once the dependency hurdle is crossed, we can move to the step in which we connect our utility to Redis. To achieve this, we have a file called redis.json in the Scripts folder of the repo; you just have to update the Redis details in that file. For example:-

{
    "redis_host": "10.1.1.100",
    "redis_port": "6379",
    "redis_password": ""
}

Once you are done with the connection details, kaboom, you are all set to use the performance testing utility. Just go to the terminal and run this command:-

locust -f redis_get_set.py

Locust will start up and report that its web interface is listening (by default on port 8089).

Now go and open up the URL in the browser: http://your_ip:8089

The UI page presents a form with two empty blocks:-

Number of users to simulate:- the total number of concurrent user connections which you want to make.

Hatch Rate:- how quickly you want to spawn those users.

After filling in these details, you can simply start swarming and wait until the run completes. Once the execution is complete, Locust presents the details of every request type: counts, failures, and response-time statistics.
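If you prefer to skip the UI, recent Locust releases can also run headless from the command line (flag names vary between versions, so check locust --help for yours):

locust -f redis_get_set.py --headless -u 100 -r 10 --run-time 1m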

This page presents the data in the form of statistics, but you can also see the data in a beautiful graph format in the Charts tab of the same UI.

I have also provided detailed information in the README file of the repo.

One of the benefits which I feel is important in doing load testing is that you can set up a performance baseline according to your environment. And yes, if you are not getting the desired Redis performance, you can check out our blog on Redis Best Practices and Performance Tuning here.

The main idea of writing this blog was to encourage people to know the limitations of their environment and to make it ready for any kind of challenge.

I hope I explained everything clearly enough to understand. If you do have any questions or suggestions, please feel free to ask.

Cheers Till Next Time!!!