Tuesday, June 11, 2019

What Without Internet


What without Internet?




I had a dream a few days ago in which the internet had ceased to exist. When I woke up, I started thinking: what would happen if there were no internet for a day?


Sure, it would cause quite a bit of panic and uproar, and it would be havoc for organizations to work without the internet, but if the internet resumed normally after the 24 hours were over, things would return to normal pretty quickly.


Now, switch it off for a longer time, say a week or a month, and the impact would be more lasting, since in that time a significant number of people would find themselves unable to meet their obligations or do business at all. This would be somewhat mitigated by the fact that the situation is a sort of 'natural disaster', but still, those who really depend on the internet for their business would likely feel a lasting negative impact.
               
What if I told you there are organizations that already work as if there were no internet? It's true: for security reasons they prefer not to use the public internet. Banks, space agencies, and many security organizations fall into this category.


Now a question arises: how do they manage regular updates and the installation of packages on their various systems? The answer is quite simple: by using a satellite server.


Recently I got a task related to this, in which:

1. The prerequisite is that your system has no internet connectivity, but one of the systems you can connect to does.
2. Set up an individual satellite server in your local network.
3. Install packages and regular updates.


To do this, I prefer to use an FTP satellite server.


How to implement an FTP satellite server

Pre-requisites

An Ubuntu server and a non-root user with sudo privileges.
The system is configured with vsftpd.

Suppose we are installing Jenkins.


Make a directory pkg.jenkins.io in /var/www/html/
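A rough sketch of how this directory can be populated (the paths are assumptions; the mirroring happens on the machine that does have internet access, and the result is then copied over to the satellite server):

# On the internet-connected machine: mirror the Jenkins Debian repository
$ wget --mirror --no-parent -P /tmp/mirror https://pkg.jenkins.io/debian-stable/

# On the satellite server: create the directory and drop the mirrored files into it
$ sudo mkdir -p /var/www/html/pkg.jenkins.io
$ sudo cp -r /tmp/mirror/pkg.jenkins.io/debian-stable /var/www/html/pkg.jenkins.io/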




Contents of pkg.jenkins.io
Add the host link of the Debian repository to a file under /etc/apt/sources.list.d
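For example (192.168.1.10 is a hypothetical address of the satellite server; adjust it and the path to match your setup, and use an ftp:// URI instead if you serve the mirror through vsftpd rather than over HTTP):

$ echo "deb [trusted=yes] http://192.168.1.10/pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list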




 

 Run the command sudo apt-get update





 Now run the command for installation of the package
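With the local repository entry in place, the install itself should just be the usual apt command (jenkins being the package name published in the Jenkins repository):

$ sudo apt-get install -y jenkins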

 



“ The internet made fame wack and anonymity cool ”

So far, from the above context, we have learned about setting up FTP for users with a local account. If you need to use an external authentication source, you might want to look into vsftpd's support for virtual users. This offers a rich set of options through the use of PAM, the Pluggable Authentication Modules, and is a good choice if you manage users in another system such as LDAP or Kerberos.

I hope I explained everything clearly enough. If you have a better way of implementing a satellite server, please help me improve this blog.

Thanks for reading my writing. I’d really appreciate any kind of feedback in the comments.

Cheers till next time!!!!

Tuesday, June 4, 2019

Redis Zero Downtime Cluster Migration

A few days back I came across the problem of migrating a Redis master-slave setup to a Redis cluster. Initially, I thought it would be a piece of cake since I had already been working with Redis, but there was a hitch: "zero downtime migration". Also, Redis was being used as a database, not as a caching server. So I started thinking of different ways of migrating a Redis master-slave setup to a Redis cluster, and finally I came up with a migration plan.
Before we jump to the migration, I want to give an overview of when we can use Redis as a database and how to choose between the two setups: master-slave or cluster mode.

Redis as a Database

Sometimes getting data from disk can be time-consuming. To increase performance, we can keep the data that needs to be served first, or rapidly, in Redis memory, while the rest of the data stays in the main database. The whole architecture then looks like this:

(Figure: Redis as a database in front of the main data store)

Redis Master-Slave Replication

Let's begin with Redis master-slave replication. Here Redis can replicate data to any number of slave nodes, i.e. each slave holds an exact copy of its master. This helps with performance optimization, since reads can be spread across the replicas.

I bet you can now see how Redis can be used as a database.
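As a minimal sketch (the host names are hypothetical), a slave can be pointed at its master either in its redis.conf or at runtime:

# In the slave's redis.conf
slaveof redis-master.example.local 6379

# Or at runtime, without restarting the slave
$ redis-cli -h redis-slave.example.local slaveof redis-master.example.local 6379

In newer Redis versions the same directive/command is also available as REPLICAOF.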

Redis Cluster

A Redis cluster is simply a data sharding strategy. It automatically partitions data across multiple Redis nodes. It is an advanced feature of Redis which achieves distributed storage and prevents a single point of failure.
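For reference, a minimal sketch of creating a cluster (assuming six Redis nodes are already running with cluster-enabled yes in their configuration; the addresses are hypothetical, and redis-cli --cluster requires Redis 5 or newer):

$ redis-cli --cluster create 10.0.0.1:6379 10.0.0.2:6379 10.0.0.3:6379 \
    10.0.0.4:6379 10.0.0.5:6379 10.0.0.6:6379 --cluster-replicas 1

This creates three masters and assigns one replica to each of them.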

Replication vs Sharding

Replication is also known as mirroring of data. In replication, all the data gets copied from the master node to the slave nodes.

Sharding is also known as partitioning. It splits up the data by key across multiple nodes.

With replication, all the keys 1, 2, 3, 4 are stored on both machine A and machine B.

With sharding, the keys are distributed across machines A and B: machine A holds keys 1 and 3, while machine B holds keys 2 and 4.


I guess everyone now has a good idea of how Redis works. So let's start discussing the migration.

Migration

Unfortunately, Redis doesn't have a direct way of migrating data from a master-slave setup to a cluster. Let me explain why.


We can start the Redis service either in cluster mode or in standalone mode. Now your suggestion might be that we can change the Redis configuration on the fly (that is, without restarting the Redis service) with redis-cli. Yes, you are absolutely correct, we can change most of the Redis configuration on the fly, but unfortunately the Redis mode (cluster or standalone) can't be switched on the fly; for that we have to restart the service.

I guess now you guys will understand my situation :).

There are multiple ways of doing the migration. However, we needed to migrate the data without downtime or any interruption to the service.

We decided the best course of action was a four-step process:
  • Firstly, we needed to create a separate Redis cluster environment alongside the existing master-slave setup.
  • The next step was to update all the services (applications) to send all write operations to both setups (cluster and master-slave). The read commands (GET) would still go to the old setup.
  • But we still had no guarantee that all non-expirable data would make it over. So we ran a step to iterate through all of the keys and DUMP/RESTORE them into the new setup (see the sketch after this list).
  • Once the new Redis server looked good, we could make the appropriate changes to the application to point solely to the new Redis server.
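As a rough, minimal sketch of that key-copying step (old-redis and new-redis are hypothetical host names; MIGRATE with the COPY and REPLACE options performs a server-side DUMP/RESTORE without deleting the source key):

#!/bin/bash
# Copy every key from the old master into the new Redis server, leaving the
# source untouched (COPY) and overwriting any key that already exists (REPLACE).
redis-cli -h old-redis --scan | while read -r key; do
    redis-cli -h old-redis migrate new-redis 6379 "$key" 0 5000 copy replace
done

Note that when the target is a multi-master cluster, MIGRATE only succeeds for keys whose hash slot lives on the node you point it at, so the Python utility linked below is the more general approach.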

I know all the steps are easy except the key-copying one. Fortunately, Redis provides a key-scanning mechanism through which we can scan all the keys, take a dump of each one and then restore it in the new Redis server.
To achieve this, I have created a Python utility in which you define the connection details of your old and new Redis servers.

You can find the utility here.

https://github.com/opstree/redis-migration

I have provided detailed information on using this utility in the README file itself. I hope my experience will help you with your Redis migration.

Replication or Clustering?

I know most people have a query about when we should use replication and when clustering :). 
If you have more data than can fit in the RAM of a single machine, use a Redis cluster to shard the data across multiple nodes.
If your data fits in the RAM of a single machine, set up master-slave replication with Sentinel in front to handle the failover.

The main idea behind this blog was to share information about the replication and sharding mechanisms, how to choose the right one, and, if you have mistakenly chosen the wrong one, how to migrate from it :).

There are multiple factors yet to be explored to enhance the migration flow; if you find them before I do, please let me know so I can improve this blog.

I hope I explained everything clearly enough.

Thanks for reading. I'd really appreciate any and all feedback; please leave a comment below if you have some.

Happy Coding!!!!

Tuesday, May 28, 2019

Where there is a shell, there is a way.


Well, as a DevOps engineer, I like to play around with shell scripts and shell commands, especially on remote systems, as it just adds some fun. But what's more thrilling than running shell scripts and commands on a remote server and making them return dynamic web pages or JSON from that remote system?

Yes, for most of us it comes as a surprise that, just like PHP, JSP or ASP, shell scripts can also return dynamic web pages. But, as a wise man said a long time ago: "where there is a shell, there is a way".

Isn't PHP or JSP a better option for web development?

For a web developer... yes. But as a DevOps engineer, I want to do everything I can from a shell script, and it is quite useful to have shell as a server-side language, since we all know the power of shell scripts.

Why do we need this exactly?

Isn't 'for fun' an obvious enough reason? For those who want more than that, I have some points:

  • We can use it as a time-series-based data exporter. 
  • We might want an API that returns system info as JSON, and we don't have access to PHP.
  • We might want to see system information as a web page when we hit a URL.
  • It's not limited to system info; you can do whatever you want with it.
  • With a bare minimum on your machine, you can get the maximum out of it.

Let's get started

Now let's get done with the boring part, i.e. configuring Apache.
I am assuming that Apache is already installed on the system, as it is needed to serve your web pages. So, in order to let Apache serve your scripts, you need to enable the CGI module with a couple of simple commands.
$ cd /etc/apache2/mods-enabled
$ sudo ln -s ../mods-available/cgi.load
and you are ready to go.
Now move to dir where you are going to put your shell scripts.
$ cd /usr/lib/cgi-bin
Once in the dir create a new file hello.sh
$ vim hello.sh
and write the following script:
#!/bin/bash
echo "Content-type: text/html"
echo ""
echo "hello world! from shell script"
Make sure you make that file executable.
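For instance (assuming you are still inside /usr/lib/cgi-bin):
$ chmod +x hello.sh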
Now I think you have a pretty good idea of what your web page is going to display.
So restart the Apache server
$ sudo systemctl restart apache2.service
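If everything is in place, hitting the script should return the greeting (the path assumes the default cgi-bin location and Apache listening on localhost):
$ curl http://localhost/cgi-bin/hello.sh
hello world! from shell script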

Let's take it to the next level

Now let's see what else we can do. Unlike PHP, Java or Python, we don't have any framework for shell scripts, so we might have to work a bit. But that's the fun part, right?
So let's get started

Now we are simply going to display which users are using the /usr/sbin/nologin shell.

So here are the files I created in the cgi-bin directory in order to display that data as a web page.

Header file
<!doctype html>
<html lang="en">
  <head>
    <!-- Required meta tags -->
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">

    <!-- Bootstrap CSS -->
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">

    <title>Hello, world!</title>
  </head>
  <body>
    <h1>All the users using the /usr/sbin/nologin shell</h1>
 
 <table class="table">
  <thead>
    <tr>
      <th scope="col">Name</th>
      <th scope="col">User Id</th>
      <th scope="col">Group Id</th>
    </tr>
  </thead>
  <tbody>

Footer file

</tbody>
</table>

    <!-- Optional JavaScript -->
    <!-- jQuery first, then Popper.js, then Bootstrap JS -->
    <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js" integrity="sha384-UO2eT0CpHqdSJQ6hJty5KVphtPhzWj9WO1clHTMGa3JDZwrnQq4sF86dIHNDz0W1" crossorigin="anonymous"></script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js" integrity="sha384-JjSmVgyd0p3pXB1rRibZUAYoIIy6OrQ6VrjIEaFf/nJGzIxFDsf4x0xIM+B07jRM" crossorigin="anonymous"></script>
  </body>
</html>

hello.sh

#!/bin/bash
echo "Content-type: text/html"
echo ""
cat header
cat /etc/passwd | awk -F ':' '{if($7 == "/usr/sbin/nologin"){print "<tr><td>"$1"</td><td>"$3"</td><td>"$4"</td></tr>"}}'
cat footer 

So let's just see what all those files are

The header and footer files basically contain the Bootstrap starter template, which gives you a prebuilt web layout. In hello.sh we print those files using cat, and in between we run a shell command that fetches the users with the /usr/sbin/nologin shell and turns them into table rows using awk.

So now when you hit the same URL, the output will look like this:


Now I guess we have the basic idea of how we can use a shell script to display the web pages we need. We can also use it as an API, since it can return JSON as well. But it's up to the individual how far to take it.
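As a minimal sketch of the JSON idea (the fields are just examples), another script in cgi-bin could return system information like this:

#!/bin/bash
# Emit basic system information as JSON instead of HTML
echo "Content-type: application/json"
echo ""
echo "{\"hostname\": \"$(hostname)\", \"kernel\": \"$(uname -r)\", \"uptime\": \"$(uptime -p)\"}"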

Summary

So, in this blog, we saw how we can get the most out of a bare-minimum setup. It is not limited to these use cases; it can be used to create an API that returns valuable information about the system or the services running on it. With some good scripting and some clever HTML templating, we can achieve a lot.

Tuesday, May 21, 2019

Let's Get Started With Packer

In this blog post, we will see how to get started with Packer. We will cover installation and writing a template for creating an AWS AMI. To get a basic understanding of how Packer works, you can refer to our previous blog "Intro To Packer".
  Installation  
  1. The official way to get Packer is as a precompiled binary; Packer does not provide system packages, nor is there any plan to make them available:
$ curl -LO https://releases.hashicorp.com/packer/1.4.0/packer_1.4.0_linux_amd64.zip
  2. After downloading the archive, unzip it to the location where you want to keep it. If you want it to be usable by all users on the system, do not unzip it in user space:
$ sudo unzip packer_1.4.0_linux_amd64.zip -d /usr/local/packer
  3. After unzipping the package, the directory should contain a single binary program called packer.
  4. The final installation step is to make sure the directory you installed Packer into is on the PATH, so that it can be used from the command line. Open /etc/environment and append the line below to the end of the file:
export PATH="$PATH:/usr/local/packer"
To make the change take effect in your current shell, source the environment file:
$ source /etc/environment
  5. Verify the installation by running the packer command, or simply check its version:
$ packer --version
You should see the Packer version as output.
Once installed, running packer is as simple as packer build <build-file>, which will take the build-file and run the steps we provide within. Let’s get started with a simple build file.

Setting Up Stage

                                                                                                                         
As we are building an image for the AWS cloud, there are certain prerequisites that need to be taken care of.
You should have an IAM user with permission to create and destroy EC2 instances, create an AMI, create and destroy security groups, etc. You can find a sample IAM policy for the Packer user in the sample minimum IAM user policy for Packer.

After setting up your IAM user for Packer, generate the access key ID and secret access key and save them.
Having noted the keys, you can either use them directly in your template (which is not recommended) or configure them as environment variables or in the AWS CLI config on the machine where Packer is installed.

I have configured it with the AWS CLI config, so I did not have to define anything in the variables section or in the builder section. You can also pass your access keys as variables while running the packer build command.
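For example, a sketch of the environment-variable option (the values are placeholders; the amazon-ebs builder picks up the standard AWS credential variables automatically):

$ export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
$ export AWS_SECRET_ACCESS_KEY="your-secret-access-key"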
Here we will be installing the Apache web server in the image. I have named the JSON file httpd.json and used an httpd.sh script under the provisioners section to install the web server.


Below is the sample httpd.json file

{
   "variables": {
     "ami_id": "ami-0a574895390037a62",
     "app_name": "httpd"
   },

   "builders": [{
     "type": "amazon-ebs",
     "region": "ap-south-1",
     "vpc_id": "vpc-df95d4b7",
     "subnet_id": "subnet-175b2d7f",
     "source_ami": "{{user `ami_id`}}",
     "instance_type": "t2.micro",
     "ssh_username": "ubuntu",
     "ami_name": "PACKER-DEMO-{{user `app_name` }}",
     "tags": {
         "Name": "PACKER-DEMO-{{user `app_name` }}",
         "Env": "DEMO"

       }
   }],

  "provisioners": [
   {
       "type": "shell",
       "script": "httpd.sh"
    }
  ]

}

Below is the simple httpd.sh (note that on Ubuntu the Apache package is called apache2, not httpd, so that is what the script installs):

#!/bin/bash


sudo apt-get update
sudo apt-get install -y apache2

First, validate your template by running the command below:
packer validate httpd.json

You should get a success message as output, or an error indicating the offending line number.

Now run packer build to build your image:-

packer build httpd.json

After a successful build, you will get the AMI ID and a success message as output.

==> amazon-ebs: Prevalidating AMI Name: PACKER-DEMO-httpd
   amazon-ebs: Found Image ID: ami-0a574895390037a62
==> amazon-ebs: Creating temporary keypair: packer_5cd559df-84ce-ff8a-fa93-0c4477d988e4
==> amazon-ebs: Creating temporary security group for this instance: packer_5cd559e2-ea81-be15-b94a-c28493c0d3ff
==> amazon-ebs: Authorizing access to port 22 from [0.0.0.0/0] in the temporary security groups...
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Adding tags to source instance
   amazon-ebs: Adding tag: "Name": "Packer Builder"
   amazon-ebs: Instance ID: i-06ed051a3435865c4
==> amazon-ebs: Waiting for instance (i-06ed051a3435865c4) to become ready...
==> amazon-ebs: Using ssh communicator to connect: *.*.*.*
==> amazon-ebs: Waiting for SSH to become available...
==> amazon-ebs: Connected to SSH!
==> amazon-ebs: Stopping the source instance...
   amazon-ebs: Stopping instance
==> amazon-ebs: Waiting for the instance to stop...
==> amazon-ebs: Creating AMI PACKER-DEMO-httpd from instance i-06ed051a3435865c4
   amazon-ebs: AMI: ami-0ce41081a3b649374
==> amazon-ebs: Waiting for AMI to become ready...
==> amazon-ebs: Adding tags to AMI (ami------)...
==> amazon-ebs: Tagging snapshot: snap-0ee3ce80ec289ed24
==> amazon-ebs: Creating AMI tags
   amazon-ebs: Adding tag: "Name": "PACKER-DEMO-httpd"
   amazon-ebs: Adding tag: "Env": "DEMO"
==> amazon-ebs: Creating snapshot tags
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Cleaning up any extra volumes...
==> amazon-ebs: No volumes to clean up, skipping
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
Build 'amazon-ebs' finished.

==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
ap-south-1: ami--------------------

A few things to keep in mind:

  • Packer does not create an image of an already running instance; instead, it spins up a temporary instance, creates the image from it, and once the image is created it destroys all the resources it created along the way. 
  • Though Packer makes it easy to take machine AMIs programmatically, purging older images should also be kept in mind, because AMIs are backed by snapshots stored in S3 and can add to your cost. 
  • Though rollback becomes a lot easier with immutable infrastructure, it can become a pain in the neck if you frequently make changes in production. 
  • We cannot expect it to solve all our problems; its only job is to create an image. You will have to decide when to create an image and what needs to be done or deployed after image creation.

I hope the above setup will help you get started with it. Later we will discuss how we can use it along with Ansible and Terraform to achieve immutable infrastructure.
I appreciate any suggestions and comments, or any questions/doubts you face while implementing it.
