Perfect Spot Instance’s Imperfections | part-II

Hello friends, if you are reading this blog, I assume that you have gone through the first part, “https://blog.opstree.com/2019/11/05/perfect-spot-instances-imperfections-part-i/”. If you haven’t, I suggest you go through that link before reading this one.

Now let’s recall the concept from the first part that we are going to implement here.

We will create all the components related to this project (as shown in the figure above). We will also see how to create a spot fleet request wisely, choosing parameters that fit our purpose and are less prone to interruption. I assume that you have already created: a VPC; subnets (at least two); an Internet Gateway attached to the VPC; a target group; an AMI (with an nginx server running on port 80); a launch configuration using that AMI; an ASG built from that launch configuration, with tags for the on-demand instances and the target group attached; a load balancer (listening on HTTP and forwarding to the target group); and, optionally, a Route 53 record pointing to the load balancer’s DNS name.

I have made my servers public; you can make them private if you want. Next, we are going to create the IAM roles, the spot fleet request, the Lambda functions, and finally the CloudWatch rules.

 

Let’s create IAM Role

We are going to create two IAM roles: one for Lambda and the other for the spot fleet request.

  1. Go to the IAM console and select Roles from the left-side navigation pane.
  2. Click on Create role and follow the screenshot below (here we are creating the role for the Lambda function).
  3. Click on Next: Permissions.
  4. Check AdministratorAccess to allow Lambda to access all AWS services.
  5. Click on Next: Tags.
  6. Add tags if you want, then click on Next: Review.
  7. Type your role name and then click on Create role. The role is created now.
  8. Now come back to the IAM console and select Roles from the navigation pane.
  9. Click on Create role and then follow the screenshot below.

This role is for spot fleet request.

  1. For this role, follow steps 3 to 7 again.

So, by this point we have two IAM roles; let’s call them Lambda_role and Fleet_role.

Now we come to the interesting part, which deals with the process of placing a spot fleet request and explains every related factor that lets us customize the request wisely according to our needs.

Create Spot Fleet Request 


Move to the EC2 dashboard and click on Spot Requests. Then click on Request Spot Instances.

  1. You will see something like below:

Load Balancing Workloads: For instances of same size in any Availability zone.
Flexible Workloads: For instances of any size in any Availability zone.
Big Data Workloads: For instances of same size in single Availability zone.

Now decide among these three options based on your requirements; if you are just learning, leave the choice at the default.

The last option, ‘Defined duration workloads’, is a bit different. This option provides a way to reserve spot instances, but for a maximum of 6 hours, with choices varying from 1 to 6 hours. It ensures that you will not be interrupted for the duration you opted for, and because you won’t be interrupted you pay a slightly higher price than for regular spot instances.

So, for this option AWS has another pricing category called Spot Block. Under this model, pricing is based on the requested duration and the available capacity, and is typically 30% to 45% less than On-Demand, with an additional 5% off during non-peak hours for the region. Observe the differences below.
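
If you ever need to place such a defined-duration request programmatically, a minimal boto3 sketch could look like the one below. This is only an illustration: the AMI ID and instance type are placeholders, and BlockDurationMinutes accepts multiples of 60 up to 360.

import boto3

ec2 = boto3.client('ec2')

# Request one spot instance for a fixed duration of 6 hours (360 minutes).
# 'ami-0123456789abcdef0' and 't2.micro' are placeholders for your own AMI and type.
response = ec2.request_spot_instances(
    InstanceCount=1,
    Type='one-time',
    BlockDurationMinutes=360,   # allowed values: 60, 120, 180, 240, 300, 360
    LaunchSpecification={
        'ImageId': 'ami-0123456789abcdef0',
        'InstanceType': 't2.micro',
    }
)
print(response['SpotInstanceRequests'][0]['SpotInstanceRequestId'])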

Let us start with a brief comparison of categories into which we can launch our instances.

Instance Type    Spot Instance Price    Spot Block Price (1 hour)    Spot Block Price (6 hours)
a1.medium        $0.0084 per Hour       $0.012 per Hour              $0.016 per Hour
a1.large         $0.0168 per Hour       $0.025 per Hour              $0.033 per Hour
c5.large         $0.0392 per Hour       $0.047 per Hour              $0.059 per Hour
c1.medium        $0.013 per Hour        $0.065 per Hour              $0.085 per Hour
t2.micro         $0.0035 per Hour       $0.005 per Hour              $0.007 per Hour

2. Next we will configure our spot instances.

Launch Template: If you have a launch template you can select it here. One advantage of using a launch template is that you get the option to run a part of your total capacity as on-demand instances. If you don’t have a launch template you can go with an AMI and specify all the other parameters.
AMI: Don’t have a launch template? Choose an AMI, but with this option you won’t be able to run a part of your total capacity as on-demand instances.
Minimum Compute Unit: Specify how much capacity you need, either in terms of vCPUs and memory or as instance types. As the name suggests, this is the minimum capacity needed for our purpose; AWS will choose similar instances based on this option.
Then you will have options to choose the VPC, Availability Zones, and key-pair name. Under additional configuration you can choose security groups, an IAM instance profile, user data, tags, and many more settings.

3. Next section is for defining target capacity.

Total target capacity: Specify how much capacity is needed. If you have chosen a launch template, you can also specify how much of the total capacity you want as on-demand.
Maintain target capacity: When you select this, AWS will always maintain your target capacity. Suppose AWS takes one of your instances back; under this option it will automatically place a request for one spot instance in order to maintain the target capacity. Selecting this option also allows you to modify the target capacity even after the spot fleet request has been created.
Maintain target cost for Spot: This optional setting allows you to set the maximum hourly amount you want to pay for spot instances.

4. Next section is about customizing your spot fleet request.

If you don’t care much about how AWS is going to fulfill your request, leave things at the default and proceed to the next step; otherwise uncheck the ‘Apply recommendations’ field, after which the section will appear like the screenshot below:

As I have chosen c3.large as the minimum compute unit, AWS picks similar instances such as c3.large, or it can even choose instances with more memory than specified (but never less) at roughly similar prices. You can also add more instance types via Select instance types; this strengthens your probability of getting spot instances. Now let’s see how!

Suppose you asked for only t2.small instances. You will get them only when that instance type is available in the Availability Zone you specified. So to increase your chances, you should specify as many Availability Zones and as many instance types as your workload allows. This increases the number of pools and ultimately the chance of success.

     Picture showing the instance pool an Availability Zone can contain.

Instance Pools: You can think of an instance pool as a bag full of instances of the same type in one Availability Zone.
Every Availability Zone contains instance pools. Suppose you choose two Availability Zones and a total of three instance types; you then have 6 instance pools, three from each Availability Zone. The more instance pools, the better your chance of getting spot instances.
Fleet allocation strategy: AWS lets you choose the allocation strategy, i.e., the strategy through which your capacity is going to be fulfilled. One thing to keep in mind is that AWS will try to distribute your capacity evenly across the Availability Zones you specified.
Lowest price: Instances are provided from the pools with the lowest price. Suppose you choose 2 Availability Zones and your capacity is 6; then 3 instances in each Availability Zone come from the cheapest pools.
Diversified across n instance pools: Suppose your capacity is 20, 2 Availability Zones are selected, and you have chosen to diversify across 2 instance pools. Then 10 instances per Availability Zone are provided, spread over 2 pools in each zone. Even if one pool becomes unavailable, the other can still serve you, reducing the risk of interruption.
Capacity Optimized: This option provides instances from the pools with the most spare capacity. With a capacity of 20 across 2 Availability Zones, 10 instances per zone are provided from the pools that are highly available now and likely to remain so.

5. Next section is for choosing the price and other additional configurations.

First of all, uncheck Apply defaults. Then customize your additional settings under the following heads:

IAM fleet role: This role allows the Spot Fleet service to launch, terminate, and tag instances on your behalf. Choose the default one or use the Fleet_role we created earlier.
Maximum price: This option lets you keep the default maximum price, which is the on-demand price, or set your own maximum price per instance hour.
Next you can specify your spot fleet request validity period. Check Terminate instances on request expire.
Load balancing: If you check Receive traffic from one or more load balancers, you can choose the Classic Load Balancer, or the target group, under which you want your instances to be registered.
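
For reference, everything we just clicked through in the console can also be placed as one API call. Below is a minimal boto3 sketch of a spot fleet request that spreads two instance types across two subnets with the diversified strategy; the fleet role ARN, AMI ID, subnet IDs, and instance types are placeholders you would replace with your own values.

import boto3

ec2 = boto3.client('ec2')

response = ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        'IamFleetRole': 'arn:aws:iam::123456789012:role/Fleet_role',   # placeholder ARN
        'AllocationStrategy': 'diversified',    # or 'lowestPrice' / 'capacityOptimized'
        'TargetCapacity': 2,
        'Type': 'maintain',                     # keep replacing interrupted instances
        'TerminateInstancesWithExpiration': True,
        'LaunchSpecifications': [
            {
                'ImageId': 'ami-0123456789abcdef0',      # placeholder AMI
                'InstanceType': 'c3.large',
                'SubnetId': 'subnet-aaaa1111',           # placeholder subnet
                'TagSpecifications': [{
                    'ResourceType': 'instance',
                    'Tags': [{'Key': 'Name', 'Value': 'fleet'}]
                }],
            },
            {
                'ImageId': 'ami-0123456789abcdef0',
                'InstanceType': 'c4.large',
                'SubnetId': 'subnet-bbbb2222',           # placeholder subnet
                'TagSpecifications': [{
                    'ResourceType': 'instance',
                    'Tags': [{'Key': 'Name', 'Value': 'fleet'}]
                }],
            },
        ],
    }
)
print(response['SpotFleetRequestId'])

Leaving SpotPrice out keeps the default maximum price, which is the on-demand price.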

That is how you can customize a spot fleet request based on your requirements. Lastly, I advise you to visit the link below when deciding which instance types suit you; it also lets you estimate your savings.

https://aws.amazon.com/ec2/spot/instance-advisor/

 

Let’s create Lambda function for our purpose

We are going to create two Lambda functions for our purpose. Follow the steps below:

  1. Keep ready an IAM role that allows the Lambda function to modify the ASG; I prefer an IAM role with admin permissions because we need it at several points in this project (this is the Lambda_role created earlier).
  2. Head to the AWS Lambda console. On the left-side navigation pane click on Functions.
  3. Click on the Create function tab.
  4. Leave the default selection, Author from scratch.
  5. Enter a function name of your choice.
  6. Select the runtime as Python 3.7.
  7. Expand Choose or create an execution role and select the role you created for this project.
  8. Click on Create function.
  9. Scroll down and put the code below in lambda_function.py.

    Copy code from below

import boto3

def lambda_handler(event, context):
    # Triggered by the spot interruption notice: bump the desired capacity of
    # the Auto Scaling group named 'spot' to 1 so that an on-demand instance
    # is launched to cover the capacity we are about to lose.
    client = boto3.client('autoscaling')
    client.set_desired_capacity(
        AutoScalingGroupName='spot',
        DesiredCapacity=1
    )

This code increases the desired capacity of the ASG named ‘spot’ to 1. It is triggered when AWS issues an interruption notice; as a result an on-demand instance is launched to maintain the total capacity of two.


10. In the upper right corner click on Save to save the function, then go back to the AWS Lambda console, select Functions once more, and click Create function again.

11. Give your function a name and the same execution role as before, and select Create function. This second function will hold the code that always keeps the spot capacity at two.

12. Refer to the following snapshot.

Copy code from below

import boto3         ## Python sdk
import json

## If the number of running spot instances is >= 2 and an on-demand instance is running,
## then set the ASG desired capacity to 0 to terminate the running on-demand instance.
def asg_cap(fleet, od):
    print('in function',fleet)
    print('in function',od)
    if fleet >= 2 and od > 0:
        client = boto3.client('autoscaling')
        response = client.set_desired_capacity(
            AutoScalingGroupName='spot',
            DesiredCapacity=0
        )

##Beginning of the execution
def lambda_handler(event, context):
    cnt = 0
    ec3 = boto3.resource('ec2')
    fleet = 0
    od = 0
    instancename = []
    fleet_ltime = []
    od_ltime = []
    for instance in ec3.instances.all():    ##looping all instances
        print (instance.id)
        print (instance.state)
        print (instance.tags)
        print (instance.launch_time)
        abc = instance.tags                ##get tags of all instances
        ab = instance.state                ##get state of all instances
        print (ab['Name'])
        if ab['Name'] == 'running':        ## checks for the instances whose state is running 
            cnt += 1
            for tags in abc:
                if tags["Key"] == 'Name':  ## checks for tag key is 'Name'
                    instancename.append(tags["Value"])
                    inst = tags["Value"]
                    print (inst)
                    if inst == 'fleet':    ## checks if tag key 'Name' has value 'fleet'. Change 'fleet' to your own tag name       
                        fleet += 1
                        fleet_ltime.append(instance.launch_time)
                    if inst == 'Test':     ## checks if tag key 'Name' has value 'Test'. Change 'Test' to your own tag name
                        od += 1
                        od_ltime.append(instance.launch_time)
                    
    print('Total number of running instances: ', cnt)
    print(instancename)
    print('Number of spot instances: ', fleet)
    print('Number of on-demand instances: ', od)
    print('Launch time of Fleet: ', fleet_ltime)
    print('Launch time of on-demand: ', od_ltime)
    
    if od > 0:
        dt_od = od_ltime[0]
    else:
        dt_od = '0'
        
    if fleet > 1:
        dt_spot = fleet_ltime[0]
        dt_spot1 = fleet_ltime[1]
    elif (fleet > 0) and (fleet < 2):
        dt_spot = fleet_ltime[0]
        dt_spot1 = '0'
    else:
        dt_spot = '0'
        dt_spot1 = '0'
        
        
    if dt_od != '0':
        if dt_spot != '0':
            if dt_od > dt_spot:
                if dt_spot1 != '0':
                    if dt_od > dt_spot1:
                        print('On-Demand instance is Launched')
                        # do nothing
                    else:
                        print('Spot instance is Launched')
                        asg_cap(fleet, od)
                else:
                    print('Only 1 spot instance exist')
            else:
                print('Spot instance is launched')
                asg_cap(fleet, od)
        else:
            print('No spot instance exist')
    else:
        print('No On-Demand instance exist')
        
        
    ## modify the spot fleet request capacity to two    
    client1 = boto3.client('ec2')
    response = client1.modify_spot_fleet_request(
        SpotFleetRequestId='sfr-92b7b2f1-163b-498a-ae7c-7bd1b4fdb227', ## replace with your spot fleet request ID
        TargetCapacity=2
    )

13. Save this function.

So far we have created two Lambda functions; now we are going to create the CloudWatch rules that will call these functions on our behalf, on the interruption notice and on an EC2 instance’s state change to running.

Let’s create Cloudwatch Rules.

Steps to create Cloudwatch Rules.

  1. Go to Cloudwatch console.
  2. Select Rules from left side navigation pane.
  3. Click on Create Rule.
  4. Follow the screenshot below.

We are creating a rule here that triggers on the interruption notice issued by AWS.

In the Targets area, select Lambda function and then pick the first function, the one that increases the desired capacity of the ASG to 1.
Save the first rule.
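
If the screenshot is hard to follow, the same rule can be created with boto3 as well. The sketch below is only illustrative: 'spot-interruption-rule' and 'first_function' are placeholder names, while the event pattern is the standard EC2 Spot Instance Interruption Warning pattern.

import json
import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

# Rule that matches the two-minute spot interruption warning.
rule = events.put_rule(
    Name='spot-interruption-rule',
    EventPattern=json.dumps({
        'source': ['aws.ec2'],
        'detail-type': ['EC2 Spot Instance Interruption Warning']
    })
)

# Point the rule at the first Lambda function (the one that scales the ASG to 1).
fn = lambda_client.get_function(FunctionName='first_function')
events.put_targets(
    Rule='spot-interruption-rule',
    Targets=[{'Id': '1', 'Arn': fn['Configuration']['FunctionArn']}]
)

# Allow CloudWatch Events to invoke the function (the console does this for you).
lambda_client.add_permission(
    FunctionName='first_function',
    StatementId='allow-spot-interruption-rule',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn']
)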

  1. Now let’s create another CloudWatch rule.
  2. Click Create Rule and follow the screenshot.

After this, add a target of type Lambda function and set the function name to the second one we created (the longer one).

Save the second Rule.
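
The second rule follows the same shape; only the event pattern and the target change. Again a sketch with placeholder names, matching any EC2 instance that transitions to the running state.

import json
import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

# Rule that fires whenever any EC2 instance enters the 'running' state.
rule = events.put_rule(
    Name='spot-running-rule',
    EventPattern=json.dumps({
        'source': ['aws.ec2'],
        'detail-type': ['EC2 Instance State-change Notification'],
        'detail': {'state': ['running']}
    })
)

# Target the second (longer) Lambda function and allow the rule to invoke it.
fn = lambda_client.get_function(FunctionName='second_function')
events.put_targets(
    Rule='spot-running-rule',
    Targets=[{'Id': '1', 'Arn': fn['Configuration']['FunctionArn']}]
)
lambda_client.add_permission(
    FunctionName='second_function',
    StatementId='allow-spot-running-rule',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn']
)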

Now, to verify this automation, go to the spot fleet request you created with target capacity two. Select the request, click the Actions tab, click Modify capacity, and replace 2 with 1. This terminates one spot instance, and before doing so AWS sends the interruption notice. Observe the changes on the Auto Scaling group, instances, and spot request dashboards. Wait a couple of minutes; if everything goes according to plan you will again have two spot instances under your bag.

If you do not have two spot instances at the end, then something is not right. Cross-check the following (a small helper script follows this checklist):

  1. Check the name of the Auto Scaling group you created. Copy its name, go to the first Lambda function (the one that increases the desired capacity of the ASG to 1), and check whether the function contains the correct Auto Scaling group name; if not, paste the name you copied into the AutoScalingGroupName field.
  2. Check the Name tag value of the spot instances and note it somewhere. Also check the Name tag of the on-demand instance, which you configured while creating the Auto Scaling group, and note that too. Now go to the second Lambda function and look at line number 40, where the Name tag value of the spot instances is given in single quotes; check that it matches yours. Then at line number 43 check that the Name tag of the on-demand instance matches yours.
  3. Go to Spot Requests, copy the spot fleet request ID you created, go to line number 94 of the second Lambda function, and make sure the ID in single quotes matches yours.
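
If cross-checking these values in the console gets tedious, a small helper like the one below prints everything the two functions depend on: the ASG names, the active spot fleet request IDs, and the Name tags of running instances. It is only a convenience sketch; run it locally with your AWS credentials configured.

import boto3

autoscaling = boto3.client('autoscaling')
ec2_client = boto3.client('ec2')
ec2 = boto3.resource('ec2')

# Auto Scaling group names (must match AutoScalingGroupName in the first function).
for asg in autoscaling.describe_auto_scaling_groups()['AutoScalingGroups']:
    print('ASG:', asg['AutoScalingGroupName'])

# Spot fleet request IDs (must match SpotFleetRequestId in the second function).
for sfr in ec2_client.describe_spot_fleet_requests()['SpotFleetRequestConfigs']:
    print('Spot fleet request:', sfr['SpotFleetRequestId'], sfr['SpotFleetRequestState'])

# Name tags of running instances (must match 'fleet' and 'Test' in the second function).
running = ec2.instances.filter(
    Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
for instance in running:
    tags = {t['Key']: t['Value'] for t in (instance.tags or [])}
    print('Instance:', instance.id, 'Name tag:', tags.get('Name'))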

Now test again; hopefully it will work. If you are still facing problems, or you have not created all of the above manually, there is no need to worry.

I have written Terraform code that creates the whole infrastructure needed for this project. After running it successfully you will still have to make a few changes so that the infrastructure functions properly; for that, follow the steps below. The GitHub repository is linked below; clone the repo and then follow the steps:

Link: https://github.com/sah-shatrujeet/infra_spot_fleet_terraform.git

  1. Make sure you have Terraform version 0.12.8 installed.
  2. You must also have the AWS CLI configured.
  3. Before running the Terraform code, go to the folder where you cloned the repo, then go to infra-spot/infra/infra and open vars.tf in your favourite editor.
  4. Go through the files and change the default values as per your needs.
  5. Run terraform apply from inside the folder infra-spot/infra/infra. Then follow the last three steps to make sure your infrastructure automates properly.
  6. Go to the Lambda console, select first_function, and check whether the code contains the same Auto Scaling group name as the one created by Terraform. If not, update it to match yours.
  7. In the Lambda console select second_function and repeat the previous step; also check the Name tags of the on-demand and spot instances, which are ‘Test’ and ‘fleet’ respectively in the Terraform code. Make sure the Name tags of your on-demand and spot instances match the tags mentioned in the code.
  8. Lastly, head to Spot Requests, note the request ID, and match it with the SpotFleetRequestId in second_function.

I believe Terraform will do everything right for you. If you are still facing problems, I will be happy to resolve your queries.

 

Good to know

AWS reports show that the average frequency of interruptions across all regions and instance types is less than 5%.

For any instance type, the default maximum price is the on-demand price, and you can set a maximum price of up to 10 times the on-demand price.

When you implement spot instance automation for your own project you will come across different scenarios; you might need to monitor more events and trigger actions based on them. Unfortunately, CloudWatch does not cover every AWS event, but AWS CloudTrail solves this: CloudTrail records everything you do on AWS. To use it with CloudWatch you have to enable CloudTrail, and then you can make a rule with service EC2 and event type ‘AWS API Call via CloudTrail’, adding any specific operation that is not available as a native CloudWatch event. I recommend you go through the details of CloudTrail before implementing this. If you have any queries about implementing CloudTrail, ask in the comment section; I will be happy to help.
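
As a concrete illustration, the sketch below creates a rule on the ‘AWS API Call via CloudTrail’ event type that fires when someone calls RunInstances. The rule name and the choice of RunInstances are just examples, and CloudTrail must already be enabled in the region for such events to reach CloudWatch.

import json
import boto3

events = boto3.client('events')

# Example only: match the EC2 RunInstances API call recorded by CloudTrail.
events.put_rule(
    Name='ec2-run-instances-api-call',
    EventPattern=json.dumps({
        'source': ['aws.ec2'],
        'detail-type': ['AWS API Call via CloudTrail'],
        'detail': {
            'eventSource': ['ec2.amazonaws.com'],
            'eventName': ['RunInstances']
        }
    })
)
# Attach a Lambda (or any other) target with put_targets, exactly as before.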

While implementing spot instances for a database, you can configure your spot instances so that the instance’s volume is not deleted on spot instance termination, ensuring that you do not lose any data.

Conclusion

With smart automation and monitoring we can run our production servers on spot instances with guaranteed failover and high availability. However, anyone who does not want to run any risk, or who lacks the knowledge or resources to automate interruption handling, can still plan to run:

  1. Half, or some portion, of the total production servers on spot instances.
  2. Development servers on spot instances, without any worry.
  3. QA servers on spot instances.
  4. Extra capacity on spot instances, to ease the load on the main servers.
  5. Irregular, short-term tasks on spot instances.

 

 

One more reason to use Docker

Recently I was working on a project that involved Terraform and AWS. While working on it I was using my local machine for Terraform code testing, and luckily everything was going fine. But when we actually tested it for the production environment, we ran into some issues. Then, as usual, we started to dig into the issue and finally found it, and it was quite a silly one 😜: the production server’s Terraform version and my local development machine’s Terraform version were not the same.

After wasting quite some time on this issue, I decided to come up with a solution so this would never happen again.

But before jumping to the solution, let’s think about whether this problem is only related to Terraform, or whether we face similar issues in other scenarios as well.

Well, I guess we do face similar issues in other scenarios as well. Let’s talk about some of those scenarios first.

Suppose you have to create a CI pipeline for a project, and that too with code re-usability. The pipeline is ready and working fine in your project, and after some time you have to implement the same kind of pipeline for a different project. You can reuse the code, but you don’t know the exact versions of the tools that were used with the original CI pipeline. This will lead you to an escalation of errors.

Let’s take another example. Suppose you are developing something in one of the programming languages. Surely that utility or program will have some dependencies. Installing those dependencies on the local system can corrupt your complete system, or the package manager you use for dependency management. A decent example is pip, the dependency manager for Python 😉.

These are some example scenarios which we have actually faced, and they are the motivation for writing this blog.

The Solution

To resolve all these problems we just need one thing, i.e. containers. I could also say Docker, but containers and Docker are two different things.

But yes for container management we use docker.

So let’s go back to our first problem, the Terraform one. There are multiple ways to solve it, but we tried to solve it using Docker.

As Docker says

Build Once and Run Anywhere

So, based on this statement, what we did was create a Dockerfile for the required Terraform version and store it alongside the code. Basically our Dockerfile looks like this:-

# Pin the base image and the Terraform version so every environment runs the same binary
FROM alpine:3.8

MAINTAINER OpsTree.com

ENV TERRAFORM_VERSION=0.11.10

ARG BASE_URL=https://releases.hashicorp.com/terraform

# Download and unpack the pinned Terraform release
RUN apk add --no-cache curl unzip bash \
    && curl -fsSL -o /tmp/terraform.zip ${BASE_URL}/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip \
    && unzip /tmp/terraform.zip -d /usr/bin/ \
    && rm /tmp/terraform.zip

# Create the non-root user referenced below so the container can actually start
RUN adduser -D opstree

WORKDIR /opstree/terraform

USER opstree

In this Dockerfile, we are pinning the exact Terraform version needed to run the code.
In a similar fashion, all the other problems listed above can be solved using Docker: we just have to create a Dockerfile with the exact dependencies that are needed, and that same file will work across environments and projects.

To take it to the next level you can also add a Makefile to make everyone’s life easier. For example:-

IMAGE_TAG=latest
build-image:
    docker build -t opstree/terraform:${IMAGE_TAG} -f Dockerfile .

run-container:
    docker run -itd --name terraform -v ~/.ssh:/home/opstree/.ssh -v ~/.aws:/home/opstree/.aws -v ${PWD}:/opstree/terraform opstree/terraform:${IMAGE_TAG}

plan-infra:
    docker exec -t terraform bash -c "terraform plan"

create-infra:
    docker exec -t terraform bash -c "terraform apply -auto-approve"

destroy-infra:
    docker exec -t terraform bash -c "terraform destroy -auto-approve"

And trust me, after making this utility available, the reactions of the people using it will be something like this:-

Now I am assuming you guys will also try to apply Docker in as many scenarios as possible.

There are a few more scenarios yet to be explored to enhance the use of Docker; if you find one before I do, please let me know.

Thanks for reading. I’d really appreciate any and all feedback, so please leave a comment below if you have any.

Cheers till the next time.

Unix File Tree Part-2

For those who have surfed straight to this blog, please check out the previous part of this series, Unix File Tree Part-1, and for those who have stayed tuned for this part, welcome back. In the previous part, we discussed the philosophy of and the need for the file tree. In this part, we will dive deep into the significance of each directory.


Dayum!! That’s a lot of stuff to gulp at once; we’ll knock things out one after the other.

Major directories

Let’s talk about the crucial directories which play a major role.

  • /bin: When we started crawling on Linux, this is what helped us get on our feet. Yes, you read it right: whether you want to copy a file, move it somewhere, create a directory, or find out the date or the size of a file, all sorts of basic operations without which the OS won’t even listen to you (Linux yawning meanwhile) happen because of the executables present in this directory. Most of the programs in /bin are in binary format, having been created by a C compiler, but some are shell scripts on modern systems.
  • /etc: When you want things to behave the way you want, you go to /etc and put all your desired configuration there (imagine if your girlfriend had an /etc, life would have been easier). Whether it is about the various services or daemons running on your OS, /etc makes sure things work the way you want them to.
  • /var: This is the guy who has kept an eye on everything since the time you booted the system (consider him like Heimdall from Thor). It contains files to which the system writes data during the course of its operation. Among the various sub-directories within /var are /var/cache (cached data from application programs), /var/games (variable data relating to games in /usr), /var/lib (dynamic data libraries and files), /var/lock (lock files created by programs to indicate that they are using a particular file or device), /var/log (log files), /var/run (PIDs and other system information that is valid until the system is booted again) and /var/spool (mail, news and printer queues).
  • /proc: You can think of /proc just like the thoughts in your brain, which are illusory and virtual. Being an illusory file system, it does not exist on disk; instead, the kernel creates it in memory. It is used to provide information about the system (originally about processes, hence the name). If you navigate to /proc, the first thing you will notice is that there are some familiar-sounding files and then a whole bunch of numbered directories. The numbered directories represent processes, better known as PIDs, and within them, the command that occupies them. The files contain system information such as memory (meminfo), CPU information (cpuinfo), and available filesystems (see the short example after this list).
  • /opt: It is like a guest room in your house where a guest stayed for a prolonged period and became part of your home. This directory is reserved for all the software and add-on packages that are not part of the default installation.
  • /usr: In the original Unix implementations, /usr was where the home directories of the users were placed (that is to say, /usr/someone was then the directory now known as /home/someone). In current Unixes, /usr is where user-land programs and data (as opposed to ‘system land’ programs and data) are. The name hasn’t changed, but its meaning has narrowed and lengthened from “everything user related” to “user usable programs and data”. As such, some people may now refer to this directory as meaning ‘User System Resources’ and not ‘user’ as was originally intended.
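
To see the /proc idea in action, here is a tiny Python sketch that lists a few of the numbered process directories and reads some of the familiar info files; it only works on a Linux machine, since /proc is created by the kernel in memory.

import os

# Numbered directories in /proc are live processes; their names are the PIDs.
pids = [name for name in os.listdir('/proc') if name.isdigit()]
print('First five PIDs:', pids[:5])

# Each PID directory exposes details of that process, e.g. its command line.
with open(f'/proc/{pids[0]}/cmdline') as f:
    print('Command line of PID', pids[0], ':', f.read().replace('\0', ' ').strip())

# Familiar info files live at the top level, e.g. memory information.
with open('/proc/meminfo') as f:
    for line in f.readlines()[:3]:
        print(line.strip())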

Potato or Potaaato what is the difference? 

We’ll be discussing those directories which always confuse us: they have almost similar purposes but still live in separate locations, and when asked about them we go like ummmm…

/bin vs /usr/bin vs /sbin vs /usr/local/bin

This may have become almost clear when I explained the significance of /usr above. Since the Unix designers planned /usr to hold the local directories of individual users, it came to contain sub-directories like /usr/bin, /usr/sbin, and /usr/local/bin. But the question remains: how is their content different?

/usr/bin:

  • /usr/bin is a standard directory on Unix-like operating systems that contains most of the executable files that are not needed for booting or repairing the system. 
  • A few of the most commonly used are awk, clear, diff, du, env, file, find, free, gzip, less, locate, man, sudo, tail, telnet, time, top, vim, wc, which, and zip.

/usr/sbin:

  • The /usr/sbin directory contains non-vital system utilities that are used after booting.
  • This is in contrast to the /sbin directory, whose contents include vital system utilities that are necessary before the /usr directory has been mounted (i.e., attached logically to the main filesystem). 
  • A few of the more familiar programs in /usr/sbin are adduser, chroot, groupadd, and userdel. 
  • It also contains some daemons, which are programs that run silently in the background, rather than under the direct control of a user, waiting until they are activated by a particular event or condition such as crond and sshd.

I hope I have covered most of the directories which you might come across frequently and that your questions have been answered.
Now that we know the significance of each UNIX directory, it’s time to use them wisely, the way they are supposed to be used.
Please feel free to reach out to me for any suggestions.
Goodbye till next time!

References:
https://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/usr.html
https://askubuntu.com/questions/130186/what-is-the-rationale-for-the-usr-directory
https://askubuntu.com/questions/308045/differences-between-bin-sbin-usr-bin-usr-sbin-usr-local-bin-usr-local
http://index-of.es/Varios-2/How%20Linux%20Works%20What%20Every%20Superuser%20Should%20Know.pdf
https://imgflip.com/memegenerator

Jenkins Pipeline Global Shared Libraries

Although the coding language used here is Groovy, Jenkins does not allow us to use Groovy to its fullest, so we can say that Jenkins Pipelines are not exactly Groovy. Classes that you write in src are processed in a “special Jenkins way” and you have no control over this. Depending on the scenario, objects in Groovy don’t behave as you would expect objects to work.

Our view is that putting all pipeline functions in vars is a much more practical approach. While there is no other good way to do inheritance, we wanted to use Jenkins Pipelines the right way, but it has turned out to be far more practical to use vars for global functions.

Practical Strategy
As we know, Jenkins Pipeline’s shared library support allows us to define and develop a set of shared pipeline helpers in a repository and provides a straightforward way of using those functions in a Jenkinsfile. This simple example illustrates how you can provide input to a pipeline with a simple YAML file so you can centralize all of your pipelines into one library.

Directory Structure

You would have the following folder structure in a git repo:

└── vars
    ├── opstreePipeline.groovy
    ├── opstreeStatefulPipeline.groovy
    ├── opstreeStubsPipeline.groovy
    └── pipelineConfig.groovy

Setting up Library in Jenkins Console.

This repo would be configured under Manage Jenkins > Configure System in the Global Pipeline Libraries section. In that section Jenkins requires you to give the library a name, for example opstree-library.

Pipeline.yaml

Let’s assume that the project repository has a pipeline.yaml file in the project root that provides input to the pipeline:

ENVIRONMENT_NAME: test
SERVICE_NAME: opstree-service
DB_PORT: 3079
REDIS_PORT: 6079

Jenkinsfile

Then, to utilize the shared pipeline library, the Jenkinsfile in the root of the project repo would look like:

@Library ('opstree-library@master') _
opstreePipeline()

PipelineConfig.groovy

So how does it all work? First, the following function is called to get all of the configuration data from the pipeline.yaml file:

def call() {
  Map pipelineConfig = readYaml(file: "${WORKSPACE}/pipeline.yaml")
  return pipelineConfig
}

opstreePipeline.groovy

You can see the call to this function in opstreePipeline(), which is called by the Jenkinsfile.

def call() {
    node('Slave1') {

        stage('Checkout') {
            checkout scm
        }

         def p = pipelineConfig()

        stage('Prerequisites'){
            serviceName = sh (
                    script: "echo ${p.SERVICE_NAME}|cut -d '-' -f 1",
                    returnStdout: true
                ).trim()
        }

        stage('Build & Test') {
                sh "mvn --version"
                sh "mvn -Ddb_port=${p.DB_PORT} -Dredis_port=${p.REDIS_PORT} clean install"
        }

        stage ('Push Docker Image') {
            docker.withRegistry('https://registry-opstree.com', 'dockerhub') {
                sh "docker build -t opstree/${p.SERVICE_NAME}:${BUILD_NUMBER} ."
                sh "docker push opstree/${p.SERVICE_NAME}:${BUILD_NUMBER}"
            }
        }

        stage ('Deploy') {
            echo "We are going to deploy ${p.SERVICE_NAME}"
            sh "kubectl set image deployment/${p.SERVICE_NAME} ${p.SERVICE_NAME}=opstree/${p.SERVICE_NAME}:${BUILD_NUMBER} "
            sh "kubectl rollout status deployment/${p.SERVICE_NAME} -n ${p.ENVIRONMENT_NAME} "

        }
    }
}

You can see the logic easily here: the pipeline reads from pipeline.yaml which environment the developer wants to deploy to and which db_port needs to be used.

Benefits

The benefits of this approach are many, some of them are as mentioned below:

  • Developers no longer need to worry about how to write Groovy code.
  • The structure of pipeline.yaml is really flexible; entire data structures can be passed as input to the pipeline.
  • Code redundancy is reduced to a large extent.

Ideally, every Jenkinsfile could simply look like this:

@Library ('opstree-library@master') _
opstreePipeline()

and opstreePipeline() would just read the project type from pipeline.yaml and dynamically run the right function, like opstreeStatefulPipeline() or opstreeStubsPipeline(). Since Pipelines are not exactly Groovy, this isn’t possible, so one drawback is that each project has to have a slightly different-looking Jenkinsfile. A solution is in progress! So, what do you think?

Reference links: 
Image: Google image search (jenkins.io)

Docker-Compose As A Bundled Application

When Docker was released as a new containerization tool, it took the market by storm. With its lightweight images, multi-OS support, and ability to ship containers, its popularity only soared. I have been using it for more than six months now, and I can see why. Hypervisors, another type of virtualization tool, have been hard on hardware, which means they require a lot of resources to run. This increases the cost of running applications far beyond that of containers. This is the problem Docker solved, and hence its popularity. The Docker engine just sits on the host OS and translates the instructions from an application to the underlying OS. It does not need an extra layer of virtual OS, just the binaries and libraries of the application bundled in the image. Right? Now, hold on to that thought.

We have all been working with Docker and, by extension, with docker-compose. Why? Because it makes our job easy: we are spared from typing hundreds of ad-hoc commands in the terminal to set up a slightly (or very) complicated application with certain dependencies. We can just describe it in a `docker-compose.yml` file and our job is done. However, the problem arises when we have to share that compose file:

  • Other users might need to use the file in a different environment, so they will have to edit all the values pertaining to their needs manually and keep separate compose files for each environment.
  • Troubleshooting various configuration issues can be tedious since there is no single place where the configuration of the application is stored; changes have to be made directly in the file.
  • This also makes communication between the Dev and Ops teams trickier than it has to be, resulting in communication gaps and wasted time.

To get a clearer picture of the issue, we can have a look at the image below:

We have a compose file and configuration for each environment, and we make changes according to environment needs in the different compose files, which can be a long manual task depending on the size of the project.


All of this points to the fact that there is no way to bundle the applications that use those efficiently-bundled Docker images. See the irony here? Well, there “was” no way, until there was. Enter ‘docker-app’. This relatively new tool is the answer to packaging docker-compose applications. I came across it when I was myself struggling to re-use a docker-compose application I had written, in another environment. As soon as I read about it, I had to try it, which I did and loved. It made the task much easier, as it provides a template of the compose file and a key-value store for environment-dependent parameters.


Now we have an artefact with the extension ‘.dockerapp’. We can pass configuration values either through the CLI or files (or both), and docker-app will render the compose file according to those values.

Let us now go through an example of how docker-app works. I am going to deploy a dummy application, Spring3hibernate, from the Opstree Github repository in a QA environment and later in PROD by making simple configuration changes.
Installing docker-app is easy; there is just one thing to keep in mind: it can be installed as a plugin in the docker CLI or as a standalone CLI tool itself. I will be installing it as a standalone CLI tool on Linux. If you wish to install it as a plugin to the docker CLI and/or on another OS, visit the Github page: https://github.com/docker/app (also, please visit the Github page for the basics).
Before continuing, please ensure you have the docker CLI and docker-compose installed.
Please follow the steps below to install docker-app:

$ export OSTYPE="$(uname | tr A-Z a-z)"
$ curl -fsSL --output "/tmp/docker-app-${OSTYPE}.tar.gz" \
"https://github.com/docker/app/releases/download/v0.8.0/docker-app-${OSTYPE}.tar.gz"
$ tar xf "/tmp/docker-app-${OSTYPE}.tar.gz" -C /tmp/
$ install -b "/tmp/docker-app-standalone-${OSTYPE}" /usr/local/bin/docker-app

Create a new directory in your home, we’ll call it app home:

$ cd ~
$ mkdir spring3hibernate-app
$ cd spring3hibernate-app/

Now, clone the app from Opstree Github repository. This app needs only mysql as a dependency.

$ git clone https://github.com/opstree/spring3hibernate.git

We need to update the database properties file and the nginx config file with the contents below, respectively:

$ vim ~/spring3hibernate-app/spring3hibernate/src/main/resources/database.properties

Replace the content there with the following:

database.driver=com.mysql.jdbc.Driver
database.url=jdbc:mysql://mysql:3306/employeedb
database.user=admin
database.password=password
hibernate.dialect=org.hibernate.dialect.MySQLDialect
hibernate.show_sql=true
hibernate.hbm2ddl.auto=update
upload.dir=c:/uploads

For nginx conf file:

$ vim ~/spring3hibernate-app/spring3hibernate/nginx/default.conf
server {
    listen       80;
    server_name  localhost;

    location / {
        stub_status on;
        proxy_pass http://springapp1:8080/;

    }
# redirect server error pages to the static page /50x.html
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

}

Move ‘default.conf’ to ~/spring3hibernate-app/spring3hibernate/nginx/conf/qa/, as we have a different conf file for PROD, which goes to ~/spring3hibernate-app/spring3hibernate/nginx/conf/prod/:

upstream s3hbackend {
    server springapp1:8080;
    server springapp2:8080;
}
server {
       listen 80;
       location / {
           stub_status on;
           proxy_pass http://s3hbackend;
       }
  
       # redirect server error pages to the static page /50x.html
       error_page   500 502 503 504  /50x.html;
       location = /50x.html {
           root   /usr/share/nginx/html;
       }

}

This is the configuration for the nginx load balancer; remember this, we’ll use it later. Let’s create our docker-app now. Make sure you are in the app home directory
when executing this command:

$ docker-app init --single-file s3h

This will create a single file named s3h.dockerapp which will look like this: 

# This section contains your application metadata.
# Version of the application
version: 0.1.0
# Name of the application
name: s3h
# A short description of the application
description:
# List of application maintainers with name and email for each
maintainers:
  - name: ubuntu
    email:


---
# This section contains the Compose file that describes your application services.
version: "3.6"
services: {}


---
# This section contains the default values for your application parameters.

{}

As you can see, this file is divided into three parts: metadata, compose, and parameters. They are all in one file because we used the --single-file switch. We can split them into multiple files by running docker-app split in the app home directory, and docker-app merge will put them back into one file. Now, for QA, we have the following configuration in the s3h.dockerapp file:

version: 0.1.0
name: s3h
description:
maintainers:
  - name: atbk5
    email: adeel.ahmad@opstree.com


---
version: "3.7"
services:
  mysql:
    image: mysql:5.7
    container_name: mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${mysql.env.rootpass}
      MYSQL_DATABASE: ${mysql.env.database}
      MYSQL_USER: ${mysql.env.user}
      MYSQL_PASSWORD: ${mysql.env.userpass}
    restart: always
    networks:
      - backend
    volumes:
      - db_data:/var/lib/mysql


  spring1:
    depends_on:
      - mysql
    build:
      context: ./spring3hibernate/
      dockerfile: Dockerfile
    container_name: springapp1
    restart: always
    networks:
      - backend
      - frontend


  spring2:
    depends_on:
      - mysql
    build:
      context: ./spring3hibernate/
      dockerfile: Dockerfile
    container_name: springapp2
    restart: always
    networks:
      - backend
      - frontend
    x-enabled: ${spring.app2}


  nginx:
    depends_on:
      - spring1
    image: nginx:alpine
    container_name: proxy
    restart: always
    networks:
      - frontend
    volumes:
      - ${nginx.conf}:/etc/nginx/conf.d
    ports:
      - ${nginx.port}:80
    x-enabled: ${nginx.status}


networks:
  frontend:
  backend:


volumes:
  db_data:


---
mysql:
  env:
    rootpass: password
    database: employeedb
    user: admin
    userpass: password
nginx:
  conf: /home/ubuntu/dockerApp/spring3hibernate/nginx/conf/qa
  port: 81
  status: true
spring:
  app2: false

As mentioned before, the first part contains the app metadata, the second part contains the actual compose file with lots of variables, and the last part contains the values of those variables. A special mention here is the x-enabled variable: docker-app provides functionality to temporarily disable a service using this variable. Now, try a few commands:

$ docker-app inspect

It will produce a summary of the whole app.

$ docker-app render

It will replace all variables with their values and produce a compose file.

$ docker-app render --set nginx.status="false"

It will render the compose file with the nginx service disabled, removing it from the deployment as well.

$ docker-app render | docker-compose -f - up

It will spin up all the containers according to the rendered compose file. We can then see the application running on port 81 of our machine.

$ docker-app --help

To check out more commands and play around a bit.
At this point, it is better to create two directories in the app home: qa and prod. Create a file in qa named qa-params.yml and another file in prod named prod-params.yml. Copy all parameters from the s3h.dockerapp file above into qa-params.yml (or not). More importantly, copy the parameter changes below into prod-params.yml:

mysql:
  env:
    rootpass: password
    database: employeedb
    user: admin
    userpass: password
nginx:
  conf: /home/ubuntu/dockerApp/spring3hibernate/nginx/conf/prod
  port: 80
  status: true
spring:
  app2: true

We are going to load-balance springapp1 and springapp2 in the PROD environment, since we have enabled springapp2 using the x-enabled parameter. We have also changed the nginx conf bind path to the new conf file and the host port for nginx to 80 (for production). All so easily. Run the command:

$ docker-app render --parameters-file ./prod/prod-params.yml

This command will produce a compose file ready for production deployment. Now run:

$ docker-app render --parameters-file ./prod/prod-params.yml | docker-compose -f - up

And production is deployed… Visit port 80 of your localhost to verify. What’s more exciting is that we can also share our docker-apps through Docker Hub; we can tag the app and push it to our remote registry as an image after logging in:

$ docker login

Provide your username and password for Docker Hub; create an account if you don’t have one yet.

$ docker-app push --tag atbk5/s3h.dockerapp:latest

If we wish to upload additional files as well, we will have to split our project using docker-app split and put the additional files in the directory before pushing. The additional files go along as attachments which can be accessed later.

Conclusion

With the arrival of docker-app, our large, composite, containerized applications can also be shipped and re-used as images. That is cool. But there’s something cooler which we haven’t explored yet: deploying our docker-apps on Kubernetes, with the goal of exploring how far in management, and how optimal in delivery, we can go with our applications. Let’s keep that as the topic for the next blog. Until then, have a nice one. 🙂

Image Source: https://reflectoring.io/externalize-configuration/

Unix File Tree Part-1


Nature has its own way to reach out for perfection and the same should be our instinct to make our creations perfect.

Dennis Ritchie, the father of Unix and an esteemed computer scientist, might have applied the same approach to the Unix directory structure.

Why?

Before getting into the hierarchy of the Unix file tree, let’s discuss why we need it. The need for a directory structure arises when multiple users are handling multiple pieces of software along with their dependent files. Let me explain this with a couple of scenarios.

Scenario-1:

Consider an ideal software or package which requires multiple files to function properly.

  • Binary files
  • Configuration files
  • Log files
  • Data files
  • Metadata files during execution
  • Libraries

 For now, let’s consider there is just one directory and I am keeping all of the dependent files in that directory. 

$ ls
package-1.binary  package-1.conf  package-1.data  package-1.lib  package-1.log  package-1.tmp

Another piece of software comes into the picture with its own dependent files.

$ ls
package-1.binary  package-1.data  package-1.log  package-2.binary  package-2.data  package-2.log
package-1.conf    package-1.lib   package-1.tmp  package-2.conf    package-2.lib   package-2.tmp

Things will get messy while dealing with various software since handling them won’t be easy and will lead to a chaotic situation.

Scenario-2:

Suppose I am a system admin managing all of the software in scenario-1 above. To keep things organized I created different directories for the dependent files:
  • Binary files –> /dir-1
  • Configuration files –> /dir-2
  • Log files –> /dir-3
  • Data files –> /dir-4
  • Meta files –> /dir-5
  • Libraries –> /dir-6

As the workload grows I need more admins to support it, and they won’t be able to relate to my naming convention the way I did.
To escape this situation the creators of Unix decided to follow the philosophy of “Convention over Configuration”.
As the name suggests, it gives priority to a defined convention over an individual’s configuration, so that everyone is on the same page and everyone who comes later follows it too.
The application of this philosophy looked like this:

  • Binary files –> /bin
  • Configuration files –> /etc
  • Log files –> /log
  • Data files –> /var
  • Meta files –> /tmp
  • Libraries –> /lib

This resulted in the Unix file tree:

$ tree -d -L 1
.
├── apps
├── bin
├── boot
├── dev
├── etc
├── home
├── lib
├── lib64
├── lost+found
├── media
├── mnt
├── opt
├── proc
├── root
├── run
├── sbin
├── snap
├── srv
├── sys
├── tmp
├── usr
└── var

22 directories

You might be wondering how Unix figures out where the configuration file is, where the binary is, and where the rest of a piece of software’s files are.
Here comes the role of the PATH variable.


$ echo $PATH
/home/dennis/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
PATH is an environment variable specifying a set of directories where executable programs are located. In general, each executing process or user session has its own PATH setting.
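
A quick way to see the PATH lookup in action is a couple of lines of Python: shutil.which() walks the directories in PATH in order and returns the first match, which is essentially what your shell does when you type a command.

import os
import shutil

# The directories searched, in order, when you type a command.
for directory in os.environ['PATH'].split(os.pathsep):
    print(directory)

# The first match wins; 'ls' usually resolves to /bin/ls or /usr/bin/ls.
print(shutil.which('ls'))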

So now we have a proper understanding of why we need a file tree.
To dive deep into the significance of each directory, stay tuned for Unix File Tree Part-2.

Cheers!

Speeding up Ansible Execution Part 2

MITOGEN


In the previous post, we discussed various ways to reduce ansible-playbook execution time. Those changes were mostly made in the Ansible config file by adding or adjusting certain parameters. But as you may have noticed, those methods were not that effective in certain cases, and while using them we have to be very cautious about the result, as they may affect Ansible's behaviour in one way or the other.


Generally, the main culprit for slow Ansible execution is the way Ansible executes tasks on the hosts: it creates multiple SSH connections and does not fully utilize the available resources. To tackle this problem, Mitogen comes to the rescue!!!

Mitogen is a distributed programming library for Python. The Mitogen extension is a set of plug-ins for Ansible that enable it to operate via Mitogen, vastly improving its performance and enhancing its functional capability.

We all know about the strategies in Ansible: linear, free, and debug. Mitogen is simply defined in the strategy setting of the config file, so it is just another strategy; we are not making any other changes in the Ansible config file, so no other parameter is affected. It only changes the way playbooks are executed on the hosts.

Now coming to the Mitogen installation part: we just have to download the package to a particular location and make some changes in the Ansible config file, as shown below.

[defaults]

strategy_plugins = /path/to/mitogen/ansible_mitogen/plugins/strategy
strategy = mitogen_linear

We have to define the path where we have stored the Mitogen files and set the strategy to “mitogen_linear” under the [defaults] section of the config file, and we are good to go.

Now, after the Mitogen installation, when we run our playbook we will notice a considerable reduction in execution time.

Mitogen is fast for the following reasons:

  • One connection is created per target and system logs aren’t spammed with repeated authentication events.
  • A single network roundtrip is used to execute a step whose code already exists in RAM on the target.
  • Processes are aggressively reused, avoiding the cost of invoking Python and recompiling imports, saving 300-800 ms for every playbook step.
  • Code is cached in the RAM, which further increases the speed.
  • Generally, Ansible repeatedly rewrites and extracts ZIP files to temporary directories on the target hosts; Mitogen avoids these repeated rewrites.

All the above-mentioned features make Ansible run faster.

Mitogen is an extension for Ansible that cuts down its execution time and is very easy to use. I think Mitogen is very underrated and one of a kind, and we should definitely give it a try.

I hope I have explained everything well; any suggestions/queries are highly appreciated.


Thanks !!!

Source:

https://mitogen.networkgenomics.com/ansible_detailed.html