Best Practices for Writing a Shell Script

I am a lazy DevOps Engineer. So whenever I come across the same task more than twice, I automate it. Although we now have many automation tools, the first thing that comes to mind for automation is still a bash or shell script.
After making a lot of mistakes and messy scripts :), I am sharing my experience of writing a good shell script, one that not only looks good but also reduces the chances of error.

The things that every script should have:
     – Minimum effort required for modification.
     – Your program should speak for itself, so you don’t have to explain it.
     – Reusability. Of course, I can’t write the same kind of script or program again and again.

I am a firm believer in learning by doing. So let’s create a problem statement for ourselves and then try to solve it via shell scripting with best practices :). I would like to have solutions in the comment section of this blog.


Problem Statement: Write a shell script to install and uninstall a package (vim) depending on the arguments. The script should tell you if the package is already installed. If no argument is passed, it should print the help page.

So without wasting time, let’s start writing an awesome shell script. Here is a list of things that should always be taken care of while writing a shell script.

Lifespan of Script

If your script is procedural (each subsequent step relies on the previous step completing), do me a favor and add set -e at the start of the script so that the script exits on the first error. For example:

#!/bin/bash
set -e # Script exits on the first failure
set -x # For debugging purpose

Functions

Aha, functions are my favorite part of programming. There is a saying:

Any fool can write code that a computer can understand. Good programmers write code that humans can understand. 

To achieve this, always try to use functions and name them properly, so that anyone can understand a function just by reading its name. Functions also provide reusability and remove code duplication. How? Let’s see:

#!/bin/bash 
install_package() {
   local PACKAGE_NAME="$1"
   yum install "${PACKAGE_NAME}" -y
}
install_package "vim"

Command Sanity

Usually, scripts call other scripts or binaries. When we are dealing with commands, there is a chance that a command will not be available on all systems. So my suggestion is to check for them before proceeding.

#!/bin/bash  
check_package() {
    local PACKAGE_NAME="$1"
    if ! command -v "${PACKAGE_NAME}" > /dev/null 2>&1
    then
           printf "${PACKAGE_NAME} is not installed.\n"
    else
           printf "${PACKAGE_NAME} is already installed.\n"
    fi
}
check_package "vim"

Help Page

If you guys are familiar with Linux, you have certainly noticed that every Linux command has its help page. The same can be true for your script as well. It is really helpful to include a --help flag.

#!/bin/bash  
INITIAL_PARAMS="$*"
help_function() {
   {
        printf "Usage:- ./script <option>\n"
        printf "Options:\n"
        printf " -a ==> Install all base softwares\n"
        printf " -r ==> Remove base softwares\n"
    }
}
arg_checker() {
     if [ "${INITIAL_PARAMS}" == "--help" ]; then
            help_function
     fi
}
arg_checker

Logging

Logging is critical for everyone, whether you are a developer, a sysadmin, or a DevOps engineer. Debugging seems impossible without logs. As we know, most applications generate logs so we can understand what is happening with the application, and the same practice can be implemented for shell scripts as well. For generating logs we have a command-line utility called logger.

#!/bin/bash 
declare DATE="$(date)"
check_file() {
     local FILENAME="$1"
     if ! ls "${FILENAME}" > /dev/null 2>&1
     then
            logger -s "${DATE}: ${FILENAME} doesn't exist"
     else
           logger -s "${DATE}: ${FILENAME} found successfully"
     fi
}
check_file "/etc/passwd"

Variables

I like to name my variables in capital letters with underscores; this way, I will not confuse a variable name with a function name. Never use a, b, c, etc. as variable names; instead, give your variables proper, descriptive names, just like functions.

#!/bin/bash 
# Use declare for declaring global variables
declare GLOBAL_MESSAGE="Hey, I am a global message"
# Use local for declaring local variables inside the function
message_print() {
    local LOCAL_MESSAGE="Hey, I am a local message"
    printf "Global Message:- ${GLOBAL_MESSAGE}\n"
    printf "Local Message:- ${LOCAL_MESSAGE}\n"
}
message_print

Cases

Case statements are also a fascinating part of shell scripting. But the question is: when should you use them? In my opinion, if your shell program provides more than one piece of functionality based on its arguments, then you should go for a case statement. For example, if your shell utility provides the capability of installing and uninstalling software.

#!/bin/bash  
print_message() {
    local MESSAGE="$1"
    echo "${MESSAGE}"
}
case "$1" in
   -i|--input)
      print_message "Input Message"
      ;;
   -o|--output)
        print_message "Output Message"
        ;;
   --debug)
       print_message "Debug Message"
       ;;
    *)
      print_message "Wrong Input"
      ;;
esac
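To see how these pieces fit together, here is a minimal sketch of one possible solution to our problem statement. It assumes a yum-based system, as in the earlier examples; the script name and option letters are only illustrative, so treat it as a starting point rather than the final answer.

#!/bin/bash
set -e # Script exits on the first failure

declare PACKAGE_NAME="vim"

help_function() {
    printf "Usage:- ./package_manager.sh <option>\n"
    printf "Options:\n"
    printf " -i ==> Install %s\n" "${PACKAGE_NAME}"
    printf " -r ==> Remove %s\n" "${PACKAGE_NAME}"
}

is_installed() {
    # Succeeds if the command is already available on this system
    command -v "${PACKAGE_NAME}" > /dev/null 2>&1
}

install_package() {
    if is_installed; then
        printf "%s is already installed.\n" "${PACKAGE_NAME}"
    else
        yum install "${PACKAGE_NAME}" -y
    fi
}

remove_package() {
    if is_installed; then
        yum remove "${PACKAGE_NAME}" -y
    else
        printf "%s is not installed.\n" "${PACKAGE_NAME}"
    fi
}

case "$1" in
   -i) install_package ;;
   -r) remove_package ;;
   *)  help_function ;;
esac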

In this blog, we have covered functions, variables, the lifespan of a script, logging, the help page, and command sanity. I hope these topics help you in your daily life while writing shell scripts. If you have any feedback, please let me know through the comments.
Cheers Till the next Time!!!!

Can you integrate a GitHub Webhook with Privately hosted Jenkins? No? Think again

Introduction

One of the most basic requirements of a CI implementation using Jenkins is to automatically trigger a Jenkins job after every commit. As you are already aware, there are two ways in which a Jenkins job can be triggered in an automated fashion:

  • Pull | PollSCM
  • Push | Webhook

It is a no-brainer that a push-based trigger is the most efficient way of triggering a Jenkins job; otherwise you are unnecessarily hogging your resources by polling. One of the hurdles in implementing a push-based trigger is that your VCS and Jenkins server should be in the same network, or in simple terms, they should be able to talk to each other.

In a typical CI setup, there is a SaaS VCS, i.e. GitHub/GitLab, and a privately hosted Jenkins server, which makes push-based triggering of a Jenkins job seem impossible. Till a few days back I was under the same impression, until I found this awesome blog that talks about how you can integrate a webhook with your private Jenkins server.

In this blog, I’ll try to explain how I implemented Webhook Relay. Most importantly, the reference blog was about integrating Webhook Relay with GitHub; with GitLab there were still some unexplored areas, and I faced some challenges while doing the integration. This motivated me to write a blog so that people have a ready reference on how to integrate GitLab with Webhook Relay.

Overall Workflow

Step 1: Download WebHook Relay Agent on the local system

Copy and execute the command

curl -sSL https://storage.googleapis.com/webhookrelay/downloads/relay-linux-amd64 > relay && chmod +wx relay && sudo mv relay /usr/local/bin

Note: Webhook Relay and the Webhook Relay Agent are different. Webhook Relay runs on a public IP and is triggered by GitLab, while the Webhook Relay Agent is a service that gets triggered by Webhook Relay.

Step 2: Create a Webhook Relay Account

After successfully signing up, we will land on the Webhook Relay home page.

Step 3: Setting up the Webhook Relay Agent

We have to create an access token. After navigating to Access Tokens, click the Create Token button. We are then provided with a key and secret pair.
Copy and execute:

relay login -k token-key -s token-secret

If it prints a success message, our Webhook Relay Agent is successfully set up.

Step 4: Create a GitLab Repository

We will keep our repository public to keep things simple and understandable. Let’s say our GitLab repository’s name is WebhookProject.

Step 5: Install the GitLab and GitLab Hook Plugins

Go to Manage Jenkins → Manage Plugins → Available, search for the plugins, and install them (a CLI alternative is sketched below).
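If you prefer the command line over the UI, the plugins can usually also be installed with the Jenkins CLI. Treat this as a sketch: gitlab-plugin is the usual ID for the GitLab Plugin, but confirm the exact plugin IDs (especially for the GitLab Hook plugin) in your plugin manager, and adjust the Jenkins URL and credentials to your setup.

# Fetch the CLI jar from your Jenkins instance, then install the plugin and restart
wget http://localhost:8080/jnlpJars/jenkins-cli.jar
java -jar jenkins-cli.jar -s http://localhost:8080/ -auth admin:<api-token> install-plugin gitlab-plugin -restart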
 

Step 6: Create Jenkins Job

Configure the job: add the GitLab repository link.

Now we’ll choose the build trigger option.

Save the job.

Step 7: Connecting GitLab Repository, Webhook Relay, and Webhook Relay Agent

The final and most important step is to connect the overall flow.

Start forwarding Webhooks to Jenkins

Open a terminal and type the command:

relay forward --bucket gitlab-jenkins http://localhost:8080/project/webhook-gitlab-test

Note: The bucket name can be anything.

Note: Do not stop this process with Ctrl+C. Open a new terminal or a new tab to commit to GitLab.

The most critical part of the workflow is the link generated by the Webhook Relay Agent. Copy this link and paste it into the GitLab repository (WebhookProject) → Settings → Integrations.

Paste the link.
For the sake of simplicity, uncheck Enable SSL verification and click the Add webhook button.
Until now, all the major configuration has been done. Now clone the GitLab repository and push a commit to the remote repository.
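For example, assuming you have push access to the repository (the clone URL below is a placeholder for your own, and master may be your default branch), an empty commit is an easy way to fire the webhook without changing any files:

git clone https://gitlab.com/<your-username>/WebhookProject.git
cd WebhookProject
git commit --allow-empty -m "Trigger Jenkins build via webhook"
git push origin master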
Go to the Jenkins job and you should see a build triggered by the GitLab webhook.
To see the GitLab webhook delivery logs, go to:
GitLab Repository → Settings → Integrations → webhook → Edit

To see the logs of the Webhook Relay Agent triggering Jenkins, go to:
Webhook Relay UI page → Relay Logs.

So now you know how to do webhook integration between your VCS and Jenkins even when they are not directly reachable from each other.
Can you integrate a GitHub Webhook with Privately hosted Jenkins? Yes
Cheers Till Next Time!!!!

Log Parsing of Windows Servers on Instance Termination

As we all know, logs are a critical part of any system; they give you deep insight into your application, what your system is doing, and what caused an error. Depending on how logging is configured, logs may contain transaction history, timestamps, amounts debited/credited from a client’s account, and a lot more.

In an enterprise-level application, your system spans multiple hosts, and managing logs across those hosts can be complicated. Debugging an application error across hundreds of log files on hundreds of servers is very time consuming, complicated, and not the right approach, so it is always better to move the logs to a centralized location.

 
Lately at my company I faced a situation which I assume is a very common scenario in Amazon’s cloud, where we might have to retain application logs from multiple instances behind an Auto Scaling group. Let’s take an example for better understanding.
 
Suppose your application is configured to log into the C:\Source\Application\web\logs directory. The application has variable incoming traffic; sometimes it receives requests that can be handled by 2 servers, while at other times it may require 20 servers to handle the traffic.
 
When there is a spike in traffic, the EC2 Auto Scaling group uses this configuration and scales from 2 servers to many (according to the ASG policy). During this phase, the application running on the newly launched EC2 instances also logs into C:\Source\Application\web\logs. But when there’s a drop in traffic, the ASG triggers a scale-down policy, resulting in termination of instances, which also results in deletion of all the log files inside the instances launched via the ASG during the high-traffic period.

Faced a similar situation? No worries; in order to retain the logs, I figured out a solution.
Here, in this blog, the motive is to sync the logs from dying instances at the time of their termination. This will be done using AWS services; the goal is to trigger a PowerShell script on the instance using SSM, which syncs the logs to an S3 bucket with sufficient information about the dying instance. For this we will require two things:

1) Configuring SSM agent to be able to talk to Ec2 Instances
2) Ec2 Instances being able to write to S3 Buckets

For the tutorial we will be using Microsoft Windows Server 2012 R2 Base with the AMI ID: ami-0f7af6e605e2d2db5

A Blueprint of the scenario to be understood below:

1) Configuring SSM agent to be able to talk to Ec2 Instances
 
SSM Agent is installed by default on Windows Server 2016 instances and instances created from Windows Server 2003-2012 R2 AMIs published in November 2016 or later. Windows AMIs published before November 2016 use the EC2Config service to process requests and configure instances.
If your instance is a Windows Server 2003-2012 R2 instance created before November 2016, then EC2Config must be upgraded on the existing instances to use the latest version of EC2Config. By using the latest EC2Config installer, you install SSM Agent side-by-side with EC2Config. This side-by-side version of SSM Agent is compatible with your instances created from earlier Windows AMIs and enables you to use SSM features published after November 2016.
 
This simple script can be used to update EC2Config and then layer it with the latest version of the SSM agent. It also installs the AWS CLI, which is used to push log archives to S3.
 

#ScriptBlock

if(!(Test-Path -Path C:\Tmp )){
mkdir C:\Tmp
}
cd C:/Tmp
wget https://s3.ap-south-1.amazonaws.com/asg-termination-logs/Ec2Install.exe -OutFile Ec2Config.exe
wget https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/windows_amd64/AmazonSSMAgentSetup.exe -OutFile ssmagent.exe
wget https://s3.amazonaws.com/aws-cli/AWSCLI64PY3.msi -OutFile awscli.msi
wget https://s3.amazonaws.com/aws-cli/AWSCLISetup.exe -OutFile awscli.exe
Invoke-Command -ScriptBlock {C:\Tmp\Ec2Config.exe /Ec /S /v/qn }
sleep 20
Invoke-Command -ScriptBlock {C:\Tmp\awscli.exe /Ec /S /v/qn }
sleep 20
Invoke-Command -ScriptBlock {C:\Tmp\ssmagent.exe /Ec /S /v/qn }
sleep 10
Restart-Service AmazonSSMAgent
Remove-Item C:\Tmp -Recurse -Force

 
An IAM role is required for SSM to communicate with the EC2 instance:
IAM instance role: Verify that the instance is configured with an AWS Identity and Access Management (IAM) role that enables the instance to communicate with the Systems Manager API.
 
Add instance profile permissions for Systems Manager managed instances to an existing role
  • Open the IAM console at https://console.aws.amazon.com/iam/.
  • In the navigation pane, choose Roles, and then choose the existing role you want to associate with an instance profile for Systems Manager operations.
  • On the Permissions tab, choose Attach policy.
  • On the Attach policy page, select the check box next to AmazonEC2RoleforSSM, and then choose Attach policy.
Now, navigate to Roles and select your role.
That should look like:
 
 
2) Ec2 Instances being able to write to S3 Buckets
 
An IAM role is required for EC2 to be able to write to S3:
 
IAM instance role: Verify that the instance is configured with an AWS Identity and Access Management (IAM) role that enables the instance to communicate with the S3 API.
 
Add instance profile permissions for S3 access to the existing role (a CLI alternative for both policies is sketched below):
  • Open the IAM console at https://console.aws.amazon.com/iam/.
  • In the navigation pane, choose Roles, and then choose the existing role you want to associate with an instance profile for Systems Manager operations.
  • On the Permissions tab, choose Attach policy.
  • On the Attach policy page, select the check box next to AmazonS3FullAccess, and then choose Attach policy.
That should look like:
 
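If you prefer the command line, the same two managed policies from the sections above can be attached with the AWS CLI. This is only a sketch; replace <your-instance-role> with the name of the role used by your instance profile.

aws iam attach-role-policy --role-name <your-instance-role> --policy-arn arn:aws:iam::aws:policy/AmazonEC2RoleforSSM
aws iam attach-role-policy --role-name <your-instance-role> --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess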
 

This PowerShell script, saved as C:\Scripts\termination.ps1, will pick up log files from:

$SourcePathWeb

and will output the zipped logs into:

$DestFileWeb

with an IP and date stamp to recognize and identify which instance the logs originated from later.
Make sure that the S3 bucket name, the --region, and the source of the log files are changed according to your preferences.
 

#ScriptBlock

$Date=Get-Date -Format yyyy-MM-dd
$InstanceName="TerminationEc2"
$LocalIP=curl http://169.254.169.254/latest/meta-data/local-ipv4 -UseBasicParsing

if((Test-Path -Path C:\Users\Administrator\workdir\$InstanceName-$LocalIP-$Date/$Date )){
Remove-Item "C:\Users\Administrator\workdir\$InstanceName-$LocalIP-$Date/$Date" -Force -Recurse
}

New-Item -path "C:\Users\Administrator\workdir\$InstanceName-$LocalIP-$Date/$Date" -type directory
$SourcePathWeb="C:\Source\Application\web\logs"
$DestFileWeb="C:\Users\Administrator\workdir\$InstanceName-$LocalIP-$Date/$Date/logs.zip"

Add-Type -assembly "system.io.compression.filesystem"
[io.compression.zipfile]::CreateFromDirectory($SourcePathWeb, $DestFileWeb)

C:\'Program Files'\Amazon\AWSCLI\bin\aws.cmd s3 cp C:\Users\Administrator\workdir s3://terminationec2 --recursive --exclude "*.ok" --include "*" --region us-east-1

If the above settings are done correctly, then running the script manually should produce output suggesting success:



Check your S3 bucket to see whether the logs have synced there. Now, because the focus of this blog is to trigger a PowerShell script on the instance using SSM, which syncs the logs to the S3 bucket, we will try running the script through SSM > Run Command.

Select one of the instances having the above script and configuration, and run the command. The output should be pleasing.
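The console's Run Command page is the easiest way to do this, but roughly the same thing can be done from the AWS CLI. This is only a sketch; the instance ID below is a placeholder for one of your own instances.

aws ssm send-command \
  --document-name "AWS-RunPowerShellScript" \
  --targets "Key=instanceids,Values=i-0123456789abcdef0" \
  --parameters commands="C:\Scripts\termination.ps1" \
  --region us-east-1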


The AMI used by the ASG should have the above configuration (this can be achieved by creating an AMI from an EC2 instance having the above config and then adding it to the launch configuration of the ASG). The ASG we have here for the tutorial is named after my last name: “group_kaien”.

Now, the last and most important step is configuring CloudWatch > Events > Rules.

Navigate to CloudWatch > Events > Rules and create a rule.
 

This results in the following JSON config:

{
  "source": [
    "aws.autoscaling"
  ],
  "detail-type": [
    "EC2 Instance Terminate Successful",
    "EC2 Instance-terminate Lifecycle Action"
  ],
  "detail": {
    "AutoScalingGroupName": [
      "group_kaien"
    ]
  }
}
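If you prefer scripting the rule instead of clicking through the console, saving the pattern above to a file and creating the rule with the AWS CLI should look roughly like the sketch below (the rule name and file name are placeholders). The SSM Run Command target is easier to attach from the console, as described next.

aws events put-rule \
  --name sync-logs-on-termination \
  --event-pattern file://termination-pattern.json \
  --region us-east-1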
 
On the right side, under Targets, select SSM Run Command and configure:
  • Document: AWS-RunPowerShellScript
  • Target key: InstanceIds or tag
  • Target values: the instance IDs or tag values to match
Configure parameters:
  • Commands: .\termination.ps1
  • WorkingDirectory: C:\Scripts
  • ExecutionTimeout: 3600 (the default)
This makes sure that when a termination event happens, the PowerShell script runs and syncs the logs to S3. This is what our configuration looks like:
 
 

For more on setting up CloudWatch Events, refer to:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CWE_GettingStarted.html

Wait for the Auto Scaling policies to run so that new instances are created and terminated with the above configuration. The terminating instances will sync their logs to S3 before they are fully terminated. Here’s the output on S3 for me after a scale-down activity was done.
 
 

Conclusion

With the above, we have learned how to export logs to S3 automatically from a dying instance, with the correct date/time stamp, as implemented in the termination.ps1 script.
Hence, fulfilling the scope of the blog.
Stay tuned for more.

Prometheus Overview and Setup

Overview

Prometheus is an open-source monitoring solution that gathers time-series based numerical data. It is a project that was started at SoundCloud by ex-Google employees.

To monitor your services and infrastructure with Prometheus, your service needs to expose an endpoint in the form of a port or URL, for example localhost:9090. The endpoint is an HTTP interface that exposes the metrics.

Some platforms, such as Kubernetes and SkyDNS, are directly instrumented for Prometheus, which means you don’t have to install any kind of exporter to monitor them; Prometheus can monitor them directly.

One of the best things about Prometheus is that it uses a time-series database (TSDB), so you can use mathematical operations and queries to analyze the data. Prometheus ships with its own embedded time-series storage rather than an external database, and keeps the monitoring data on local disk.

Pre-requisites

  • A CentOS 7 or Ubuntu VM
  • A non-root sudo user, preferably one named prometheus

Installing Prometheus Server

First, create a new directory to store all the files you download in this tutorial and move to it.

mkdir /opt/prometheus-setup
cd /opt/prometheus-setup
Create a user named “prometheus”

useradd prometheus

Use wget to download the latest build of the Prometheus server and time-series database from GitHub.


wget https://github.com/prometheus/prometheus/releases/download/v2.0.0/prometheus-2.0.0.linux-amd64.tar.gz
The Prometheus monitoring system consists of several components, each of which needs to be installed separately.

Use tar to extract prometheus-2.0.0.linux-amd64.tar.gz:

tar -xvzf prometheus-2.0.0.linux-amd64.tar.gz

Place the executables somewhere in your PATH, or add their location to your PATH for easy access.

mv prometheus-2.0.0.linux-amd64  prometheus
sudo mv  prometheus/prometheus  /usr/bin/
sudo chown prometheus:prometheus /usr/bin/prometheus
sudo chown -R prometheus:prometheus /opt/prometheus-setup/
mkdir /etc/prometheus
mv prometheus/prometheus.yml /etc/prometheus/
sudo chown -R prometheus:prometheus /etc/prometheus/
prometheus --version
  

You should see the following message on your screen:

  prometheus,       version 2.0.0 (branch: HEAD, revision: 0a74f98628a0463dddc90528220c94de5032d1a0)
  build user:       root@615b82cb36b6
  build date:       20171108-07:11:59
  go version:       go1.9.2
Create a service for Prometheus 

sudo vi /etc/systemd/system/prometheus.service
[Unit]
Description=Prometheus

[Service]
User=prometheus
ExecStart=/usr/bin/prometheus --config.file /etc/prometheus/prometheus.yml --storage.tsdb.path /opt/prometheus-setup/

[Install]
WantedBy=multi-user.target
Reload systemd, then start and enable the service:

systemctl daemon-reload

systemctl start prometheus

systemctl enable prometheus

Installing Node Exporter


Prometheus was developed for the purpose of monitoring web services. In order to monitor the metrics of your server, you should install a tool called Node Exporter. Node Exporter, as its name suggests, exports lots of metrics (such as disk I/O statistics, CPU load, memory usage, network statistics, and more) in a format Prometheus understands. Enter the /opt/prometheus-setup directory and use wget to download the latest build of Node Exporter, which is available on GitHub.

Node Exporter is a binary written in Go that monitors resources such as CPU, RAM, and the filesystem.

wget https://github.com/prometheus/node_exporter/releases/download/v0.15.1/node_exporter-0.15.1.linux-amd64.tar.gz

You can now use the tar command to extract node_exporter-0.15.1.linux-amd64.tar.gz:

tar -xvzf node_exporter-0.15.1.linux-amd64.tar.gz

mv node_exporter-0.15.1.linux-amd64 node-exporter

Perform this action:

mv node-exporter/node_exporter /usr/bin/

Running Node Exporter as a Service

Create a user named “prometheus” on the machine on which you are going to create the Node Exporter service.

useradd prometheus

To make it easy to start and stop the Node Exporter, let us now convert it into a service. Use vi or any other text editor to create a unit configuration file called node_exporter.service.


sudo vi /etc/systemd/system/node_exporter.service
This file should contain the path of the node_exporter executable, and also specify which user should run the executable. Accordingly, add the following code:

[Unit]
Description=Node Exporter

[Service]
User=prometheus
ExecStart=/usr/bin/node_exporter

[Install]
WantedBy=default.target

Save the file and exit the text editor. Reload systemd so that it reads the configuration file you just created.


sudo systemctl daemon-reload
At this point, Node Exporter is available as a service which can be managed using the systemctl command. Enable it so that it starts automatically at boot time.

sudo systemctl enable node_exporter.service
You can now either reboot your server or use the following command to start the service manually:
sudo systemctl start node_exporter.service
Once it starts, use a browser to view Node Exporter’s web interface, which is available at http://your_server_ip:9100/metrics. You should see a page with a lot of plain-text metrics.
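If the server has no desktop browser handy, a quick check from the shell works just as well; this simply fetches the first few lines of the same metrics page:

curl -s http://localhost:9100/metrics | head -n 20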

Starting Prometheus Server with a new node

Before you start Prometheus, you must first edit a configuration file for it called prometheus.yml.

vim /etc/prometheus/prometheus.yml
Copy the following code into the file.

# Global configuration: applies to all jobs in this file
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. The default is every 1 minute. scrape_interval controls how often data is scraped from the exporters.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute. The evaluation interval controls how often alerting/recording rules are re-checked.

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'. Here we would define our rules file paths.
#rule_files:
#  - "node_rules.yml"
#  - "db_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape. In the scrape config we define our job definitions.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'node-exporter'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    # targets are the machines on which the exporter is running and exposing data on a particular port.
    static_configs:
      - targets: ['localhost:9100']
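Before restarting Prometheus, it can be worth validating the file with promtool, which ships in the tarball we extracted earlier (so it should still be sitting in /opt/prometheus-setup/prometheus/). The exact subcommand may vary between versions; for 2.x it is:

/opt/prometheus-setup/prometheus/promtool check config /etc/prometheus/prometheus.yml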
After adding the configuration to prometheus.yml, we should restart the service:

systemctl restart prometheus

This configuration creates a scrape_configs section and defines a job called node-exporter. It includes the address of your Node Exporter’s web interface in its array of targets. The scrape_interval is set to 15 seconds so that Prometheus scrapes the metrics once every fifteen seconds. You could name the job anything you want, but calling it “node” allows you to use the default console templates of Node Exporter.
Use a browser to visit Prometheus’s homepage, available at http://your_server_ip:9090. You’ll see the Prometheus homepage. Visit http://your_server_ip:9090/consoles/node.html to access the Node Console and click on your server, localhost:9100, to view its metrics.
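You can also confirm that the node-exporter target is up from the command line using Prometheus's HTTP query API; the up metric should report a value of 1 for every healthy target:

curl 'http://localhost:9090/api/v1/query?query=up'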

Logstash Timestamp

Introduction

A few days back I encountered a simple but painful issue. I am using ELK to parse my application logs and generate some meaningful views. The issue was that Logstash inserts my logs into Elasticsearch with the current timestamp, instead of the actual time of log generation.
This makes it a mess to generate graphs with the correct time values in Kibana.
So I dug around and found a way to overcome this concern. I made some changes in my Logstash configuration to replace the default Logstash timestamp with the actual timestamp of my logs.

Logstash Filter

Add the following piece of code to the filter plugin section of Logstash’s configuration file, and it will make Logstash insert logs into Elasticsearch with the actual timestamp of your logs, instead of Logstash’s own timestamp (the current time).
 
date {
  locale => "en"
  timezone => "GMT"
  match => [ "timestamp", "yyyy-MM-dd HH:mm:ss +0000" ]
}
In my case, the timezone of my logs was GMT. You need to replace the pattern “yyyy-MM-dd HH:mm:ss +0000” with the pattern corresponding to the actual timestamp format of your logs (note that in this pattern MM means month and mm means minutes).

Description

The date plugin will override Logstash’s timestamp with the timestamp of your logs. Now you can easily adjust the timezone in Kibana and it will show your logs at the correct time.
(Note: Kibana adjusts UTC time to your browser’s timezone.)

Chef Start here with ease..


Introduction

Until I discovered cooking, I was never really interested in anything. Julia Child

Chef, a leader in the automation industry, has many intriguing facets and capabilities. Before introducing the potential of Chef, we cannot ignore its relevance to DevOps practices. Chef can take care of server automation, manage your infrastructure environment, and continuously deliver your application.


Motive behind this series

With this blog series, we will familiarize you with the concepts of Chef and will try to make you comfortable with our hands-on blogs. This series contains 15 blogs in a row which will enhance your knowledge and build your faith in Chef.

Always Pre-Heat The Oven Before Putting The Meat In !!

Prerequisites

For all the upcoming blogs we presume that you have a basic understanding of Git, Docker, Vagrant, and Linux. This blog series is written with CentOS as the platform, although you can apply the steps on Ubuntu with some minor changes.


We are going to use our public Git repository for all the blogs in this series. We will be using a CentOS 7 Vagrant box to spin up our testing environment.


We are going to follow a single problem statement in all our blogs to maintain uniformity and avoid ambiguity: we are going to install Nginx using Chef and deploy two virtual hosts (blog.opstree.com, chef.opstree.com) with it.


Blogs in this series

  1. In this blog we describe Nginx and manually set up Nginx as per the problem statement, and also create the two virtual hosts (blog.opstree.com, chef.opstree.com).
  2. Here we take some examples of resources such as package, git, file, and service and put our hands to work with chef-apply. We perform some simple tasks using Chef resources.
  3. This blog provides the theoretical concepts behind Chef resources. In this article, resources and their attributes are elaborated.
  4. Chef recipes are in consideration for this edition. Create your first recipe and apply it with Chef. The complete doctrine behind Chef recipes, with simplified examples.
  5. The walls of the Chef house, the cookbook, written from scratch with a step-by-step explanation. Setup of Nginx and proxy implementation with a sample cookbook.
  6. This blog furnishes the entire theoretical background on cookbooks. It includes command-line cookbook generation and handling, and a one-by-one description of the complete directory structure of a cookbook.
  7. Installation of Chef Kitchen. Testing of our Nginx cookbook in different environments using Docker containers. Create, converge, verify, and destroy a node with Kitchen.
  8. Chef-Kitchen, Chef's diagnosis center: the theory behind Chef Kitchen and the complete Kitchen cycle. Within this article, an elaborated view of the .kitchen.yml file and the .kitchen folder is provided.
  9. Chef Foodcritic && Chef Rubocop, handle it casually: why the Chef lint tools Foodcritic and RuboCop are needed. Theory, setup, and practice exercises for Foodcritic and RuboCop.
  10. Chef-Databags, carry all at once: introduction to data bags and their need. Division of code and data with data bags. Data bags implementation with chef-solo. Setup of a MySQL password with data bags.
  11. Chef-Roles, club everybody: requirement and implementation of Chef roles. Clubbing of multiple nodes with Chef roles. Complete web stack (web server, proxy server, and database) setup with roles.
  12. Chef-Environment, organized wisely: Chef environments for better management of the needs of an organization. A complete organizational view with Chef to set up different environments. Handle environments with chef-knife.
  13. Chef Server-Client Setup: complete setup of Chef in client-server mode. Use of Vagrant provisioning only, to spin up the Chef server, Chef client, and workstation.
  14. Collaboration of Client, Server, and Workstations: how the Chef server, clients, and workstations work together to automate a complete infrastructure. The Chef server web interface.
  15. Chef Server-Client, work quietly: kick off working with the workstation and chef-client. Install Nginx and set up proxies with the Nginx cookbook on the client node.

Setup Jenkins using Ansible

In this document, I’ll walk you through how you can set up Jenkins using Ansible.

Prerequisites

  • OS: Ubuntu (at least two machines required in production)
  • First machine for the Ansible installation
  • Second machine where we will install the Jenkins server
  • You should have a basic understanding of the Ansible workflow.
Note: You should have passwordless SSH login enabled to the second machine. Use this link:
http://www.linuxproblem.org/art_9.html

Ansible Installation
Before starting to install Jenkins using Ansible, you need to have Ansible installed on your system.

$ curl https://raw.githubusercontent.com/OpsTree/AnsiblePOC/alok/scripts/Setup/setup_ansible.sh | sudo bash

Setup Jenkins using Ansible
Install the Jenkins Ansible role
Once we have Ansible installed on our system, we can start installing Jenkins using Ansible. To do this we will use an already available Ansible role to set up Jenkins:

$ ansible-galaxy install geerlingguy.jenkins

To know more about the Jenkins role, check this link: https://galaxy.ansible.com/detail#/role/440

The default directory path for Ansible roles is /etc/ansible/roles.
Make the Ansible playbook file

Now the next step is to use the installed Jenkins role to install Jenkins. For this purpose we will create a playbook and a hosts file with the content below.

$ cd ~/MyPlaybook/jenkins

Create a file named hosts and add the content below:

[jenkins_hosts]
192.168.33.15
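Before running the playbook, it is worth confirming that Ansible can actually reach the host over SSH with a quick ad-hoc ping (this assumes the passwordless login mentioned in the prerequisites is already in place):

$ ansible -i hosts jenkins_hosts -m ping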

Next, create a file named site.yml and add the content below:

- hosts: jenkins_hosts
  roles:
    - { role: geerlingguy.jenkins }

So the configuration files are done; the next step is to run the ansible-playbook command:

$ ansible-playbook -i hosts site.yml

Now that Jenkins is running, go to http://192.168.33.15:8080. You’ll be welcome by the default Jenkins screen.