Ansible directory structure (Default vs Vars)

 


Ansible is one of the most prominent tools among DevOps engineers for managing software configuration because of its ease of use and minimal dependencies. The highlight of this tool is Ansible roles, which package together the various pieces of functionality we need for software configuration.

As we know, Ansible roles have a well-defined directory structure that looks something like this.
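For reference, a typical role skeleton, roughly as generated by "ansible-galaxy init myrole" (the role name is a placeholder), looks like this:

myrole/
├── defaults/
│   └── main.yml
├── files/
├── handlers/
│   └── main.yml
├── meta/
│   └── main.yml
├── tasks/
│   └── main.yml
├── templates/
├── tests/
│   ├── inventory
│   └── test.yml
└── vars/
    └── main.yml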


Speeding up Ansible Execution Part 2

MITOGEN


In the previous post, we discussed various ways to reduce ansible-playbook execution time. Those changes were mostly made in the Ansible config file, by adding or adjusting certain parameters. But as you may have noticed, those methods were not that effective in certain cases, and while using them we have to be very cautious, as they may affect Ansible's performance in one way or another.


Generally, the main culprit behind slow Ansible execution is the way Ansible runs on the hosts: it creates multiple SSH connections and does not fully utilize the available resources. To tackle this problem, Mitogen comes to the rescue!

Mitogen is a distributed programming library for Python. The Mitogen extension is a set of plug-ins for Ansible that enable it to operate via Mitogen, vastly improving its performance and enhancing its functional capability.

We all know about the strategies in Ansible: linear, free, and debug. Mitogen is simply defined in the strategy setting of the config file, so it is just another strategy; we are not making any other changes to the Ansible config file, so no other parameter is affected. It only changes the way playbooks are executed on the hosts.

Now, coming to the Mitogen installation part: we just have to download this package to a particular location and make some changes in the Ansible config file, as shown below.

[defaults]

strategy_plugins = /path/to/mitogen/ansible_mitogen/plugins/strategy
strategy = mitogen_linear

We have to define the path where we have stored the Mitogen files and set the strategy to “mitogen_linear” under the [defaults] section of the config file, and we are good to go.
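As a rough sketch, one possible way to fetch Mitogen and locate the plug-in directory is via pip; this assumes the PyPI package also ships the ansible_mitogen plug-ins, and the lookup one-liner below is only illustrative:

# Install Mitogen from PyPI
$ pip install mitogen

# Print the directory to use for strategy_plugins in ansible.cfg
$ python -c "import ansible_mitogen.plugins.strategy as s, os; print(os.path.dirname(s.__file__))"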

Now, after the Mitogen installation, when we run our playbook, we will notice a noticeable reduction in execution time.

Mitogen is fast for the following reasons:

  • One connection is created per target and system logs aren’t spammed with repeated authentication events.
  • A single network roundtrip is used to execute a step whose code already exists in RAM on the target.
  • Processes are aggressively reused, avoiding the cost of invoking Python and recompiling imports, saving 300-800 ms for every playbook step.
  • Code is cached in the RAM, which further increases the speed.
  • Generally, Ansible repeatedly rewrites and extracts ZIP files to temporary directories on the target hosts; Mitogen reduces these rewrites as well.

All the above-mentioned features make Ansible run faster.

Mitogen is an extension for Ansible that reduces execution time and is very easy to use. I think Mitogen is very underrated and one of a kind, and we should definitely give it a try.

I hope I have explained everything well; any suggestions/queries are highly appreciated.


Thanks !!!

Source:

https://mitogen.networkgenomics.com/ansible_detailed.html

Speeding up Ansible Execution Part 1

Knowledge of at least one SCM tool is a must for any DevOps engineer, and Ansible is one of the popular tools in this category. We are all aware of the ease that Ansible provides, whether it is infra provisioning, orchestration, or application deployment.
The reason for the vast popularity of Ansible is the long list of modules it provides to support any level of automation; moreover, it also gives users the flexibility to create their own modules as per their requirements.
But the purpose of this blog is not to list the features that Ansible provides, rather to show how we can speed up playbook execution. As a beginner, executing Ansible feels very easy and seems to save a lot of time, but as you dive deeper you will find that running Ansible playbooks can engage you for a considerable amount of time.
There are a lot of articles available on the internet on how to speed up Ansible execution, so I have decided to sum those articles up in this blog. With the following methods, we can reduce execution time without compromising the overall performance of Ansible.
Before starting, I request you to make a small change in your Ansible configuration file (ansible.cfg); this change will help you track the time taken for the playbook execution, and it also lists the time taken by each task.
Just add these lines to your ansible.cfg file under the [defaults] section:

[defaults]

callback_whitelist = profile_tasks

Forks

When you run your playbooks on multiple hosts, you may have noticed that the number of servers on which the playbook executes simultaneously is 5 by default. You can increase this number inside the ansible.cfg file:
# ansible.cfg

[defaults]
forks = 10


or with a command-line argument to ansible-playbook using the -f or --forks option. We can increase or decrease this value as per our requirement.
While using a higher fork value, keep the number of "local_action" or delegated steps limited, as they will affect the performance of the Ansible control node.
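For example, a quick way to raise the fork count for a single run from the command line (the inventory and playbook names are placeholders):

$ ansible-playbook -i hosts site.yml --forks 20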

Async

In Ansible, each task blocks the playbook, meaning the connections stay open until the task is done on each node, which in some cases takes a lot of time. Here we can use "async" for those particular tasks; with its help, Ansible will automatically move on to another task without waiting for the task to finish on each node.
To launch a task asynchronously, we need to specify its maximum runtime and how frequently we would like to poll for status; the default poll interval is 10 seconds.
tasks:

  - name: "name of the task"
    command: "command we want to execute"
    async: 40
    poll: 15
The only condition is that the subsequent tasks must not have a dependency on this task.

Free Strategy 

When running Ansible playbooks, you might have noticed that Ansible runs every task on each node one by one; it will not move to the next task until the current task is completed on every node, which in some cases takes a lot of time.
By default, the strategy is set to "linear"; we can set it to "free".

- hosts: "hosts/groups"
  name: "name of the playbook"
  strategy: free


It will run the playbook on each host independently, without waiting for the other nodes to complete.
Fact gathering is enabled by default when executing a playbook, but sometimes we don't need it.
In those cases, we can disable fact gathering. This has advantages when scaling Ansible in push mode to very large numbers of systems.

- hosts: "hosts/groups"
  name: "name of the playbook"
  gather_facts: no

Pipelining 

For each task in Ansible, many SSH connections are created, which increases the total execution time. Pipelining reduces the number of SSH operations required to execute a module by executing many Ansible modules without an actual file transfer. We just have to make this change in the ansible.cfg file:

# ansible.cfg
[ssh_connection]
pipelining = True

Although this can result in a very significant performance improvement when enabled, pipelining is disabled by default because requiretty is enabled by default in the sudoers configuration of many distros.

Poll Interval

When we run any task, Ansible starts polling to check whether the task is completed on the host. We can decrease this polling interval in ansible.cfg to improve performance, but doing so increases CPU usage on the control node, so we need to adjust the value accordingly. We just have to set this parameter in the ansible.cfg file:

# ansible.cfg
[defaults]
internal_poll_interval = 0.001

So, these are the various ways to decrease playbook execution time in Ansible. Generally, we don't use all these methods in a single setup; we use these features as per the requirement.
The main motive for writing this blog is to identify the factors which help in fine-tuning Ansible performance; there are many more factors which serve the same purpose, but here I have mentioned the most important ones.
I hope I have covered all the important aspects of the blog; feel free to provide your valuable feedback.
Thanks !!!

Source:

https://mitogen.networkgenomics.com/ansible_detailed.html

Best practices of Ansible Role


 

I have written many Ansible roles in my career, but when I measure them against the best practices of writing an Ansible role, half of them don't hold up. When I started writing Ansible roles, my only thought was to complete my task. This practice made me struggle as a "DevOps guy" because I had to rewrite each and every role again and again whenever it was needed. Without a proper understanding of the architecture of an Ansible role, I was unable to use all the functionality available for writing roles, and I was relying on just the "command" and "shell" modules.

 
Advantages of Best Practices
  • Completing the task using Full Functionality.
  • A standardized architecture helps to create Ansible roles as utilities which can be reused with different values.
  • Applying best practices helps you to learn new things every day.
  • Following “Convention Over Configuration” makes your troubleshooting much easier.
  • Helps you to grow your Automation skills.
  • You don't have to rework the role every time a version or a value changes.

I could talk about the advantages of best practices endlessly, but you will only understand them after using them. So now, let's talk about how to apply them.

 

First, we will understand the complete directory structure of an Ansible role (a command to generate this skeleton follows the list):
  • Defaults: The default variables for the role are stored inside this directory. These variables have the lowest priority.
  • Files: All the static files used inside the role are stored here.
  • Handlers: All the handlers are kept here, not inside the tasks directory, and are automatically called from here.
  • Meta: This directory contains the metadata about your role, such as the dependencies required to run the role on any system; the role will not run until those dependencies are resolved.
  • Tasks: This directory contains the main list of tasks which need to be executed by the role.
  • Vars: This directory has higher precedence than the defaults directory, and its variables can only be overridden by passing them on the command line, in a specific task, or in a block.
  • Templates: This directory contains the Jinja2 templates; basically, all the dynamic files which can be variablized are stored here.
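If it helps, such a skeleton can be generated with ansible-galaxy (the role name below is a placeholder):

$ ansible-galaxy init my_role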

 

Whitespace and Comments

Generous use of whitespace to break things up is really appreciated. One very important thing is the use of comments inside your roles, so that someone using your role in the future can easily understand it.

 

YAML format

Learn the YAML format properly and use indentation correctly throughout the document. Sometimes running a role gives an "Invalid Syntax" error due to bad indentation, and writing with proper indentation also makes your role look clean.

 

 

Always Name Tasks

It is possible to leave off the ‘name’ for a given task, though it is recommended to provide a description of what is being done instead. This name is shown when that particular task is run.
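A small illustration of a named task (the module and values are placeholders, assuming a Debian-based host):

- name: "Install the nginx web server"
  apt:
    name: nginx
    state: present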

 

 

Version Control
Use version control. Keep your roles and inventory files in git and commit when you make changes to them. This way you have an audit trail describing when and why you changed the rules that are automating your infrastructure.

 

 

Variable and Vaults
Variables often contain sensitive data. It is easy to find variables in plain files using grep or similar tools, but vaults obscure them, so it is best to work with a layer of indirection: Ansible finds the variables in an unencrypted file, while all sensitive values come from an encrypted file.
The best approach is to start with a group_vars subdirectory containing two more subdirectories named "vars" and "vault". Inside the "vars" directory, define all the variables, including the sensitive ones. Then copy those sensitive variables into the "vault" directory, prefixing the variable names with "vault_". Finally, adjust the variables in "vars" to point to the matching "vault_*" variables using Jinja2 syntax, and ensure that the vault file is encrypted with ansible-vault.
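A minimal sketch of this indirection, assuming a group named webservers and a variable db_password (all names and values here are illustrative):

# group_vars/webservers/vars (kept in plain text)
db_password: "{{ vault_db_password }}"

# group_vars/webservers/vault (encrypted with ansible-vault)
vault_db_password: "s3cr3tValue"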

 

Roles for multiple OS
Roles should be written in a way that they can run on multiple operating systems. Try to make your roles as generic as you can. But if you have created a role for a specific operating system or a specific application, then explicitly mention that in the role name.

 

Single role Single goal
Avoid tasks within a role which are not related to each other. Don't build a common catch-all role; it's ugly and bad for the readability of your role.

 

Other Tips:
  • Use a module if available
  • Try not to use command or shell module
  • Use the state parameter
  • Prefer scalar variables
  • Set default for every variable
  • If you have multiple roles related to each other, then try to create a common variable file for all of them which will be called inside your playbook

 

  • Use “copy” or “template” module instead of “lineinfile” module
  • Make role fully variablized

 

  • Be explicit when writing tasks. For instance, if you are creating a file or directory, then rather than defining only src and dest, also define owner, group, mode, etc., as in the sketch below.
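A hedged example of such an explicit task (paths, owner, group, and mode are placeholders):

- name: "Copy the application config"
  copy:
    src: app.conf
    dest: /etc/app/app.conf
    owner: appuser
    group: appgroup
    mode: "0644"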
Summary:
  • Create a role which can be reused further.
  • Create it using proper modules for better understanding.
  • Add proper comments inside it so that it can be understood by someone else as well.
  • Use proper indentation for the YAML format.
  • Create your role variables and also secure the sensitive ones using vault.
  • Create a single role for a single goal.

 

Using Ansible Dynamic Inventory with Azure can save the day for you.

As a DevOps Engineer, I always love to make things simple and convenient by automating them. Automation can be done on many fronts like infrastructure, software, build and release etc.

Ansible is primarily a software configuration management tool which can also be used as an infrastructure provisioning tool.
One of the things that I love about Ansible is its integration with different cloud providers. This integration keeps things loosely coupled; for example, we don't have to manage all the cloud information in Ansible (we don't need instance metadata information for provisioning).

Ansible Inventory

Ansible uses the term inventory to refer to the set of systems or machines that our Ansible playbooks or commands work against. There are two ways to manage inventory:
  • Static Inventory
  • Dynamic Inventory
By default, the static inventory is defined in /etc/ansible/hosts, in which we provide information about the target systems. On most cloud platforms, when a server reboots it is assigned a new public address, and we then have to update that in our static inventory, so this can't be a lasting option.
Luckily, Ansible supports the concept of dynamic inventory, in which we use external Python scripts and .ini files to target machines dynamically without hard-coding their public or private addresses. These scripts and .ini files are provided by Ansible for cloud infrastructure platforms like Amazon, Azure, DigitalOcean, and Rackspace.
In this blog, we will talk about how to configure dynamic inventory on the Azure Cloud Platform.

Ansible Dynamic Inventory on Azure

The first thing that is always required to run anything is the software and its dependencies, so let's install those first. We need the Azure Python modules, which we can install via pip.
 
$ pip install 'ansible[azure]'
After this, we need to download azure_rm.py

$ wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/azure_rm.py

Make the file executable using the chmod command.

$ chmod +x azure_rm.py

Then we have to log in to our Azure account using the Azure CLI:

$ az login
To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code XXXXXXXXX to authenticate.

The az login command output will provide you with a unique code, which you have to enter on the webpage
https://aka.ms/devicelogin

As part of best practice, we should always create an Active Directory app for different services or apps to restrict privileges. Once you have logged in to your Azure account, you can create an Active Directory app for Ansible:

$ az ad app create --password ThisIsTheAppPassword --display-name opstree-ansible --homepage ansible.opstree.com --identifier-uris ansible.opstree.com

Don’t forget to change your password ;). Note down the appID from the output of the above command.

Once the app is created, create a service principal and associate it with the app.

$ az ad sp create --id appID

Replace appID with the actual app id and copy the objectID from the output of the above command.
Now we just need the subscription id and tenant id, which we can get with a simple command:

$ az account show

Note down the id and tenantID from the output of the above command.

Let's assign the contributor role to the service principal created above.

$ az role assignment create --assignee objectID --role contributor

Replace the objectID with the actual object id output.

All the Azure-side setup is done. Now we have to make some changes on our local system.

Let’s start with creating an azure home directory

$ mkdir ~/.azure

In that directory, we have to create a credentials file

$ vim ~/.azure/credentials

[default]
subscription_id=id
client_id=appID
secret=ThisIsTheAppPassword
tenant=tenantID

Please replace the id, appID, password, and tenantID with the values noted above.

All set! Now we can test it with the below command:

$ python ./azure_rm.py --list | jq

and the output should look like this:

{
  "azure": [
    "ansibleMaster"
  ],
  "westeurope": [
    "ansibleMaster"
  ],
  "ansibleMasterNSG": [
    "ansibleMaster"
  ],
  "ansiblelab": [
    "ansibleMaster"
  ],
  "_meta": {
    "hostvars": {
      "ansibleMaster": {
        "powerstate": "running",
        "resource_group": "ansiblelab",
        "tags": {},
        "image": {
          "sku": "7.3",
          "publisher": "OpSTree",
          "version": "latest",
          "offer": "CentOS"
        },
        "public_ip_alloc_method": "Dynamic",
        "os_disk": {
          "operating_system_type": "Linux",
          "name": "osdisk_vD2UtEJhpV"
        },
        "provisioning_state": "Succeeded",
        "public_ip": "52.174.19.210",
        "public_ip_name": "masterPip",
        "private_ip": "192.168.1.4",
        "computer_name": "ansibleMaster",
        ...
      }
    }
  }
}
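As a quick hedged check, the discovered hosts can then be targeted with an ad hoc command (the "azure" group name comes from the listing above):

$ ansible -i azure_rm.py azure -m ping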

Now you are ready to use Ansible in Azure with dynamic inventory. Good Luck 🙂

Setup Jenkins using Ansible

In this document I'll walk you through how you can set up Jenkins using Ansible.

Prerequisites

  • OS – Ubuntu {at least two machines required in production}
  • First machine for the Ansible installation
  • Second machine where we will install the Jenkins server
  • You should have a basic understanding of the Ansible workflow.
Note: You should have passwordless login enabled on the second machine. Use this link:
http://www.linuxproblem.org/art_9.html

Ansible Installation
Before starting with installing Jenkins using Ansible, you need to have Ansible installed on your system.

$ curl https://raw.githubusercontent.com/OpsTree/AnsiblePOC/alok/scripts/Setup/setup_ansible.sh | sudo bash

Setup Jenkins using Ansible
Install the Jenkins Ansible role
Once we have Ansible installed on our system, we can start installing Jenkins using Ansible. To do this we will use an already available Ansible role to set up Jenkins:

$ ansible-galaxy install geerlingguy.jenkins

To know more about the Jenkins role, visit this link: https://galaxy.ansible.com/detail#/role/440

The default directory path for Ansible roles is /etc/ansible/roles
Make the Ansible playbook file

Now the next step is to use the installed Jenkins role to install Jenkins. For this purpose we will create a playbook and hosts file with the below content:

$ cd ~/MyPlaybook/jenkins
Create a file named hosts and add the below content:
[jenkins_hosts]
192.168.33.15

Next, create a file named site.yml and add the below content:
- hosts: jenkins_hosts
  roles:
    - { role: geerlingguy.jenkins }

So the configuration files are done; the next step is to run the ansible-playbook command:

$ ansible-playbook -i hosts site.yml

Now that Jenkins is running, go to http://192.168.33.15:8080. You'll be welcomed by the default Jenkins screen.

Opstree SHOA Part 1: Build & Release

At Opstree we have started a new initiative called SHOA, Saturday Hands On Activity. Under this program we pick up a concept, tool, or technology and do a hands-on activity. At the end of the day, whatever we have done and understood during the day is written up as a blog or a series of blogs.
 
Since this is the first Hands On Activity, we are starting with Build & Release.

What we intend to do 

Set up Build & Release for the project under the git repository https://github.com/OpsTree/ContinuousIntegration.

What all we will be doing to achieve it

  • Finalize an SCM tool that we are going to use: Puppet/Chef/Ansible.
  • Automated setup of Jenkins using SCM tool.
  • Automated setup of Nexus/Artifactory/Archiva using SCM tool.
  • Automated setup of Sonar using SCM tool.
  • Dev Environment setup using SCM tool: Since this is a web app project, our Dev environment will have Nginx & Tomcat.
  • QA Environment setup using SCM tool: Since this is a web app project, our QA environment will have Nginx & Tomcat.
  • Creation of various build jobs
    • Code Stability Job.
    • Code Quality Job.
    • Code Coverage Job.
    • Functional Test Job on dev environment.
  • Creation of release Job.
  • Creation of deployment job to do deployment on Dev & QA environment.
This activity is open to the public as well, so if you have any suggestions or you want to attend, you are most welcome.