Best Practices of Ansible Roles


 

I have written many Ansible roles in my career, but when it comes to the best practices of writing an Ansible role, half of them would not make the cut. When I started writing Ansible roles, I wrote them with only one thought: just complete the task. That habit made me struggle as a "DevOps guy", because it meant I had to rewrite each role again and again whenever it was needed. Without a proper understanding of the architecture of an Ansible role, I could not take advantage of all the functionality available to me, and I ended up relying on just the "command" and "shell" modules.

 
Advantages of Best Practices
  • You complete the task using the full functionality that Ansible offers.
  • A standardized architecture lets you build Ansible roles as reusable utilities that can be run again with different values.
  • Applying best practices helps you learn something new every day.
  • Following "Convention Over Configuration" makes troubleshooting much easier.
  • It helps you grow your automation skills.
  • You don't have to worry about a new version or a change in values breaking your role.

I could go on about the advantages of these best practices, but you will really understand them once you start using them. So let's talk about how to apply them.

 

First, let's understand the complete directory structure of an Ansible role (a typical layout is sketched after the list):
  • defaults: The default variables for the role are stored here. These variables have the lowest priority.
  • files: All the static files used inside the role are stored here.
  • handlers: All the handlers live here, not inside the tasks directory; they are called automatically from here when notified.
  • meta: This directory contains the metadata about your role, such as the dependencies required to run it on any system; the role will not run until those dependencies are resolved.
  • tasks: This directory contains the main list of tasks to be executed by the role.
  • vars: Variables in this directory have higher precedence than those in defaults and can only be overridden by passing them on the command line, in a specific task, or in a block.
  • templates: This directory contains the Jinja2 templates; basically, all the dynamic files that can be parameterized with variables are stored here.
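
For reference, this is roughly the skeleton that the ansible-galaxy init command generates for a new role; the role name my_role is just a placeholder:

```
my_role/
├── defaults/
│   └── main.yml        # lowest-priority default variables
├── files/              # static files copied as-is
├── handlers/
│   └── main.yml        # handlers triggered via notify
├── meta/
│   └── main.yml        # role metadata and dependencies
├── tasks/
│   └── main.yml        # entry point for the role's tasks
├── templates/          # Jinja2 templates (*.j2)
└── vars/
    └── main.yml        # higher-precedence role variables
```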

 

Whitespace and Comments

Generous use of whitespace to break things up is really appreciated. Equally important is the use of comments inside your roles, so that someone using your role in the future can easily understand what it does and why.

 

YAML format

Learn the YAML format properly and use indentation consistently throughout the document. A badly indented role often fails at runtime with an "invalid syntax" error, and proper indentation also makes your role much more readable.
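
As a minimal sketch of the indentation rules: a task is a list item introduced by a dash, and the module arguments are a mapping indented two spaces deeper (the module and file names here are only placeholders):

```
---
# tasks/main.yml
- name: Copy the application configuration   # a list item (one task)
  template:                                  # the module being used
    src: app.conf.j2                         # module arguments, nested one level deeper
    dest: /etc/app/app.conf
```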

 

 

Always Name Tasks

It is possible to leave out the 'name' of a given task, but it is recommended to provide a description of what is being done instead. The name is shown when that particular task runs, which makes the play output much easier to follow.
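
For instance, the two tasks below do the same thing, but only the second one produces readable play output such as "TASK [Install ntp]" when it runs (the package is just an example):

```
# Without a name, Ansible only prints the module being used
- yum:
    name: ntp
    state: present

# With a name, the intent is obvious in the play output
- name: Install ntp
  yum:
    name: ntp
    state: present
```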

 

 

Version Control
Use version control. Keep your roles and inventory files in git and commit when you make changes to them. This way you have an audit trail describing when and why you changed the rules that are automating your infrastructure.

 

 

Variables and Vaults
Variables often contain sensitive data, yet it is convenient to be able to find them with grep or similar tools on the Ansible control machine. Since vaults obscure these variables, it is best to work with a layer of indirection: Ansible can still find the variable names in an unencrypted file, while all sensitive values come from an encrypted file.
The best approach is to start with a group_vars subdirectory containing two more subdirectories named "vars" and "vault". Inside the "vars" directory, define all of your variables, including the sensitive ones. Then copy the sensitive variables into the "vault" directory, prefixing their names with "vault_". Finally, adjust the variables in "vars" to point to the matching "vault_*" variables using Jinja2 syntax, and make sure the vault file is encrypted with Ansible Vault.
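
A minimal sketch of this layout, assuming a single sensitive variable db_password (the file and variable names are placeholders); the vault file is then encrypted with the ansible-vault command:

```
group_vars/
└── all/
    ├── vars.yml     # unencrypted, searchable with grep
    └── vault.yml    # encrypted with ansible-vault

# group_vars/all/vars.yml
db_password: "{{ vault_db_password }}"

# group_vars/all/vault.yml  (encrypt with: ansible-vault encrypt group_vars/all/vault.yml)
vault_db_password: "s3cr3t"
```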

 

Roles for multiple OS
Roles should be written in such a way that they can run on multiple operating systems. Try to make your roles as generic as you can, but if you have created a role for a specific operating system or a specific application, then state that explicitly in the role name.
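
A common pattern, sketched below with illustrative package names, is to branch on the ansible_os_family fact so the same role works on both Debian- and RedHat-based systems:

```
- name: Install Apache on Debian-based systems
  apt:
    name: apache2
    state: present
  when: ansible_os_family == "Debian"

- name: Install Apache on RedHat-based systems
  yum:
    name: httpd
    state: present
  when: ansible_os_family == "RedHat"
```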

 

Single role Single goal
Avoid putting tasks in a role that are not related to each other, and don't build a catch-all "common" role. It is ugly and bad for the readability of your role.

 

Other Tips:
  • Use a module if one is available.
  • Try not to use the command or shell modules.
  • Use the state parameter explicitly.
  • Prefer scalar variables.
  • Set a default for every variable.
  • If you have multiple roles related to each other, create a common variable file for all of them and include it in your playbook.
  • Use the "copy" or "template" module instead of the "lineinfile" module.
  • Make the role fully variablized.
  • Be explicit when writing tasks. For example, if you are creating a file or directory, then rather than defining only src and dest, also define owner, group, mode, and so on. A short sketch combining several of these tips follows this list.
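
Here is a minimal sketch that applies several of these tips at once: a proper module instead of shell, an explicit state parameter, a defaulted variable, a template instead of lineinfile, and explicit owner/group/mode (the package, paths, and variable names are placeholders):

```
- name: Install the application package
  yum:
    name: "{{ app_package }}"      # default value defined in defaults/main.yml
    state: present

- name: Deploy the application configuration
  template:
    src: app.conf.j2
    dest: /etc/app/app.conf
    owner: root
    group: root
    mode: "0644"
  notify: restart app              # handler defined in handlers/main.yml
```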
Summary:
  • Create a role that can be reused later.
  • Build it using proper modules for better understanding.
  • Comment it properly so that it can be understood by someone else as well.
  • Use proper indentation and valid YAML format.
  • Define your role variables and secure the sensitive ones using vault.
  • Create a single role for a single goal.

 

Docker Logging Driver

The docker logs command batch-retrieves logs present at the time of execution. The docker logs command shows information logged by a running container. The docker service logs command shows information logged by all containers participating in a service. The information that is logged and the format of the log depend almost entirely on the container's endpoint command.

These logs are stored by default at /var/lib/docker/containers/<container-id>/<container-id>-json.log, so it is not easy to pick this file up with Filebeat, because the path changes every time a new container comes up with a new container ID.

So, how do we monitor these logs, which end up in a different file for every container? This is why Docker logging drivers were introduced.

Docker includes multiple logging mechanisms to help you get information from running containers & services. These mechanisms are called logging drivers. These logging drivers are configured for the docker daemon.

To configure the Docker daemon to default to a specific logging driver, set the value of log-driver to the name of the logging driver in the daemon.json file, which is located in /etc/docker/ on Linux hosts or C:\ProgramData\docker\config\ on Windows server hosts.

The default logging driver is json-file. The following example explicitly sets the default logging driver to syslog:

{
  "log-driver": "syslog"
}

After configuring the log driver in the daemon.json file, you can define the log driver and the destination where you want to send the logs, for example Logstash or Fluentd. You can define it either at run time on the docker run command, e.g. "--log-driver=syslog --log-opt syslog-address=udp://logstash:5044", or, if you are using a docker-compose file, you can define it as:

```
logging:
  driver: fluentd
  options:
    fluentd-address: "192.168.1.1:24224"
    tag: "{{ container_name }}"
```
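
In a complete docker-compose file this logging block sits under a service definition, for example (the service name and image are placeholders):

```
version: "3"
services:
  web:
    image: nginx:latest
    logging:
      driver: fluentd
      options:
        fluentd-address: "192.168.1.1:24224"
        tag: "web"
```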

Once you have configured the log driver, it will send all the Docker logs to the configured destination. Now, if you try to see the logs on the terminal using the docker logs command, you will get a message:

```
Error response from daemon: configured logging driver does not support reading
```

This is because all the logs are being forwarded to the configured destination instead of being written locally.

Let me give you an example of how I configured the fluentd logging driver, parsed those logs into Elasticsearch, and viewed them in Kibana. In this case I am configuring the logging driver at run time and installing the Elasticsearch plugin inside the fluentd image, rather than setting it in daemon.json. So make sure that the containers are created inside the same Docker network in which you will be configuring the logging driver.

Step 1: Create a docker network.

```
docker network create docker-net
```

Step 2: Create a container for elasticsearch inside a docker network.

```
docker run -itd --name elasticsearch -p 9200:9200 --network=docker-net elasticsearch:6.4.1
```

Step 3: Create a fluentd configuration, fluent.conf, in which you configure how the forwarded logs are handled; it is copied into the fluentd Docker image in the next step.

fluent.conf

```
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match **>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key app
    flush_interval 1s
    index_name fluentd
    type_name fluentd
  </store>
  <store>
    @type stdout
  </store>
</match>
```

This also creates indices with the prefix fluentd, and the Elasticsearch host is referenced by the name given to the elasticsearch container.

Step 4: Build the fluentd image and create a docker container from that.

Dockerfile.fluent

```
FROM fluent/fluentd:latest
COPY fluent.conf /fluentd/etc/
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-rdoc", "--no-ri", "--version", "1.9.5"]
```

Here the Elasticsearch output plugin is installed and the fluentd configuration is copied into the image.

Now build the Docker image and create a container from it.

```
docker build -t fluent -f Dockerfile.fluent .
docker run -itd --name fluentd -p 24224:24224 --network=docker-net fluent
```

Step 5: Now create a container whose logs you want to see in Kibana, configuring the log driver on it at run time. In this example, I am creating an nginx container and configuring the fluentd log driver for it.

```
docker run -itd --name nginx -p 80:80 --network=docker-net --log-driver=fluentd --log-opt fluentd-address=udp://:24224 opstree/nginx:server
```

Step 6: Finally you need to create a docker container for kibana inside the same network.

```
docker run -itd --name kibana -p 5601:5601 --network=docker-net kibana
```

Now you will be able to check the logs of the nginx container in Kibana by creating an index pattern fluentd-*.
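
To quickly verify that the logs are actually reaching Elasticsearch before creating the index pattern, you can list the indices over the Elasticsearch REST API (assuming port 9200 is published on the host as in Step 2):

```
curl http://localhost:9200/_cat/indices?v
# you should see daily indices such as fluentd-20190101
```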

Types of logging drivers which can be used:

  • none: No logs are available for the container, and docker logs does not return any output.
  • json-file: The logs are formatted as JSON. The default logging driver for Docker.
  • syslog: Writes logging messages to the syslog facility. The syslog daemon must be running on the host machine.
  • journald: Writes log messages to journald. The journald daemon must be running on the host machine.
  • gelf: Writes log messages to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash.
  • fluentd: Writes log messages to fluentd (forward input). The fluentd daemon must be running on the host machine.
  • awslogs: Writes log messages to Amazon CloudWatch Logs.
  • splunk: Writes log messages to Splunk using the HTTP Event Collector.
  • etwlogs: Writes log messages as Event Tracing for Windows (ETW) events. Only available on Windows platforms.
  • gcplogs: Writes log messages to Google Cloud Platform (GCP) Logging.
  • logentries: Writes log messages to Rapid7 Logentries.