Docker-Compose As A Bundled Application

When Docker was released as a new containerization tool, it took the market by storm. With its lightweight images, multi-OS support, and the ability to ship containers, its popularity only soared. I have been using it for more than six months now, and I can see why. Hypervisors, another class of virtualization tools, have always been hard on hardware: they need a lot of resources to run, which makes applications running on them far more expensive to operate than applications running in containers. This is the problem Docker solved, and hence its popularity. The Docker engine simply sits on the host OS and translates the instructions from an application to the underlying OS. It does not need an extra layer of virtual OS, just the binaries and libraries of the application bundled in the image. Right? Now, hold on to that thought.

We have all been working with Docker and, by extension, with docker-compose. Why? Because it makes our job easy: we are spared from typing hundreds of ad-hoc commands in the terminal to set up a slightly (or very) complicated application with certain dependencies. We can just describe it in a `docker-compose.yml` file and our job is done. However, the problem arises when we have to share that compose file:

  • Other users might need to use the file in a different environment, so they will have to edit all the values for their needs manually and keep a separate compose file for each environment.
  • Troubleshooting configuration issues can be tedious, since there is no single place where the application's configuration is stored; changes have to be made directly in the file.
  • This also makes communication between the Dev and Ops teams trickier than it has to be, resulting in communication gaps and wasted time.

To get a clearer picture of the issue, we can have a look at the image below:

We have a compose file and a configuration for each environment; we make changes according to each environment's needs in different compose files, which can be a long manual task depending on the size of the project.


All of this points to the fact that there is no way to bundle applications that use efficiently bundled Docker images. See the irony here? Well, there "was" no way, until there was. Enter 'docker-app'. This relatively new tool is the answer to packaging docker-compose applications. I came across it when I was struggling to re-use a docker-compose application I had written in another environment. As soon as I read about it, I had to try it, and I loved it. It made the task much easier, as it provides a template of the compose file and a key-value store for environment-dependent parameters.


Now we have an artefact with the extension '.dockerapp'. We can pass configuration values through the CLI, through files, or both, and docker-app will render the compose file according to those values.

Let us now go through an example of how docker-app works. I am going to deploy a dummy application, Spring3hibernate, from the Opstree GitHub repository in a QA environment and later in PROD, by making simple configuration changes.
Installing docker-app is easy, though there is one thing to keep in mind: it can be installed as a plugin to the docker CLI or as a standalone CLI tool. I will be installing it as a standalone CLI tool on Linux. If you wish to install it as a plugin to the docker CLI and/or on another OS, visit the GitHub page: https://github.com/docker/app (also worth visiting for the basics).
Before continuing, please ensure you have the docker CLI and docker-compose installed.
Follow the steps below to install docker-app:

$ export OSTYPE="$(uname | tr A-Z a-z)"
$ curl -fsSL --output "/tmp/docker-app-${OSTYPE}.tar.gz" \
"https://github.com/docker/app/releases/download/v0.8.0/docker-app-${OSTYPE}.tar.gz"
$ tar xf "/tmp/docker-app-${OSTYPE}.tar.gz" -C /tmp/
$ install -b "/tmp/docker-app-standalone-${OSTYPE}" /usr/local/bin/docker-app
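
To confirm the binary landed on your PATH, you can ask it for its version (the exact output depends on the release you downloaded):

$ docker-app version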

Create a new directory in your home directory; we'll call it the app home:

$ cd ~
$ mkdir spring3hibernate-app
$ cd spring3hibernate-app/

Now, clone the app from the Opstree GitHub repository. This app needs only MySQL as a dependency.

$ git clone https://github.com/opstree/spring3hibernate.git

We need to update the database properties file and the Nginx config file with the contents below, respectively:

$ vim ~/spring3hibernate-app/spring3hibernate/src/main/resources/database.properties

Replace the file's contents with the following:

database.driver=com.mysql.jdbc.Driver
database.url=jdbc:mysql://mysql:3306/employeedb
database.user=admin
database.password=password
hibernate.dialect=org.hibernate.dialect.MySQLDialect
hibernate.show_sql=true
hibernate.hbm2ddl.auto=update
upload.dir=c:/uploads

For the Nginx conf file:

$ vim ~/spring3hibernate-app/spring3hibernate/nginx/default.conf
server {
    listen       80;
    server_name  localhost;

    location / {
        stub_status on;
        proxy_pass http://springapp1:8080/;

    }
# redirect server error pages to the static page /50x.html
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

}

Move 'default.conf' to ~/spring3hibernate-app/spring3hibernate/nginx/conf/qa/, as we have a different conf file for PROD, which goes to ~/spring3hibernate-app/spring3hibernate/nginx/conf/prod/.
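
A quick sketch of those moves (assuming neither conf directory exists yet), ending with the PROD file open for editing:

$ mkdir -p ~/spring3hibernate-app/spring3hibernate/nginx/conf/{qa,prod}
$ mv ~/spring3hibernate-app/spring3hibernate/nginx/default.conf ~/spring3hibernate-app/spring3hibernate/nginx/conf/qa/
$ vim ~/spring3hibernate-app/spring3hibernate/nginx/conf/prod/default.conf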

upstream s3hbackend {
    server springapp1:8080;
    server springapp2:8080;
}
server {
       listen 80;
       location / {
           stub_status on;
           proxy_pass http://s3hbackend;
       }
  
       # redirect server error pages to the static page /50x.html
       error_page   500 502 503 504  /50x.html;
       location = /50x.html {
           root   /usr/share/nginx/html;
       }

}

This is the configuration for the Nginx load balancer. Remember it; we'll use it later. Let's create our docker-app now. Make sure you are in the app home directory when executing this command:

$ docker-app init --single-file s3h

This will create a single file named s3h.dockerapp which will look like this: 

# This section contains your application metadata.
# Version of the application
version: 0.1.0
# Name of the application
name: s3h
# A short description of the application
description:
# List of application maintainers with name and email for each
maintainers:
  - name: ubuntu
    email:


---
# This section contains the Compose file that describes your application services.
version: "3.6"
services: {}


---
# This section contains the default values for your application parameters.

{}

As you can see, this file is divided into three parts: metadata, compose, and parameters. They are all in one file because we used the --single-file switch. We can split them into multiple files by running docker-app split in the app home directory; docker-app merge will put them back into one file. Now, for QA, we have the following configuration for the s3h.dockerapp file:

version: 0.1.0
name: s3h
description:
maintainers:
  - name: atbk5
    email: adeel.ahmad@opstree.com


---
version: "3.7"
services:
  mysql:
    image: mysql:5.7
    container_name: mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${mysql.env.rootpass}
      MYSQL_DATABASE: ${mysql.env.database}
      MYSQL_USER: ${mysql.env.user}
      MYSQL_PASSWORD: ${mysql.env.userpass}
    restart: always
    networks:
      - backend
    volumes:
      - db_data:/var/lib/mysql


  spring1:
    depends_on:
      - mysql
    build:
      context: ./spring3hibernate/
      dockerfile: Dockerfile
    container_name: springapp1
    restart: always
    networks:
      - backend
      - frontend


  spring2:
    depends_on:
      - mysql
    build:
      context: ./spring3hibernate/
      dockerfile: Dockerfile
    container_name: springapp2
    restart: always
    networks:
      - backend
      - frontend
    x-enabled: ${spring.app2}


  nginx:
    depends_on:
      - spring1
    image: nginx:alpine
    container_name: proxy
    restart: always
    networks:
      - frontend
    volumes:
      - ${nginx.conf}:/etc/nginx/conf.d
    ports:
      - ${nginx.port}:80
    x-enabled: ${nginx.status}


networks:
  frontend:
  backend:


volumes:
  db_data:


---
mysql:
  env:
    rootpass: password
    database: employeedb
    user: admin
    userpass: password
nginx:
  conf: /home/ubuntu/spring3hibernate-app/spring3hibernate/nginx/conf/qa
  port: 81
  status: true
spring:
  app2: false

As mentioned before, the first part contains the app metadata, the second part contains the actual compose file with lots of variables, and the last part contains the values of those variables. A special mention here is the x-enabled variable: docker-app provides the ability to temporarily disable a service through it. Now, try a few commands:

$ docker-app inspect

It will produce a summary of the whole app.

$ docker-app render

It will replace all the variables with their values and produce a compose file.

$ docker-app render --set nginx.status="false"

It will remove nginx from the rendered compose file, and hence from the deployment.

$ docker-app render | docker-compose -f - up

It will spin up all the containers according to the rendered compose file. We can see the application running on port 81 of our machine.
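
A quick sanity check from the same machine (assuming nothing else was already bound to port 81):

$ curl -I http://localhost:81/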

$ docker-app --help

To check out more commands and play around a bit.
At this point, it is better to create two directories in the app home: qa and prod. Create a file in qa named qa-params.yml and another in prod named prod-params.yml. Copy all the parameters from the s3h.dockerapp file above into qa-params.yml (or not). More importantly, copy the parameter changes below into prod-params.yml.
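
A minimal sketch of that layout, run from the app home directory (the file names are just the ones used in this post):

$ mkdir qa prod
$ touch qa/qa-params.yml prod/prod-params.yml

The PROD parameter overrides then look like this: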

mysql:
  env:
    rootpass: password
    database: employeedb
    user: admin
    userpass: password
nginx:
  conf: /home/ubuntu/spring3hibernate-app/spring3hibernate/nginx/conf/prod
  port: 80
  status: true
spring:
  app2: true

We are going to load-balance springapp1 and springapp2 in the PROD environment, since we have enabled springapp2 using the x-enabled parameter. We have also changed the Nginx conf bind path to the new conf file and the host port for Nginx to 80 (for production). All so easily. Run this command:

$ docker-app render --parameters-file ./prod/prod-params.yml

This command will produce a compose file ready for production deployment. Now run:

$ docker-app render --parameters-file ./prod/prod-params.yml | docker-compose -f - up

And production is deployed… Visit port 80 of your localhost to verify. What's more exciting is that we can also share our docker-apps through Docker Hub: we can tag the app and push it to our remote registry as an image after logging in:

$ docker login

Provide your username and password for Docker Hub; create an account if you don't have one yet.

$ docker-app push --tag atbk5/s3h.dockerapp:latest

If we wish to upload additional files as well, we have to split our project using docker-app split and put the additional files in the resulting directory before pushing. The additional files go along as attachments, which can be accessed later.

Conclusion

With the arrival of docker-app, our large, composite, containerized applications can also be shipped and re-used as images. That is cool. But there's something cooler we haven't explored yet: deploying our docker-apps on Kubernetes, with the goal of exploring how far we can go in management, and how optimal we can get in delivery, with our applications. Let's keep that as the topic for the next blog. Until then, have a nice one. 🙂

Image Source: https://reflectoring.io/externalize-configuration/

MySQL Monitoring

In recent times, I have invested a good amount of time in learning and working on monitoring, especially database monitoring. I found this medium the best way to share my journey, my findings and, obviously, some spectacular dashboards.

This blog will help you understand why we need MySQL monitoring and how we can do it.

Let's start with the need to implement MySQL monitoring. There are multiple areas we can monitor; here I am highlighting some important ones.

1. Resource Utilization
First of all, without monitoring you have no idea what's going on with MySQL; you cannot tell whether it is in a haywire state.
An ample number of queries run through it. Some of them are lightweight and some are very heavy, which can over-utilize or overload the CPU. In that case, if we talk about production, a number of requests can get dropped, which translates into business loss.

2. Database Connections
Sometimes the number of connections runs out and no further connections are left for the application to communicate with the DB. In the absence of monitoring, it's really hard to figure out the root cause.
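
Even without a dashboard, a quick manual check of the connection headroom looks something like this (assuming you can reach the instance with the mysql client):

mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections'; SHOW STATUS LIKE 'Threads_connected';"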

3. Replication Lag
When we use MySQL as a master-slave cluster, real-time replication of data from the master to the slave is a key factor to monitor. The lag between master and slave should be zero.

In my scenario, the slave is used for replication from the master and also serves read queries, to avoid overburdening the master. Now, if the replication lag is high and a read query hits the slave at that moment, what happens? The data already on the master has not yet been replicated to the slave because of the lag, so that read query returns an unexpected or stale result.
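
The number a dashboard graphs for this is essentially what the slave itself reports; a manual spot check (again assuming mysql client access to the slave) would be:

mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -i seconds_behind_master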

4. Query Analytics
Monitoring the DB also helps in identifying which queries take a long time; it helps in finding and optimizing slow queries. In the end, it's all about being fast.
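
If you want richer query analytics, the slow query log is the usual source; a minimal example of turning it on at runtime (the one-second threshold is just illustrative):

mysql -u root -p -e "SET GLOBAL slow_query_log = 'ON'; SET GLOBAL long_query_time = 1;"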

OK, so how do we monitor MySQL? There are multiple enterprise solutions available to monitor databases with a single click, but I didn't have the luck to go for a paid solution. So I started exploring open-source solutions that would cover all my requirements. Finally, I got one.

It's the Percona Monitoring and Management (PMM) tool.

PMM is an open-source platform to monitor and manage MySQL databases that we can run in our own environment. It provides a time-series database, which ensures reliable and real-time data.

Installing PMM Server

curl -fsSL https://raw.githubusercontent.com/percona/pmm/master/get-pmm.sh  -o get-pmm.sh

Change its permissions to make it executable:

chmod +x get-pmm.sh

Now run the PMM script to install it:

./get-pmm.sh

This will run a Docker container. Once the container is up and running, we will install the PMM client and bind it to the server's port.
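
You can confirm the server container is up before moving on (the container name depends on the script version, so just look for the PMM server image in the list):

docker ps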

Installing PMM Client

Add the repository below:

wget https://repo.percona.com/apt/percona-release_0.1-6.$(lsb_release -sc)_all.deb

Install the downloaded package to set up the repository:

dpkg -i percona-release_0.1-6.$(lsb_release -sc)_all.deb

Update your Ubuntu package index, install the PMM client, and register it with your PMM server (replace <server_ip>:<port> with the server's address):

apt-get update
apt-get install pmm-client
pmm-admin config --server <server_ip>:<port>
pmm-admin add mysql

It's not only MySQL you can monitor; in fact, PMM also integrates with other databases as well, such as Amazon RDS, PostgreSQL, and MongoDB.

There are many alternatives for MySQL monitoring on the market, like Nagios, VividCortex Analyzer, SolarWinds Server and Application Monitor, LogicMonitor, MySQL OpsPack, etc. Exploring open-source tools has its pros and cons, but the level of learning you get from them makes it worth it. So, to anyone out there reading this blog, I would suggest giving it a try.

Happy monitoring!!

Image Source: https://www.kisspng.com/png-clip-art-brand-line-technology-text-messaging-6462820/preview.html

Unix File Tree Part-1


Nature has its own way of reaching for perfection, and the same should be our instinct when striving to make our creations perfect.

Dennis Ritchie, father of Unix and an esteemed computer scientist, might have applied the same approach to the Unix directory structure.

Why?

Before getting into the hierarchy of the Unix file tree, let's discuss why we need it. The need for a directory structure arises when multiple users are handling multiple pieces of software along with their dependent files. Let me explain this with a couple of scenarios.

Scenario-1:

Consider an ideal piece of software or a package which requires multiple types of files to function properly:

  • Binary files
  • Configuration files
  • Log files
  • Data files
  • Metadata files during execution
  • Libraries

 For now, let’s consider there is just one directory and I am keeping all of the dependent files in that directory. 

$ ls
package-1.binary  package-1.conf  package-1.data  package-1.lib  package-1.log  package-1.tmp

Another piece of software comes into the picture, with its own dependent files.

$ ls
package-1.binary  package-1.data  package-1.log  package-2.binary  package-2.data  package-2.log
package-1.conf    package-1.lib   package-1.tmp  package-2.conf    package-2.lib   package-2.tmp

Things will get messy when dealing with more software, since handling them won't be easy and will lead to chaos.

Scenario-2:

Suppose I am a system admin managing all of the software in scenario-1 above. To keep things organized, I created different directories to place the dependent files:
  • Binary files –> /dir-1
  • Configuration files –> /dir-2
  • Log files –> /dir-3
  • Data files –> /dir-4
  • Meta files –> /dir-5
  • Libraries –> /dir-6

As the work gets overloaded, I need more admins to support it, and they won't be able to relate to the naming convention I used.
To escape this situation, the creators of Unix decided to follow the philosophy of "Convention over Configuration".
As the name suggests, it gives priority to a defined convention over an individual's configuration, so that everyone is on the same page and knows what everyone else will follow.
The simulation of that philosophy looked like this:

  • Binary files –> /bin
  • Configuration files –> /etc
  • Log files –> /log
  • Data files –> /var
  • Meta files –> /tmp
  • Libraries –> /lib

This resulted in the Unix File Tree:

$ tree -d -L 1
.
├── apps
├── bin
├── boot
├── dev
├── etc
├── home
├── lib
├── lib64
├── lost+found
├── media
├── mnt
├── opt
├── proc
├── root
├── run
├── sbin
├── snap
├── srv
├── sys
├── tmp
├── usr
└── var

22 directories

You might be wondering how Unix figures out where the configuration file is, where the binary is, and where the rest of a software's files are.
Here comes the role of the PATH variable.


$ echo $PATH
/home/dennis/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
PATH is an environment variable specifying a set of directories where executable programs are located. In general, each executing process or user session has its own PATH setting.
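
For example, when you type a command, the shell walks these directories in order and runs the first executable it finds (the exact path varies by distribution):

$ which ls
/bin/ls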

So now we have a proper understanding of why we need a file tree.
To dive deeper into the significance of each of these directories, stay tuned for Unix File Tree Part-2.

Cheers!

Postfix Email Server integration with SES

Have you ever thought of setting up your web or application server with your own email server? When you set up an application, it is likely you will want your own email server to handle incoming and outgoing mail for your domain. Before I get into the topic, I assume that you have some basic knowledge of AWS. Here I am going to explain how to set up a simple Postfix email server with AWS SES to handle all your email. For more information, please refer to the AWS SES documentation. To put it simply, we have two phases in this implementation:

  1. Configure SES with Domain
  2. Configure postfix and integrate with SES on EC2

Configure SES with Domain

Amazon SES requires that you verify your email address or domain, to confirm that you own it and to prevent others from using it. When you verify an entire domain, you are verifying all email addresses on that domain, so you don't need to verify each address individually. For example, if you verify the domain example.com, you can send email from user1@example.com, user2@example.com, or any other user at example.com. Let's verify our domain name with SES.

  • Go to the AWS Management Console and click on SES.
  • Click on Domains, available in the top-left corner.
  • Click Verify a New Domain.

  • On the Verify a New Domain dialog, for Domain, type the name of the domain that you registered using Route 53, and then choose Verify This Domain.
  • On the same dialog box, choose Use Route 53. Your domain verification and email receiving records will be updated in Route 53.

Note

If you don't see Use Route 53, your domain may not be registered with Route 53.

  • Once your domain is verified, you can use any email address on this domain as your sender address.
  • To establish a connection between Postfix and SES, you will need SMTP credentials.
  • Now choose SMTP Settings in the same SES console.
  • Choose Create My SMTP Credentials.
  • Give the user name and click Create.
  • Download the credentials; these will be used when you configure the server.

Configure postfix and integrate with SES on EC2

In this section you are going to install and configure Postfix on an EC2 instance.
    Prerequisites

  • You should have an up-and-running EC2 machine.
  • Open ports 25 (SMTP) and 22 (SSH) in the security group.

Let's get started

Log in to the machine using PuTTY or an SSH client. Now we need to create a record for the domain in Route 53.

   Route53

  • Go to the AWS console and choose Route 53.
  • Choose Hosted Zones and select the domain you wish to configure.
  • Click on Create Record Set to add a new record set, then select A – IPv4 address as the resource type.
  • Add the subdomain name in the Name field and enter a record value that is your EC2 IP.
  • Set the desired TTL.
  • Then click the Create button.

Now we will install Postfix on our EC2 machine.

sudo apt-get update

sudo apt-get install postfix

Now we need to make some changes in the Postfix configuration files; let's do them one by one.

To integrate our Postfix with SES, we set the mail name and add a few lines to main.cf:

vim /etc/mailname
example.com

vim /etc/postfix/main.cf

mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
myhostname = example.com
myorigin = /etc/mailname
relayhost = [email-smtp.us-east-1.amazonaws.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_use_tls = yes
smtp_tls_security_level = encrypt
smtp_tls_note_starttls_offer = yes
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt

NOTE: The value of relayhost will change depending upon the SES region you use.

Comment out the following line of the master.cf file by putting # in front of it:

vim /etc/postfix/master.cf
#-o smtp_fallback_relay=

Edit the file /etc/postfix/sasl_passwd; if it is not present, create it:

vim /etc/postfix/sasl_passwd

[email-smtp.us-east-1.amazonaws.com]:587 IAMUSERNAME:PASSWORD

NOTE: Add the SMTP username and password that you downloaded, and make sure the SMTP endpoint here matches the one used for relayhost. Save and close the file, then use the command below to create the hash map database.

sudo postmap /etc/postfix/sasl_passwd
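
Since sasl_passwd holds your SMTP credentials, it is good practice to restrict access to it and to the .db file that postmap generates:

sudo chown root:root /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db
sudo chmod 0600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db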

Stop and Start Postfix:

sudo service postfix stop

sudo service postfix start
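
To verify the relay end to end, you can send a test mail through sendmail. The sender must be an address on the domain you verified in SES, and if your SES account is still in the sandbox the recipient must be verified too (the addresses below are placeholders):

printf "Subject: SES relay test\n\nTest message from Postfix via Amazon SES.\n" | sendmail -f user1@example.com recipient@example.org

If the mail does not arrive, /var/log/mail.log is the first place to look.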

Speeding up Ansible Execution Part 2

MITOGEN


In the previous post, we discussed various ways to reduce ansible-playbook execution time. Those changes were mostly made in the Ansible config file, by adding or adjusting certain parameters. But as you may have noticed, those methods were not that effective in certain cases, and while using them we have to be cautious about the result, as they may affect Ansible's performance one way or the other.


Generally, the main culprit behind slow Ansible execution is the way Ansible runs on the hosts: it creates multiple SSH connections and does not fully utilize the available resources. To tackle this problem, Mitogen comes to the rescue!

Mitogen is a distributed programming library for Python. The Mitogen extension is a set of plug-ins for Ansible that enable it to operate via Mitogen, vastly improving its performance and enhancing its functional capability.

We all know about the strategies in Ansible: linear, free, and debug. Mitogen is simply set in the strategy option of the config file, so it is just a strategy; we are not making any other changes in the Ansible config file, so no other parameter is affected. It only changes the way playbooks are executed on the hosts.

Now, coming to the Mitogen installation part: we just have to download the package to a particular location (a sketch follows) and then make some changes in the Ansible config file, as shown below.
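
Fetching the extension could look like this (a sketch: the version and download URL are the ones referenced in the Mitogen documentation at the time of writing, so check the project page for the current release; pip install mitogen is an alternative):

curl -fsSL -o /tmp/mitogen-0.2.9.tar.gz https://networkgenomics.com/try/mitogen-0.2.9.tar.gz
tar -xzf /tmp/mitogen-0.2.9.tar.gz -C /opt/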

[defaults]

strategy_plugins = /path/to/mitogen/ansible_mitogen/plugins/strategy
strategy = mitogen_linear

We have to define the path where we stored the Mitogen files and set the strategy to "mitogen_linear" under the [defaults] section of the config file, and we are good to go.

Now, after the Mitogen installation, when we run our playbook we will notice a reasonable reduction in execution time.

Mitogen is fast for the following reasons:

  • One connection is created per target and system logs aren’t spammed with repeated authentication events.
  • A single network roundtrip is used to execute a step whose code already exists in RAM on the target.
  • Processes are aggressively reused, avoiding the cost of invoking Python and recompiling imports, saving 300-800 ms for every playbook step.
  • Code is cached in the RAM, which further increases the speed.
  • Generally, Ansible repeatedly rewrites and extracts ZIP files to temporary directories on the target hosts; Mitogen reduces these rewrites as well.

All the above-mentioned features make Ansible run faster.

Mitogen is another extension for Ansible that reduces its execution time, and it is very easy to use. I think Mitogen is very underrated and one of a kind, and we should definitely give it a try.

I hope I have explained everything well; any suggestions/queries are highly appreciated.


Thanks !!!

Source:

https://mitogen.networkgenomics.com/ansible_detailed.html

Speeding up Ansible Execution Part 1

Knowledge of at least one SCM tool is a must for any DevOps engineer, and Ansible is one of the most popular tools in this category. We are all aware of the ease that Ansible provides, whether it is infra provisioning, orchestration, or application deployment. The reason for Ansible's vast popularity is the long list of modules it provides to support any level of automation; moreover, it also gives users the flexibility to create their own modules as per their requirements.

The purpose of this blog is not to list the features that Ansible provides, but to show how we can speed up playbook execution. As a beginner, executing Ansible feels very easy and seems to save a lot of time, but as you dive deeper you will find that running Ansible playbooks can engage you for a considerable amount of time. There are a lot of articles available on the internet about speeding up Ansible execution, so I have decided to sum them up in this blog. With the following methods, we can reduce execution time without compromising the overall performance of Ansible.

Before starting, I request you to make a small change in your Ansible configuration file (ansible.cfg). This small change will help you track the time taken by the playbook execution, and it also lists the time taken by each task. Just add these lines to your ansible.cfg file under the [defaults] section:

[defaults]

callback_whitelist = profile_tasks

Forks

When you run your playbooks on multiple hosts, you may have noticed that the number of servers where the playbook executes simultaneously is 5 by default. You can increase this number inside the ansible.cfg file, under the [defaults] section:
# ansible.cfg

forks = 10


or with a command-line argument to ansible-playbook using the -f or --forks option. We can increase or decrease this value as per our requirement.
While using a higher fork value, keep the number of "local_action" or delegated steps limited, as they will affect the Ansible server's performance.
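
For example (the playbook name is just a placeholder):

ansible-playbook site.yml --forks 20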

Async

In Ansible, each task blocks the playbook, meaning the connections stay open until the task is done on each node, which in some cases takes a lot of time. Here we can use "async" for those particular tasks; with its help, Ansible will automatically move on to the next task without waiting for the task to finish on each node.
To launch a task asynchronously, we need to specify its maximum runtime and how frequently we would like to poll for status; the default poll value is 10 seconds.
tasks:
  - name: "name of the task"
    command: "command we want to execute"
    async: 40
    poll: 15
The only condition is that the subsequent tasks must not have a dependency on this task.

Free Strategy 

When running Ansible playbooks, you might have noticed that Ansible runs every task on each node one by one; it will not move to the next task until the current task is completed on every node, which in some cases takes a lot of time.
By default, the strategy is set to "linear"; we can set it to "free".

- hosts: "hosts/groups"
  name: "name of the playbook"
  strategy: free


It will run the playbook on each host independently, without waiting for each node to complete.
Fact gathering is enabled by default when executing a playbook, but sometimes we don't need it.
In those cases we can disable fact gathering; this has advantages when scaling Ansible in push mode to very large numbers of systems.

- hosts: "hosts/groups"
  name: "name of the playbook"
  gather_facts: no

Pipelining 

For each task in Ansible, a number of SSH connections are created, which increases the total execution time. Pipelining reduces the number of SSH operations required to execute a module by executing many Ansible modules without an actual file transfer. We just have to make this change in the ansible.cfg file, under the [ssh_connection] section:

# ansible.cfg
[ssh_connection]
pipelining = True

Although this can result in a very significant performance improvement when enabled, pipelining is disabled by default because requiretty is enabled by default in the sudoers configuration of many distros.

Poll Interval

When we run any task in Ansible, it starts polling to check whether the task is completed on the host. We can decrease this polling interval in ansible.cfg to improve performance, but doing so increases CPU usage, so we need to adjust the value accordingly. We just have to adjust this parameter in the ansible.cfg file, under the [defaults] section:
internal_poll_interval=0.001

So, these are various ways to decrease playbook execution time in Ansible. Generally, we don't use all of these methods in a single setup; we apply them as per the requirement.
The main motive for writing this blog is to highlight the factors that help in fine-tuning Ansible performance; there are many more factors that serve the same purpose, but I have mentioned the most important ones here.
I hope I have covered all the important aspects; feel free to provide your valuable feedback.
Thanks !!!

Source:

https://mitogen.networkgenomics.com/ansible_detailed.html

What Without Internet


I had a dream a few days ago in which the internet no longer existed. When I woke up, I thought about what would happen if there were no internet for a day.


Sure, it would cause quite a bit of panic and uproar, and it would be havoc for an organization to work without the internet, but if the internet resumed normally after the 24 hours were over, things would return to normal pretty quickly.


Now, switch it off for a longer time, possibly a week or a month. That would have a more lasting impact, since in that time a significant number of people would find themselves unable to meet their obligations or do their business at all. This would be somewhat mitigated by the fact that the situation is a sort of 'natural disaster', but still, those who really depend on the internet for their business would likely feel a lasting negative impact.
               
What if I told you there are organizations that work as if there were no internet? Yes, that's right: for certain security reasons they prefer not to use the public internet. Banks, space organizations, and many security agencies fall into this category.


Now a question arises: how do they manage regular updates and the installation of different packages on their various systems? The answer is quite simple: they use a satellite server.


Recently I got a task in this context, in which:

1. A prerequisite is that your system has no internet connectivity, but one of the systems you can connect to does.
2. Set up an individual satellite server on your local network.
3. Install packages and regular updates from it.


To do so, here I prefer to use an FTP satellite server.

How to implement an FTP satellite server

Pre-requisites

An Ubuntu server and a non-root user with sudo privileges.
The system is configured with vsftpd (a setup sketch follows if it is not).
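
If vsftpd is not set up yet, a minimal anonymous, read-only configuration on Ubuntu could look like this (a sketch; adjust it to your security requirements):

sudo apt-get update
sudo apt-get install -y vsftpd
# allow anonymous, read-only downloads of the mirrored packages
sudo sed -i 's/^anonymous_enable=NO/anonymous_enable=YES/' /etc/vsftpd.conf
sudo systemctl restart vsftpd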

Suppose we are installing Jenkins.

Make a directory pkg.jenkins.io in /var/www/html/
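
Populating that directory has to happen on the machine that does have internet access; a sketch of mirroring the Jenkins Debian repository and copying it over the LAN (the user and host below are hypothetical):

wget --mirror --no-parent https://pkg.jenkins.io/debian-stable/
scp -r pkg.jenkins.io user@satellite-server:/var/www/html/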



Contents of pkg.jenkins.io:
Paste the host link of the Debian repository into a file under /etc/apt/sources.list.d/ on the offline system, for example:
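
A hypothetical entry, assuming the satellite server's IP is 192.168.1.10, it serves the copied pkg.jenkins.io tree over FTP, and your apt still ships the ftp transport (with a plain HTTP server the line would use http:// instead); [trusted=yes] skips signature checks, so import the Jenkins key instead if you copied it over:

echo "deb [trusted=yes] ftp://192.168.1.10/pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins-local.list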

 

Run the command sudo apt-get update.

Now run the command to install the package, e.g. sudo apt-get install jenkins.


“ The internet made fame wack and anonymity cool ”

So far, from the above context, we have learned about setting up FTP for users with a local account. If you need to use an external authentication source, you might want to look into vsftpd's support for virtual users. This offers a rich set of options through the use of PAM, the Pluggable Authentication Modules, and is a good choice if you manage users in another system such as LDAP or Kerberos.

I hope I explained everything clearly enough to understand. If you have a better way of implementing a satellite server, please help me improve this blog.

Thanks for reading my writing. I’d really appreciate any kind of feedback in the comments.

Cheers till next time!!!!