MySQL Monitoring

Recently, I have invested a good amount of time in learning and working on monitoring, especially database monitoring. I found this medium the best way to share my journey, my findings and, of course, some spectacular dashboards.

This blog will help you understand why we need MySQL monitoring and how we can do it.

Let’s start with the need to implement MySQL monitoring. There are multiple areas we can monitor; here I am highlighting some important ones.

1. Resource Utilization
First of all, without monitoring you have no idea what’s going on with MySQL; you cannot even tell whether it’s in a haywire state.
A huge number of queries run through it. Some of them are lightweight and some are heavy enough to over-utilize or overload the CPU. In production, that can mean a number of requests getting dropped, which makes it a business loss.
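Even before a full monitoring stack is in place, you can get a quick read on load from MySQL itself. A minimal sketch (host and user are placeholders; adapt them to your setup):

mysql -h <db_host> -u monitor -p -e "SHOW GLOBAL STATUS LIKE 'Threads_running';"
mysql -h <db_host> -u monitor -p -e "SHOW GLOBAL STATUS LIKE 'Questions';"

Threads_running is a rough proxy for how busy the server is right now, and sampling Questions twice gives an estimate of queries per second.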

2. Database Connections
Sometimes the connections run out and no further connections are left for the application to communicate with the DB. In the absence of monitoring, it’s really hard to figure out the root cause.
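To see how close you are to the limit, compare the configured maximum with the current and peak usage (same placeholder credentials as above):

mysql -h <db_host> -u monitor -p -e "SHOW VARIABLES LIKE 'max_connections';"
mysql -h <db_host> -u monitor -p -e "SHOW GLOBAL STATUS LIKE 'Threads_connected';"
mysql -h <db_host> -u monitor -p -e "SHOW GLOBAL STATUS LIKE 'Max_used_connections';"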

3. Replication Lag
When we use MySQL as a master-slave cluster, real-time replication of data from the master to the slave is a key factor to monitor. The lag between master and slave should ideally be zero.

In my scenario, the slave is used for DB replication from the master and also serves read queries to avoid overburdening the master. Now, if the replication lag is high and a read query hits the slave at the same time, what will happen? The data that is already on the master will not yet have been replicated to the slave because of the lag, so that read query will return an unexpected or stale result.
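You can watch the lag directly on the slave via the replication status (a quick manual check, assuming you can log in to the slave):

mysql -h <slave_host> -u monitor -p -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master

A non-zero Seconds_Behind_Master is exactly the stale-read situation described above.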

4. Query Analytics
Monitoring the DB also helps in identifying which queries are taking a long time, so that we can optimize the slow ones. In the end, it’s all about being fast.
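The usual starting point for query analytics is MySQL’s slow query log, which is also what analytics tools feed on. A minimal sketch to enable it at runtime (the 1-second threshold is just an example; the user needs SUPER privileges):

mysql -h <db_host> -u admin -p -e "SET GLOBAL slow_query_log = 'ON'; SET GLOBAL long_query_time = 1;"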

OK, so how do we monitor MySQL? There are multiple enterprise solutions available that monitor databases with a single click, but I didn’t have the luck to go for a paid solution. So I started exploring open-source solutions that would cover all my requirements. Finally, I got one.

It’s the Percona Monitoring and Management (PMM) tool.

PMM is an open-source platform to monitor and manage MySQL databases that we can run in our own environment. It provides a time-series database, which ensures reliable, real-time data.

Installing PMM Server

curl -fsSL https://raw.githubusercontent.com/percona/pmm/master/get-pmm.sh -o get-pmm.sh

Change its permissions to make it executable:

chmod +x get-pmm.sh

Now run the PMM script to install it:

./get-pmm.sh

This will start a Docker container. Once the container is up and running, we will install the PMM client and point it at the server.

Installing PMM Client

Download the Percona repository package:

wget https://repo.percona.com/apt/percona-release_0.1-6.$(lsb_release -sc)_all.deb

Install the downloaded package to enable the repository:

dpkg -i percona-release_0.1-6.$(lsb_release -sc)_all.deb

Update the package index, install the PMM client, and then register the client with your PMM server:

apt-get update
apt-get install pmm-client
pmm-admin config --server <server_ip>:<port>
pmm-admin add mysql
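To confirm that the client is registered and MySQL is being monitored, you can list the configured services (a quick sanity check; output will vary with your setup):

pmm-admin list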

It’s not only MySQL you can monitor; in fact, PMM also integrates with other databases such as Amazon RDS, PostgreSQL, and MongoDB.

There are many alternatives for MySQL monitoring on the market, such as Nagios, VividCortex Analyzer, SolarWinds Server and Application Monitor, LogicMonitor, and MySQL OpsPack. Exploring open-source tools has its pros and cons, but the level of learning you get from them makes it worth the effort. So to anyone out there reading this blog, I would suggest giving it a try.

Happy monitoring!!

Image Source: https://www.kisspng.com/png-clip-art-brand-line-technology-text-messaging-6462820/preview.html

Unix File Tree Part-1


Nature has its own way to reach out for perfection and the same should be our instinct to make our creations perfect.

Dennis Ritchie, co-creator of Unix and an esteemed computer scientist, might have applied the same approach to the Unix directory structure.

Why?

Before getting into the hierarchy of the Unix file tree, let’s discuss why we need it. The need for a directory structure arises when multiple users handle multiple pieces of software along with their dependent files. Let me explain this with a couple of scenarios.

Scenario-1:

Consider a typical piece of software or package that requires multiple files to function properly:

  • Binary files
  • Configuration files
  • Log files
  • Data files
  • Metadata files during execution
  • Libraries

 For now, let’s consider there is just one directory and I am keeping all of the dependent files in that directory. 

$ ls
package-1.binary  package-1.conf  package-1.data  package-1.lib  package-1.log  package-1.tmp

Then another piece of software comes into the picture, which has its own dependent files.

$ ls
package-1.binary  package-1.data  package-1.log  package-2.binary  package-2.data  package-2.log
package-1.conf    package-1.lib   package-1.tmp  package-2.conf    package-2.lib   package-2.tmp

Things will get messy while dealing with various software, since handling the files won’t be easy and will lead to a chaotic situation.

Scenario-2:

Suppose I am a system admin managing all of the software in scenario 1 above. To keep things organized, I created different directories to place the dependent files:
  • Binary files –> /dir-1
  • Configuration files –> /dir-2
  • Log files –> /dir-3
  • Data files –> /dir-4
  • Meta files –> /dir-5
  • Libraries –> /dir-6

As the work gets overloaded I need more admins to support the system, but they won’t be able to relate to the naming convention the way I did.
To escape this situation, the creators of Unix decided to follow the philosophy of “Convention over Configuration”.
As the name suggests, a defined convention takes priority over any individual’s configuration, so that everyone is on the same page and everyone who comes later follows the same layout.
And the application of the philosophy looked like this:

  • Binary files –> /bin
  • Configuration files –> /etc
  • Log files –> /var/log
  • Data files –> /var
  • Meta files –> /tmp
  • Libraries –> /lib

Which resulted in the Unix File Tree

$ tree -d -L 1
.
├── apps
├── bin
├── boot
├── dev
├── etc
├── home
├── lib
├── lib64
├── lost+found
├── media
├── mnt
├── opt
├── proc
├── root
├── run
├── sbin
├── snap
├── srv
├── sys
├── tmp
├── usr
└── var

22 directories
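For a concrete feel of the convention, here is roughly where a real package such as OpenSSH scatters its files on a typical Debian/Ubuntu system (exact paths vary between distributions):

  • Binary –> /usr/bin/ssh
  • Configuration –> /etc/ssh/ssh_config
  • Log entries –> /var/log/auth.log
  • Libraries and helpers –> /usr/lib/openssh/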

You might be wondering how Unix figures out where the configuration file is, where the binary is, and where the rest of a software’s files live.
Here comes the role of the PATH variable.


$ echo $PATH
/home/dennis/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin

PATH is an environment variable specifying a set of directories where executable programs are located. In general, each executing process or user session has its own PATH setting.
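When you type a command, the shell walks these directories in order and runs the first match. You can ask the shell which directory a command was resolved from:

$ command -v ssh
/usr/bin/ssh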

So now we have a proper understanding of why we need a file tree.
To dive deep into the significance of each of these directories, stay tuned for Unix File Tree Part-2.

Cheers!

Postfix Email Server integration with SES

Have you ever thought of setting up your web or application server with your own email server? Well, when you set up an application, you will likely want your own email server to handle incoming and outgoing mail for your domain. Before I get into the topic, I assume that you have some basic knowledge of AWS. Here I am going to explain how to set up a simple Postfix email server with AWS SES to handle all your email. For more detail, please refer to the AWS SES documentation. Let’s put it simply: we have two phases in this implementation.

  1. Configure SES with Domain
  2. Configure postfix and integrate with SES on EC2

Configure SES with Domain

Amazon SES requires that you verify your email address or domain to confirm that you own it and to prevent others from using it. When you verify an entire domain, you are verifying all email addresses from that domain, so you don’t need to verify individual addresses. For example, if you verify the domain example.com, you can send email from user1@example.com, user2@example.com, or any other user at example.com. Let’s verify our domain name with SES.

  • Go to the AWS Management Console and click on SES.
  • Click on Domains, available in the top left corner.
  • Click Verify a New Domain.

  • In the Verify a New Domain dialog box, type the name of the domain that you registered using Route 53, and then choose Verify This Domain.
  • In the same dialog box, choose Use Route 53. Your domain verification and email receiving records will be added to Route 53.

Note

If you don’t see Use Route 53, your domain may not be registered with Route 53.

  • Once your domain is verified, you can use any email address from this domain as your sender address.
  • To establish a connection between Postfix and SES, you will need SMTP credentials.
  • Now choose SMTP Settings in the same SES console.
  • Choose Create My SMTP Credentials.
  • Give the user a name and click Create.
  • Download the credentials; they will be used when you configure the server.

Configure postfix and integrate with SES on EC2

In this section you are going to install and configure Postfix on an EC2 instance.

Prerequisites

  • You should have an up-and-running EC2 machine.
  • Open ports 25 (SMTP) and 22 (SSH) in the instance’s security group (see the CLI sketch below).
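If you prefer the command line, opening the ports looks roughly like this (the security group ID is a placeholder, and 0.0.0.0/0 should be tightened for real deployments):

aws ec2 authorize-security-group-ingress --group-id <sg_id> --protocol tcp --port 25 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <sg_id> --protocol tcp --port 22 --cidr 0.0.0.0/0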

Let’s get started

Let’s log in to the machine using PuTTY or an SSH client. Now we need to create a record for our domain in Route 53.

Route 53

  • Go to the AWS console and choose Route 53.
  • Choose Hosted Zones and select the domain you wish to configure.
  • Click on Create Record Set to add a new record set, then select A – IPv4 address as the resource type.
  • Add the subdomain name in the Name field and enter your EC2 IP as the record value.
  • Set the desired TTL.
  • Then click on the Create button.
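Once the record propagates, you can verify that the name resolves to your instance (the subdomain is a placeholder):

dig +short <subdomain>.example.com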

Now we will install Postfix on our EC2 machine.

sudo apt-get update

sudo apt-get install postfix

Now we need to make some changes in the Postfix configuration files. Let’s do it one by one.

To integrate Postfix with SES, we first set the mail name and then add some more lines to main.cf.

vim /etc/mailname
example.com

vim /etc/postfix/main.cf

mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
myhostname = example.com
myorigin = /etc/mailname
relayhost = [email-smtp.us-east-1.amazonaws.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_use_tls = yes
smtp_tls_security_level = encrypt
smtp_tls_note_starttls_offer = yes
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt

NOTE: The value of relayhost will change depending on the SES region you use.

Comment out the following line in the master.cf file by putting a # in front of it:

vim /etc/postfix/master.cf
#-o smtp_fallback_relay=

Edit the file /etc/postfix/sasl_passwd; if it is not present, please create it:

vim /etc/postfix/sasl_passwd

[email-smtp.us-east-1.amazonaws.com]:587 IAMUSERNAME:PASSWORD

NOTE: Replace IAMUSERNAME:PASSWORD with the SMTP username and password you downloaded, and make sure the host matches your relayhost. Save and close the file, then use the command below to create the hash map database.

sudo postmap /etc/postfix/sasl_passwd
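Since sasl_passwd holds credentials in plain text, it is worth restricting access to both the file and the hash database generated from it (a common hardening step):

sudo chown root:root /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db
sudo chmod 0600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db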

Stop and Start Postfix:

sudo service postfix stop

sudo service postfix start
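With Postfix back up, you can send a test mail through SES using the sendmail interface (both addresses are placeholders; while your SES account is in sandbox mode, the sender and the recipient must both be verified):

sendmail -f user@example.com recipient@example.com
From: user@example.com
Subject: Postfix + SES test

This is a test message.
.

If the mail does not arrive, /var/log/mail.log is the first place to look.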

Speeding up Ansible Execution Part 2

MITOGEN


In the previous post, we discussed various ways to reduce ansible-playbook execution time. Those changes were mostly made in the Ansible config file by adding or adjusting certain parameters. But as you may have noticed, those methods were not that effective in certain cases, and while using them we have to be very cautious, as they may affect Ansible’s behaviour in one way or another.


Generally, the main culprit behind slow Ansible execution is the way Ansible is executed on the hosts: it creates multiple SSH connections and does not fully utilize the available resources. To tackle this problem, Mitogen came to the rescue!

Mitogen is a distributed programming library for Python. The Mitogen extension is a set of plug-ins for Ansible that enable it to operate via Mitogen, vastly improving its performance and enhancing its functional capability.

We all know about the strategies in Ansible: linear, free, and debug. Mitogen is simply set in the strategy field of the config file, so it is just another strategy. We are not making any other changes in the Ansible config file, so no other parameter is affected; it only changes the way playbooks are executed on the hosts.

Now, coming to the Mitogen installation part: we just have to download the package to a particular location and make some changes in the Ansible config file, as shown below.

[defaults]
strategy_plugins = /path/to/mitogen/ansible_mitogen/plugins/strategy
strategy = mitogen_linear

We have to define the path where we have stored the Mitogen files and set the strategy to “mitogen_linear” under the [defaults] section of the config file, and we are good to go.
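If you are wondering how to fetch Mitogen itself, a minimal sketch (the version number is an example; check the Mitogen site for the latest release):

pip install mitogen
# or download a release tarball and extract it to the path referenced above:
curl -sL https://networkgenomics.com/try/mitogen-0.2.9.tar.gz | tar xz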

Now, after the Mitogen installation part, when we run our playbook we will notice a reasonable reduction in the execution time.

Mitogen is fast for the following reasons:

  • One connection is created per target and system logs aren’t spammed with repeated authentication events.
  • A single network roundtrip is used to execute a step whose code already exists in RAM on the target.
  • Processes are aggressively reused, avoiding the cost of invoking Python and recompiling imports, saving 300-800 ms for every playbook step.
  • Code is cached in the RAM, which further increases the speed.
  • Generally, Ansible repeatedly rewrites and extracts ZIP files into temporary directories on the target hosts; Mitogen avoids most of these rewrites.

All the above-mentioned features make Ansible run faster.
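If you want to quantify the gain on your own setup, time the same playbook under both strategies (site.yml is a placeholder for your playbook):

time ANSIBLE_STRATEGY=linear ansible-playbook site.yml
time ANSIBLE_STRATEGY=mitogen_linear ansible-playbook site.yml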

Mitogen is another extension for Ansible that reduces its execution time and is very easy to use. I think Mitogen is very underrated and one of a kind, and we should definitely give it a try.

I hope I have explained everything well; any suggestions or queries are highly appreciated.


Thanks !!!

Source:

https://mitogen.networkgenomics.com/ansible_detailed.html