Wednesday, December 26, 2018

Git-Submodule



Rocket science has always fascinated me, and one thing that totally blows my mind is the concept of modules, a.k.a. modular rockets. The literal definition states: "A modular rocket is a type of multistage rocket which features components that can be interchanged for specific mission requirements." In simple terms, the super rocket depends upon those submodules to get things done.
The case is similar in the software world, where superprojects depend on multiple other components. And when it comes to managing projects, Git can't be ignored; moreover, Git has a concept of submodules that is loosely inspired by the amazing rocket science of modules.

Hour of Need

As DevOps specialists, we provision infrastructure for our clients, and much of that work is common across clients. So we decided to automate it, as any DevOps engineer would. Hence, Opstree Solutions initiated an internal project named OSM, in which we create Ansible roles for different open-source software, with contributions from every member of our organization, so that those roles can be reused while provisioning clients' infrastructure.
This makes the client projects dependent on our OSM, which creates the problem of managing dependencies that keep getting updated over time. Doing that by hand means a lot of copy-pasting, deleting repositories, and cloning them again to get the updated version, which is itself a hair-pulling task and obviously not a best practice.
Here comes the git-submodule as a modular rocket to take our Super Rocket to its destination.

Let's Liftoff with Git-Submodules

"A submodule is a repository embedded inside another repository. The submodule has its own history; the repository it is embedded in is called a superproject."
In simple terms, a submodule is a git repository inside a superproject's git repository. It has its own .git data, which holds everything needed to keep that project under version control: commits, the remote repository address, and so on. It is like an attached repository inside your main repository, whose code you can reuse as a "module".
Let's get a practical use case of submodules.
We have a client, let's call them "Armstrong", who needs a few of our OSM Ansible roles to provision their infrastructure. Let's have a look at their git repository below.
$    cd provisioner
$    ls -a
     .  ..  ansible  .git  inventory  jenkins  playbooks  README.md  roles
$    cd roles
$    ls -a
     apache  java   nginx  redis  tomcat
We can see that Armstrong's provisioner repository (a git repository) depends upon five roles, available in OSM's repositories, to help Armstrong provision their infrastructure. So we'll add submodules such as osm_java.
$    cd java
$    git submodule add -b armstrong git@gitlab.com:oosm/osm_java.git osm_java
     Cloning into './provisioner/roles/java/osm_java'...
     remote: Enumerating objects: 23, done.
     remote: Counting objects: 100% (23/23), done.
     remote: Compressing objects: 100% (17/17), done.
     remote: Total 23 (delta 3), reused 0 (delta 0)
     Receiving objects: 100% (23/23), done.
     Resolving deltas: 100% (3/3), done.
With the above command, we are adding a submodule named osm_java whose URL is git@gitlab.com:oosm/osm_java.git and whose branch is armstrong. The branch is named armstrong because, to keep each client's required configuration isolated, we create a separate branch of OSM's repositories per client.
Now if we take a look at our superproject provisioner, we can see a file named .gitmodules which holds the information regarding the submodules.
$    cd provisioner
$    ls -a
     .  ..  ansible  .git  .gitmodules  inventory  jenkins  playbooks  README.md  roles
$    cat .gitmodules
     [submodule "roles/java/osm_java"]
          path = roles/java/osm_java
          url = git@gitlab.com:oosm/osm_java.git
          branch = armstrong
Here you can clearly see that a submodule osm_java has been attached to the superproject provisioner.
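As a side note, anyone cloning Armstrong's provisioner afresh will also need the submodules fetched; below is a minimal sketch of the usual commands (the provisioner URL here is a placeholder, not the real one).
$    git clone --recurse-submodules git@gitlab.com:<your_gitlab_group>/provisioner.git
# Or, if the repository was already cloned without its submodules:
$    git submodule update --init --recursive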

What if there was no submodule?

If that were the case, we would need to clone the repository from OSM, copy it into provisioner, and then add and commit it there. Phew... that would also have worked.
But what if an update is made in osm_java that has to be used in provisioner? We could not easily sync with OSM; we would need to delete osm_java, clone it again, and copy-paste it into provisioner, which sounds clumsy and is certainly not the best way to automate the process.
With osm_java as a submodule, we can easily update this dependency without messing things up.
$    git submodule status
     -d3bf24ff3335d8095e1f6a82b0a0a78a5baa5fda roles/java/osm_java
$    git submodule update --remote
     remote: Enumerating objects: 3, done.
     remote: Counting objects: 100% (3/3), done.
     remote: Total 2 (delta 0), reused 2 (delta 0), pack-reused 0
     Unpacking objects: 100% (2/2), done.
     From git@gitlab.com:oosm/osm_java.git     0564d78..04ca88b  armstrong     -> origin/armstrong
     Submodule path 'roles/java/osm_java': checked out '04ca88b1561237854f3eb361260c07824c453086'

By using the above update command, we have successfully updated the submodule, which actually pulled the changes from the armstrong branch of OSM's origin.
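One detail worth noting (not shown in the output above) is that the superproject tracks the submodule by commit, so after an update the new pointer still has to be committed in provisioner. A minimal sketch:
$    git add roles/java/osm_java
$    git commit -m "[Master][Update] Bump osm_java submodule on armstrong branch"
$    git push origin master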

What have we learned? 

In this blog post, we learned to use git-submodules to keep dependent repositories in sync with our superproject, without getting our hands dirty with error-prone copy-pasting.
Kick out the practices that ruin the fun; sit back and enjoy the automation.

Referred links:
Image: google.com
Documentation: https://git-scm.com/docs/gitsubmodules




Tuesday, December 18, 2018

How DHCP and DNS are managed in AWS VPC - Part 1

In our day-to-day life, we take a lot of things for granted: our body, food, friends, family, the internet, the domain name (URL) of Facebook, the IP address of Instagram, etc. In our ignorance, we forget to consider how much their absence would affect us, and the hardship and inconvenience we'd face if, one day, any of these just vanished. Let's leave the leisure of friends, food, etc. to be discussed some other time. For now, we'll limit our thoughts to how DHCP and DNS are managed in AWS. These are the services responsible for IP addresses and domain names; combined, they are the backbone of connections among hosts and servers over a network.

Both of these services work a little differently in AWS than in a physical infrastructure. In a physical set-up, DHCP can be managed either by a router or by dedicated DHCP servers, depending on the size of the infrastructure, whereas in AWS it is managed through DHCP option sets. Similarly, DNS can be managed either by the Internet Service Provider (ISP) or by dedicated DNS servers, whereas AWS, apart from having reserved DNS servers (AmazonProvidedDNS), has a few other tricks up its sleeve as well. To understand this better, we'll go through both of them one by one.

Dynamic Host Configuration Protocol (DHCP)

DHCP helps our hosts/systems obtain a unique identity on a network. We call this identity an IP address, but it's not just the IP address; DHCP provides much more information than that, like the subnet mask, default gateway, and DNS server addresses. All of these together form the network configuration information. DHCP is a client-server protocol which uses a 4-way handshake (Discover, Offer, Request, Acknowledge) to establish the lease. Below is a visual representation:

Fig. 1: 4-way handshake connection

In the case of an AWS VPC, this configuration information is contained in DHCP option sets. Option sets contain only those DHCP options that are customizable; they do not contain IP addresses, since IP addresses are assigned automatically when an instance is launched. There are five options currently supported in DHCP option sets. When an instance launched in a VPC requests its dynamic configuration, the AWS DHCP server answers with the entries of the option set listed below (a CLI sketch follows the list):
  1. domain-name-servers: This specifies which DNS servers the hosts will query. We can provide the IP addresses of up to four custom domain name servers (DNS), separated by commas, or leave it as AmazonProvidedDNS, which is there by default.
  2. domain-name: This specifies our domain name, for example google.com, opstree.com, etc. When using AmazonProvidedDNS, the default domain name varies with the region and differs for public and private instances. When using custom DNS, we provide our own domain name and DNS servers, which we will discuss in detail later.
  3. ntp-servers: NTP servers synchronize clocks across the servers in our network. They are accurate to within tens of milliseconds even over the internet, and even more accurate on a local network. We can specify the IP addresses of up to four NTP servers.
  4. netbios-name-servers: They are similar to DNS servers in that both provide identification for hosts on a network. DNS provides host identification over the internet, while NetBIOS identifies hosts on a local network; it does not need the internet and is always available to the hosts connected directly to it, with no need to register names in hosts files. It allows applications on different connected hosts to communicate with each other. We can specify the IP addresses of up to four NetBIOS name servers.
  5. netbios-node-type: There are four NetBIOS node types with which a Windows machine resolves NetBIOS names: 1, 2, 4, and 8. AWS recommends specifying 2 (point-to-point), mostly because broadcast and multicast are not yet supported.
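To make the mapping concrete, here is a small AWS CLI sketch that creates an option set using the five keys above (the domain name and IP addresses are placeholders, not real servers):
aws ec2 create-dhcp-options --dhcp-configurations \
    "Key=domain-name,Values=example.internal" \
    "Key=domain-name-servers,Values=10.0.0.2" \
    "Key=ntp-servers,Values=10.0.0.42" \
    "Key=netbios-name-servers,Values=10.0.0.43" \
    "Key=netbios-node-type,Values=2"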

Fig. 2: DHCP option set



These option sets are immutable, so if we need to make changes, or add or remove an option, we must create a new option set. A VPC can have several option sets, but only one can be associated with it at a time. If we replace the option set of a VPC, we do not need to restart our running instances; the changes reflect automatically as instances renew their DHCP leases. We can always connect to an instance and explicitly renew the lease, if need be. When a VPC is deleted, the DHCP options set associated with it is also deleted.
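Since option sets are immutable, swapping a VPC over to a newly created set is itself a single call; a quick sketch with placeholder IDs:
# Associate a new option set with a VPC
aws ec2 associate-dhcp-options --dhcp-options-id dopt-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0
# Or associate no custom options at all by passing the literal value "default"
aws ec2 associate-dhcp-options --dhcp-options-id default --vpc-id vpc-0123456789abcdef0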

Domain name service (DNS)

DNS maps names on the internet to their corresponding IP addresses. The DNS hostname of a host is unique and unambiguous; it consists of a host name and a domain name. A DNS server resolves DNS hostnames to their corresponding IP addresses. This is how we can reach our intended website by just typing its name in the browser, without needing to remember its IP address at all. The image below gives a far better idea of how it works; it really is amazing:
Fig. 3: Working of DNS

AWS provides us a default DNS, which is called AmazonProvidedDNS. We can also configure our VPC to use custom DNS servers through DHCP option sets, as we saw above. There are two attributes we need to keep in mind if we need DNS support in our VPC: enableDnsHostnames and enableDnsSupport.
enableDnsHostnames, if set to true, ensures that the instances in the VPC get public DNS hostnames, but only if enableDnsSupport is also set to true. This is because a true value of enableDnsSupport ensures that the Amazon-provided DNS server in the VPC resolves public DNS hostnames to IP addresses.
This means that the AWS DNS service, for instances in a VPC, works if both are true and does not work if either of them is false. Having said that, we should keep in mind that a custom DNS service will work even when both are false, as long as we associate a custom DHCP option set with the VPC. Let's dive deeper to understand this.

AmazonProvidedDNS (both enableDnsHostnames and enableDnsSupport are true)

As mentioned earlier, when we create a VPC, it is provided with a DHCP option set which has the default DNS, AmazonProvidedDNS, and a default domain name as per the naming convention. Any instance launched in this VPC will get a private DNS hostname. Public instances will get a public DNS hostname if both the attributes mentioned earlier (enableDnsHostnames and enableDnsSupport) are set to true. The private DNS hostname resolves to the private IP address of the instance, and the public DNS hostname resolves to the public IP address.

Both the attributes, enableDnsHostnames and enableDnsSupport, are true by default in the default VPC of a region and in VPCs created through the wizard. In VPCs created any other way (CLI, API, or manually through the console), only enableDnsSupport is true by default; the other needs to be set to true.
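If you are working with one of those VPCs, both attributes can be checked and enabled from the CLI; a minimal sketch with a placeholder VPC ID:
# Check the current values (one attribute per call)
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsHostnames
# Turn them on (again, one attribute per call)
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-support "{\"Value\":true}"
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-hostnames "{\"Value\":true}"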
 
There is a reserved IP address in our VPC CIDR block for the Amazon-provided DNS. It is the base of the CIDR block plus two, i.e., if the CIDR block is 10.0.0.0/16, the DNS IP address is 10.0.0.2. Apart from this, DNS queries can also hit 169.254.169.253, which is also an Amazon-provided DNS server.
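From an instance inside the VPC, it is easy to confirm which resolver is answering; a small sketch using dig (the queried domain is just an example):
# Query the ".2" resolver of a 10.0.0.0/16 VPC directly
dig +short amazon.com @10.0.0.2
# The link-local address works from any VPC, regardless of its CIDR
dig +short amazon.com @169.254.169.253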

Fig. 4: This is a default DHCP option set. In the options field, we can see that domain-name is provided by default and the name-server is AmazonProvidedDNS


Custom DNS servers (either of enableDnsHostnames and enableDnsSupport is false)

We can use our own DNS servers as well. Amazon allows this, and it can be configured rather easily. All we have to do is create a new DHCP option set and fill the required options with specific details. In the Domain name servers option, we can specify comma-separated IP addresses of our DNS servers, and in Domain name, we can specify the respective domain name. Once the DHCP option set is created, we associate it with the VPC and voila, it is done! The DNS servers can be from any vendor, or our own.

Fig. 5: This is a custom DHCP option set that I created. I have put my own domain name and IP address of DNS name servers.

Conclusion

Both DHCP and DNS are paramount to internet connectivity. If not for these services, we'd still be writing with pen and paper and using post offices. AWS leaves no stone unturned in making these services available and customizable. In the next blog, i.e., "How DHCP and DNS are managed in AWS VPC - Part 2", I will show how we can host our own website on EC2 instances and make it accessible to the world using these AWS services.

Tuesday, December 11, 2018

Setting up MySQL Monitoring with Prometheus

One thing that I love about Prometheus is that it has a multitude of integrations with different services, both officially supported and third-party.
Let's see how we can monitor MySQL with Prometheus.

Those who are just starting out or are new to Prometheus can refer to our earlier blog on it.

MySQL is a popular open-source relational database system which exposes a large number of metrics for monitoring, but not in the Prometheus format. To capture that data in the Prometheus format, we use mysqld_exporter.

I am assuming that you have already set up a MySQL server.

Configuration changes in MySQL

For setting up MySQL monitoring, we need a user with read access on all databases. We could use an existing user, but it is good practice to always create a dedicated database user for each new service.

CREATE USER 'mysqld_exporter'@'localhost' IDENTIFIED BY 'password' WITH MAX_USER_CONNECTIONS 3;

After creating the user, we simply have to grant the required permissions to it.
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'mysqld_exporter'@'localhost';

Setting up MySQL Exporter

Download mysqld_exporter from GitHub. We are downloading version 0.11.0, the latest release at the time of writing; change the version if you want to download a newer one.

wget https://github.com/prometheus/mysqld_exporter/releases/download/v0.11.0/mysqld_exporter-0.11.0.linux-amd64.tar.gz

Then simply extract the tar file and move the binary to an appropriate location.
tar -xvf mysqld_exporter-0.11.0.linux-amd64.tar.gz

mv mysqld_exporter-0.11.0.linux-amd64/mysqld_exporter /usr/bin/

Although we can execute the binary directly, the best practice is to create a service for every third-party binary application. Also, we are assuming that systemd is already installed on your system; if you are using init, then you will have to create an init script for the exporter instead.

useradd mysqld_exporter
vim /etc/systemd/system/mysqld_exporter.service

[Unit]
Description=MySQL Exporter Service
Wants=network.target
After=network.target

[Service]
User=mysqld_exporter
Group=mysqld_exporter
Environment="DATA_SOURCE_NAME=mysqld_exporter:password@unix(/var/run/mysqld/mysqld.sock)/"
Type=simple
ExecStart=/usr/bin/mysqld_exporter
Restart=always

[Install]
WantedBy=multi-user.target

You may need to adjust the Unix socket location according to your environment.
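Once the unit file is in place, reload systemd and start the service; a minimal sketch, assuming the unit file above was saved as mysqld_exporter.service:
systemctl daemon-reload
systemctl enable mysqld_exporter
systemctl start mysqld_exporter
systemctl status mysqld_exporter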

If you now visit http://localhost:9104/metrics, you will be able to see the exported metrics.

Prometheus Configurations

For scraping metrics from mysqld_exporter, we have to make some configuration changes in Prometheus. The changes are not fancy; we just have to add another job for mysqld_exporter, like this:


vim /etc/prometheus/prometheus.yml

  - job_name: 'mysqld_exporter'
    static_configs:
      - targets:
          - <mysql_ip>:9104
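Before restarting, it does not hurt to validate the file; if your Prometheus installation ships the promtool binary, a quick sketch:

promtool check config /etc/prometheus/prometheus.yml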


After the configuration changes, we just have to restart the Prometheus server.

systemctl restart prometheus


Then, if you go to the Prometheus server's expression browser, you can find and query the MySQL metrics.
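A few example expressions to try there (metric names as exposed by mysqld_exporter; exact availability depends on your MySQL version and exporter flags):
# Is the exporter able to reach MySQL? (1 = up)
mysql_up
# Currently connected client threads
mysql_global_status_threads_connected
# Query throughput over the last five minutes
rate(mysql_global_status_questions[5m])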



So, in this blog, we have covered the MySQL configuration changes needed for monitoring, the mysqld_exporter setup, and the Prometheus configuration changes.

In the next part, we will discuss how to create a visually impressive dashboard in Grafana for better visualization of MySQL metrics. See you soon... :)


Monday, December 3, 2018

Gitlab-CI with Nexus


Recently I was asked to set up a CI pipeline for a Spring-based application.
I said "piece of cake", as I had already worked on Jenkins pipelines and knew Maven, so that wouldn't be a problem. But there was a hitch: the pipeline had to be on GitLab CI. I said "no problem, I'll learn about it", with a ninja spirit.
So, for starters, what is a GitLab CI pipeline? Those who have already worked with Jenkins and Maven know the CI workflow of building the code, testing it, packaging it, and deploying it using Maven. You can add other goals too, depending upon the requirement.
The CI process in GitLab CI is defined within a file in the code repository itself using a YAML configuration syntax.
The work is then dispatched to machines called runners, which are easy to set up and can be provisioned on many different operating systems. When configuring runners, you can choose between different executors like Docker, shell, VirtualBox, or Kubernetes to determine how the tasks are carried out.

What are we going to do?
We will be establishing a CI/CD pipeline using GitLab CI and deploying artifacts to a Nexus repository.

Resources Used:
  1. GitLab server: I'm using GitLab to host my code.
  2. Runner server: it could be a Vagrant box or an EC2 instance.
  3. Nexus server: it could be a Vagrant box or an EC2 instance.

     Before going further, let's get familiar with a few terms.
  • Artifacts: Objects created by a build process, usually project JARs and library JARs. These can also include use cases, class diagrams, requirements, and design documents.
  • Maven repository (Nexus): A repository is a place where all the project JARs, library JARs, plugins, or any other project-specific artifacts are stored and can be used by Maven easily; here we are going to use Nexus as the central repository.
  • CI: A software development practice in which you build and test software every time a developer pushes code to the repository, which happens several times a day.
  • GitLab Runner: GitLab Runner is the open-source project that is used to run your jobs and send the results back to GitLab. It is used in conjunction with GitLab CI, the open-source continuous integration service included with GitLab that coordinates the jobs.
  • .gitlab-ci.yml: This YAML file defines a set of jobs with constraints stating when they should be run. You can specify an unlimited number of jobs, which are defined as top-level elements with an arbitrary name and always have to contain at least the script clause. Whenever you push a commit, a pipeline will be triggered for that commit.

Strategy to Setup Pipeline

Step-1:  Setting up GitLab Repository. 

I'm using a Spring-based application, Spring3Hibernate, with a directory structure like the one below.
$    cd spring3hibernateapp
$    ls
     pom.xml pom.xml~ src
# Now let's start pushing this code to GitLab
$    git remote -v
     origin git@gitlab.com:<your_gitlab_id>/spring3hibernateapp.git (fetch)
     origin git@gitlab.com:<your_gitlab_id>/spring3hibernateapp.git (push)
# Adding the code to the working directory
$    git add -A
# Committing the code
$    git commit -m "[Master][Add] Adding the code "
# Pushing it to gitlab
$    git push origin master

Step-2:  Install GitLab Runner manually on GNU/Linux

# Simply download one of the binaries for your system:

$    sudo wget -O /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-386
# Give it permissions to execute:
$    sudo chmod +x /usr/local/bin/gitlab-runner 
# Optionally, if you want to use Docker, install Docker with:
$    curl -sSL https://get.docker.com/ | sh 
# Create a GitLab CI user:
$    sudo useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash 
# Install and run as service:
$    sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
$    sudo gitlab-runner start
Step-3: Registering a Runner
To get the runner configuration, you need to go to GitLab > spring3hibernateapp > Settings > CI/CD > Runners
and get the registration token for runners.

# Run the following command:
$     sudo gitlab-runner register
       Runtime platform                                    arch=amd64 os=linux pid=1742 revision=3afdaba6 version=11.5.0
       Running in system-mode.                             
# Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
https://gitlab.com/
# Please enter the gitlab-ci token for this runner:
****8kmMfx_RMr****
# Please enter the gitlab-ci description for this runner:
[gitlab-runner]: spring3hibernate
# Please enter the gitlab-ci tags for this runner (comma separated):
build
       Registering runner... succeeded                     runner=ZP3TrPCd
# Please enter the executor: docker, docker-ssh, shell, ssh, virtualbox, docker+machine, parallels, docker-ssh+machine, kubernetes:
docker
# Please enter the default Docker image (e.g. ruby:2.1):
maven       
       Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
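To confirm that the runner is registered and running, a quick sketch:
$     sudo gitlab-runner list
$     sudo gitlab-runner restart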
# You can also create systemd service in /etc/systemd/system/gitlab-runner.service.
[Unit]
Description=GitLab Runner
After=syslog.target network.target
ConditionFileIsExecutable=/usr/local/bin/gitlab-runner

[Service]
StartLimitInterval=5
StartLimitBurst=10
ExecStart=/usr/local/bin/gitlab-runner "run" "--working-directory" "/home/gitlab-runner" "--config" "/etc/gitlab-runner/config.toml" "--service" "gitlab-runner" "--syslog" "--user" "gitlab-runner"
Restart=always
RestartSec=120

[Install]
WantedBy=multi-user.target
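If you go the systemd route, reload the daemon and enable the unit; a minimal sketch, assuming the unit file above:
$    sudo systemctl daemon-reload
$    sudo systemctl enable gitlab-runner
$    sudo systemctl start gitlab-runner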
Step-4: Setting up Nexus Repository
You can set up a repository by installing the open-source version of Nexus: visit the Nexus OSS page and download the TGZ or ZIP version.
But to keep it simple, I used a Docker container for that.
# Install docker
$    curl -sSL https://get.docker.com/ | sh
# Launch a NEXUS container and bind the port
$    docker run -d -p 8081:8081 --name nexus sonatype/nexus:oss
You can now access your Nexus on http://<public-ip>:8081/nexus
and log in as admin with the default password admin123.
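Nexus can take a minute or two to initialize inside the container; a quick way to keep an eye on it, assuming the container name used above:
$    docker ps --filter name=nexus
$    docker logs -f nexus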

Step-5: Configure the NEXUS deployment

Clone your code and enter the repository
$    cd spring3hibernateapp/
# Create a folder called .m2 in the root of your repository
$    mkdir .m2
# Create a file called settings.xml in the .m2 folder
$    touch .m2/settings.xml
# Copy the following content in settings.xml
<settings xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.1.0 http://maven.apache.org/xsd/settings-1.1.0.xsd"
    xmlns="http://maven.apache.org/SETTINGS/1.1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <servers>
    <server>
      <id>central</id>
      <username>${env.NEXUS_REPO_USER}</username>
      <password>${env.NEXUS_REPO_PASS}</password>
    </server>
    <server>
      <id>snapshots</id>
      <username>${env.NEXUS_REPO_USER}</username>
      <password>${env.NEXUS_REPO_PASS}</password>
    </server>
  </servers>
</settings>
The username and password will be replaced with the correct values at runtime, using CI/CD variables.
# Updating the repository paths in pom.xml (these elements live inside <distributionManagement>)
<distributionManagement>
  <repository>
    <id>central</id>
    <name>Central</name>
    <url>${env.NEXUS_REPO_URL}central/</url>
  </repository>
  <snapshotRepository>
    <id>snapshots</id>
    <name>Snapshots</name>
    <url>${env.NEXUS_REPO_URL}snapshots/</url>
  </snapshotRepository>
</distributionManagement>
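Before wiring this into CI, it is worth a local dry run: export the same variables the pipeline will provide and call the deploy goal by hand (the values below are placeholders matching the variables defined in the next step):
$    export NEXUS_REPO_URL=http://<nexus_ip>:8081/nexus/content/repositories/
$    export NEXUS_REPO_USER=admin
$    export NEXUS_REPO_PASS=admin123
$    mvn -s .m2/settings.xml deploy -Dmaven.test.skip=true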

Step-6: Configure GitLab CI/CD for simple maven deployment.

GitLab CI/CD uses a file in the root of the repo, named .gitlab-ci.yml, to read the definitions of the jobs that will be executed by the configured GitLab Runners.
First of all, remember to set up the variables for your deployment. Navigate to your project's Settings > CI/CD > Variables page and add the following ones (replace them with your current values, of course):
  • NEXUS_REPO_URL: http://<nexus_ip>:8081/nexus/content/repositories/ 
  • NEXUS_REPO_USER: admin
  • NEXUS_REPO_PASS: admin123
Now it’s time to define jobs in .gitlab-ci.yml and push it to the repo:
image: maven

variables:
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

cache:
  paths:
    - .m2/repository/
    - target/

stages:
  - build
  - test
  - package 
  - deploy

codebuild:
  tags:
    - build      
  stage: build
  script: 
    - mvn compile

codetest:
  tags:
    - build
  stage: test
  script:
    - mvn $MAVEN_CLI_OPTS test
    - echo "The code has been tested"

Codepackage:
  tags:
    - build
  stage: package
  script:
    - mvn $MAVEN_CLI_OPTS package -Dmaven.test.skip=true
    - echo "Packaging the code"
  artifacts:
    paths:
      - target/*.war
  only:
    - master  

Codedeploy:
  tags:
    - build
  stage: deploy
  script:
    - mvn $MAVEN_CLI_OPTS deploy -Dmaven.test.skip=true
    - echo "installing the package in local repository"
  only:
    - master
Now add the changes, commit them, and push them to the remote repository on GitLab. A pipeline will be triggered for your commit, and if everything goes well, our mission will be accomplished.
Note: You might get some issues with Maven plugins, which will need to be managed in pom.xml, depending upon the environment.
In this blog, we covered the basic steps to use a Nexus Maven repository to automatically publish and consume artifacts.
