Tuesday, February 9, 2016

jgit-flow Maven Plugin to Release a Java Application

Introduction

As a DevOps engineer I need a smooth way to release Java applications, so I compared the two Maven plugins that are used to release Java applications, and in the end I found that the jgit-flow plugin is far better than the maven-release plugin for the following reasons:
  • The maven-release plugin creates .backup and release.properties files in your working directory, which can be committed by mistake when they should not be. The jgit-flow plugin doesn't create these or any other files in your working directory.
  • The maven-release plugin creates two tags.
  • The maven-release plugin does a build in the prepare goal and another in the perform goal, causing tests to run twice; the jgit-flow plugin builds the project once, so tests run only once.
  • If something goes wrong during a maven-release plugin execution, it is very tough to roll back. The jgit-flow plugin, on the other hand, makes all its changes on a branch; if you want to roll back, just delete that branch.
  • The jgit-flow plugin doesn't run site-deploy.
  • The jgit-flow plugin provides an option to turn Maven deployment on or off.
  • The jgit-flow plugin provides options to turn remote pushes/tagging on or off.
  • The jgit-flow plugin keeps the master branch always at the latest release version.
Now let's see how to integrate the jgit-flow Maven plugin and use it.

How to Use the jgit-flow Maven Plugin for a Release

Follow these steps:
  1. Add the following lines in your pom.xml for source code management access:
           
           <scm>
             <connection>scm:git:<Git Repo URL></connection>
             <developerConnection>scm:git:<Git Repo URL></developerConnection>
           </scm>
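     For example, with a hypothetical repository at github.com/example/my-app (a placeholder, not from the original post), the section might look like this:

           <scm>
             <connection>scm:git:https://github.com/example/my-app.git</connection>
             <developerConnection>scm:git:git@github.com:example/my-app.git</developerConnection>
           </scm>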
           
     
  2. Add these lines to resolve the jgit-flow Maven plugin and set the other options that will be required during the build:
           
     <build>
       <plugins>
         <plugin>
           <groupId>com.atlassian.maven.plugins</groupId>
           <artifactId>maven-jgitflow-plugin</artifactId>
           <version>1.0-m4.3</version>
           <configuration>
             <pushReleases>true</pushReleases>
             <keepBranch>false</keepBranch>
             <autoVersionSubmodules>true</autoVersionSubmodules>
             <noTag>true</noTag>
             <allowUntracked>true</allowUntracked>
             <pullDevelop>true</pullDevelop>
             <pullMaster>true</pullMaster>
             <allowSnapshots>true</allowSnapshots>
             <flowInitContext>
               <masterBranchName>master-test</masterBranchName>
               <developBranchName>deploy-test</developBranchName>
             </flowInitContext>
           </configuration>
         </plugin>
       </plugins>
     </build>
             
  3. The above code snippet does the following:
    • Maven resolves the jgit-flow plugin dependency.
    • The configuration section describes how the jgit-flow plugin will behave.
    • The pushReleases tag enables or disables pushing the intermediate branches to the remote Git repository.
    • The keepBranch tag enables or disables keeping the intermediate branch after the release.
    • The noTag tag enables or disables creating the release tag in Git.
    • The allowUntracked tag controls whether untracked files are allowed during the check.
    • The flowInitContext tag overrides the plugin's default branch names.
    • In the above snippet there are only two branches: master, from where the code will be pulled, and an intermediate branch that will be used by the jgit-flow plugin. As discussed, the jgit-flow plugin uses branches to keep its records, so a development branch will be created by the plugin; it resides locally, not remotely, and tracks the release version, etc.
  4. To publish your releases to the repository manager, add these lines:
           
    <distributionManagement>
      <repository>
        <id><auth id></id>
        <url><repo url of repository managers></url>
      </repository>
      <snapshotRepository>
        <id><auth id></id>
        <url><repo url of repository managers></url>
      </snapshotRepository>
    </distributionManagement>
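     As a concrete illustration, assuming a hypothetical Nexus instance at nexus.example.com with server IDs nexus-releases and nexus-snapshots (all placeholders), the section could look like:

     <distributionManagement>
       <repository>
         <id>nexus-releases</id>
         <url>http://nexus.example.com/content/repositories/releases</url>
       </repository>
       <snapshotRepository>
         <id>nexus-snapshots</id>
         <url>http://nexus.example.com/content/repositories/snapshots</url>
       </snapshotRepository>
     </distributionManagement>

     The <id> values must match the server entries you will add to settings.xml in the next step.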
          
     
  5. Put the following lines into your ~/.m2/settings.xml with your repository manager credentials:
            
     <settings>
       <servers>
         <server>
           <id><PUT THE ID OF THE REPOSITORY OR SNAPSHOTS ID HERE></id>
           <username><USERNAME></username>
           <password><PASSWORD></password>
         </server>
       </servers>
     </settings>
          
     

Start Release: jgit-flow Maven plugin command

To start a new release, execute mvn jgitflow:release-start. This creates a release branch from your development branch.

Finish Release: jgit-flow Maven plugin command

To finish the release, execute mvn jgitflow:release-finish. This merges the release branch back into the master and develop branches.

As an example, I have created a repository on github.com for testing, with two branches: master-test and deploy-test. It is assumed that you have Maven and Git configured on your system.
In the deploy-test branch, run the following command:
$ mvn clean -Dmaven.test.skip=true install jgitflow:release-start


This command will prompt you for the release version and create a release branch named release/<version>. It will then push this release branch to the GitHub repository temporarily, because we are not keeping the intermediate branches.
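If you want to see what the plugin has created, you can list the branches while the release branch exists (plain Git, nothing plugin-specific):

$ git fetch
$ git branch -a    # lists local and remote branches, including release/<version>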
Now, at the end, run this command:
$ mvn -Dmaven.test.skip=true jgitflow:release-finish
After this command finishes, it will delete release/<version> from both local and remote.
Now you can check the changes jgitflow made to the pom file. On the master-test branch you can see that the <version> tag no longer carries the -SNAPSHOT suffix and the version has been incremented; it holds the current release version of the application.

And on the deploy-test branch you can see the new development version that developers will keep working on.
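To make the version changes concrete, here is a hypothetical before/after of the <version> tag (the version numbers are placeholders; yours will differ):

Before the release, on deploy-test:
    <version>1.0-SNAPSHOT</version>
After release-finish, on master-test:
    <version>1.0</version>
And on deploy-test, bumped to the next development iteration:
    <version>1.1-SNAPSHOT</version>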
Saturday, January 9, 2016

2015 - What an exciting year has gone by

 

OpsTree 2015 Journey

2015 – What an exciting year has gone by. We have had all the fun we could have asked for. We learnt, grew, built relationships, and earned valuable trust, and we did all this because we are driven by our core values of honesty, transparency and assiduousness. At the onset of the new year, we would like to thank all our partners who placed their valuable faith in us and helped us do our bit in their success stories. Read on to know all about 2015 at OpsTree.

We built great partnerships

2015 at Opstree was a year filled with excitement, challenges and success. The team grew and we shifted to a great new office, but most importantly we built great partnerships. The team worked towards achieving the agreed goals with the intention of going beyond the expectations of our partners at each step. Our relations with our partners stand testimony to this.

 

We multiplied productivity by learning

We grew qualitatively by deciding to invest heavily in learning. We dedicate one full day per week to self-development and learning. The results of our learning are evident in thought leadership initiatives like our blogs, GitHub contributions and open source contributions. But for us, what is more valuable than thought leadership is the significant improvement in the knowledge pool of the company. Today our resources are at least 2x more productive because of their commitment to learning.

We matured as a DevOps company

We found that there are three types of requirements in the market, and we aligned our offerings along those lines. We divided our company into three verticals:

1. End-to-end DevOps management – This vertical works closely with product companies, typically startup/midsize companies. OpsTree manages the complete dev/tech ops of such companies, which allows our partners to focus on their core function while still having a state-of-the-art infra setup and happy end users.

2. Work with big players – This vertical works with large service companies in team augmentation/back-to-back DevOps contract mode. This enables OpsTree to create an impact on bigger players through our esteemed partners.

3. DevOps solutioning – This team consists of DevOps architects who are passionate about DevOps. Architectural decisions, development of libraries, solutioning and conducting trainings are what drive them.

Worked with some prestigious Logos

Some of our key outcomes of the last year have been the end-to-end management of DevOps operations for certain well-established product companies. It is through their recommendations and references that we are growing at this fast pace. All our client relations have been partnerships where we have grown together to help each other deliver the best. Our expanding team is filled with enthusiastic people who are always looking for their next challenge.

We Guarantee improved Productivity

At the core of everything we do are our core values of giving our best to every project and trying every possible method to get the most optimized outcome. Giving our partners a “wow” experience is what we aim for. 

Welcome 2016

It is a well-researched and accepted fact – ‘DevOps helps companies deliver more’. Gartner says that by 2016 DevOps will evolve from a niche to a mainstream strategy employed by 25% of Global 2000 organizations. We are ready for this exciting new tomorrow. Are you?


Sunday, December 20, 2015

The Reset Button !!!!


Anyone who has recently used Google Compute Engine to create VM instances will be aware of the Reset button available there.

Since I wasn't very sure what it did, I just clicked it without much know-how. This reverted all the servers to their original state, as if they were freshly built, which is certainly a very bad thing for us.

But we had Puppet, which we had used to create the whole infrastructure. All the modules we had used and the changes we had made were committed to a GitHub repo, and this certainly was a boon to us; otherwise we would have had to sit the whole day making those changes on the servers again.

In just a couple of minutes the new instances were created using the Compute Engine create-instance-group feature. We installed Foreman and Git on one of the servers and set up the Puppet agents accordingly. This took around 15 more crucial minutes, and then we cloned our GitHub repo, which contains all the necessary modules and configuration required for the rest of the infrastructure.

These are the conditions where configuration management tools like Puppet come into the picture and help us get back on track in the shortest possible time.
It was a hectic day, but it definitely taught us several important lessons. Using Puppet to maintain infrastructure is really valuable these days: it is reliable, efficient, and fast for deploying configurations to servers and getting them ready for production workloads.

Thursday, December 3, 2015

Setup Jenkins using Ansible

In this document I'll walk you through how you can set up Jenkins using Ansible.

Prerequisites
  • OS - Ubuntu (at least two machines are required in production)
  • First machine for the Ansible installation
  • Second machine where we will install the Jenkins server
  • You should have a basic understanding of the Ansible workflow.
Note: You should have passwordless login enabled on the second machine. Use this link:
http://www.linuxproblem.org/art_9.html
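A typical way to set this up, as a sketch assuming the IP used later in this post and a user named ubuntu (adjust both for your machines):

$ ssh-keygen -t rsa                  # run on the Ansible machine, accept the defaults
$ ssh-copy-id ubuntu@192.168.33.15   # copy your public key to the Jenkins machine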

Ansible Installation
Before starting to install Jenkins with Ansible, you need to have Ansible installed on your system.

$ curl https://raw.githubusercontent.com/OpsTree/AnsiblePOC/alok/scripts/setup_ansible.sh | sudo bash
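If you would rather not pipe a script into bash, Ansible can also be installed from its official Ubuntu PPA (an alternative route, not what the script above does):

$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible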

Setup Jenkins using Ansible

Install the Jenkins Ansible role

Once we have Ansible installed on our system, we can start installing Jenkins. To do that, we will use an already available Ansible role that sets up Jenkins.

$ ansible-galaxy install geerlingguy.jenkins
To know more about the Jenkins role, see this link: https://galaxy.ansible.com/detail#/role/440

The default directory path for Ansible roles is /etc/ansible/roles.

Make the Ansible playbook file
Now the next step is to use the installed Jenkins role to install Jenkins. For this purpose we will create a playbook and a hosts file with the content below.
$ cd ~/MyPlaybook/jenkins
Create a file named hosts and add the content below:
[jenkins_hosts]
192.168.33.15


Next, create a file site.yml and add the content below:
---
- hosts: jenkins_hosts
  roles:
    - { role: geerlingguy.jenkins }


So the configuration files are done; the next step is to run the ansible-playbook command:
$ ansible-playbook -i hosts site.yml
Now that Jenkins is running, go to http://192.168.33.15:8080. You'll be welcomed by the default Jenkins screen.
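The role also exposes variables that you can override from the playbook, for example to change the port Jenkins listens on. A minimal sketch, assuming the jenkins_http_port variable documented on the role page linked above (check it against your installed version):

---
- hosts: jenkins_hosts
  vars:
    jenkins_http_port: 8081
  roles:
    - { role: geerlingguy.jenkins }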

Saturday, November 28, 2015

Opstree SHOA Part 1: Build & Release


At Opstree we have started a new initiative called SHOA, Saturday Hands On Activity. Under this program we pick a concept, tool, or technology and do a hands-on activity. Whatever we have understood during the day is then captured in a blog or series of blogs.
Since this is the first Hands On Activity, we are starting with Build & Release.

 

What we intend to do 

 

Setup Build & Release for project under git repository https://github.com/OpsTree/ContinuousIntegration.

What all we will be doing to achieve it

  • Finalize an SCM tool that we are going to use: Puppet/Chef/Ansible.
  • Automated setup of Jenkins using SCM tool.
  • Automated setup of Nexus/Artifactory/Archiva using SCM tool.
  • Automated setup of Sonar using SCM tool.
  • Dev environment setup using the SCM tool: Since this is a web app project, our Dev environment will have Nginx & Tomcat.
  • QA environment setup using the SCM tool: Since this is a web app project, our QA environment will have Nginx & Tomcat.
  • Creation of various build jobs
    • Code Stability Job.
    • Code Quality Job.
    • Code Coverage Job.
    • Functional Test Job on dev environment.
  • Creation of release Job.
  • Creation of deployment job to do deployment on Dev & QA environment.

This activity is open to the public as well, so if you have any suggestions or want to attend, you are most welcome.

Tuesday, October 27, 2015

Marrying Nginx with ELB

A few weeks back I got a requirement to set up a highly available API server. I said, not a big deal! I'll have Nginx as a reverse proxy (why we didn't expose the API directly via ELB is a different story), my auto-scaled API setup will sit behind an internal ELB, and things will be in place. TA DA!



Things worked perfectly fine for a few days, but one day the API consumer reported that they were not getting responses back. What? When I checked, the API URL was indeed returning a 502 error code. It was really strange for Nginx to be sending a 502 response; did that mean the highly scalable setup was down? Well, I was proven wrong: the ELB was working perfectly fine, as a curl request to the internal ELB returned a proper response, so yes, the highly available API setup was in place. What next? Yes, the Nginx error logs. I did see Nginx reporting connection timeouts with the 502 error code. The interesting thing to note was that it was connecting to an IP (a random IP assigned to the ELB); when I tried to curl that IP with the API request, it failed too. EUREKA, EUREKA!! I had reproduced the problem.

Now I had to collect all this information and infer the logical cause of the problem, and surely a lot of smart people out there would have already found a solution, so I just had to ask the right question on Google :). The question was "Nginx using IP instead of domain name" and the answer was "Nginx caches the IP at startup, and since an ELB is elastic in nature, its IP changes over time". That was the reason Nginx was trying to talk to the older, no-longer-associated IPs of the internal ELB.

Finding the solution was not a big task, as it was just about making sure Nginx talks to the ELB itself, not the IPs associated with it; that's why I said marrying Nginx with ELB :).

I'll not go into the actual solution in detail, as solutions are already available on the web. I referred to this really good blog for the solution:

http://ghost.thekindof.me/nginx-aws-elb-dns-resolution-nginx-resolver-directive-and-black-magic/
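In short, the technique that post describes is to stop Nginx from caching the ELB's IP at startup: give Nginx a resolver and put the upstream hostname in a variable, which forces a fresh DNS lookup at request time. A minimal sketch, with a placeholder resolver IP and ELB hostname:

server {
    listen 80;
    # Re-resolve the name periodically instead of only once at startup
    resolver 172.16.0.23 valid=30s;
    # A variable in proxy_pass forces runtime DNS resolution
    set $backend "internal-my-api-123456789.us-east-1.elb.amazonaws.com";
    location / {
        proxy_pass http://$backend;
    }
}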

Monday, February 23, 2015

PERCONA STANDALONE SERVER

As a DevOps activist I am exploring Percona XtraDB, and in a series of blogs I will share my learnings. This blog intends to capture step-by-step details of installing Percona XtraDB in standalone mode.

Introduction:


Percona Server is an enhanced drop-in replacement for MySQL. It offers breakthrough performance, scalability, features, and instrumentation.
Percona focuses on providing a solution for the most demanding applications, empowering users to get the best performance and lowest downtime possible.

The Percona XtraDB Storage Engine:

  • Percona XtraDB is an enhanced version of the InnoDB storage engine, designed to scale better on modern hardware, and it includes a variety of other features useful in high-performance environments. It is fully backwards compatible, so it can be used as a drop-in replacement for standard InnoDB.
  • Percona XtraDB includes all of InnoDB's robust, reliable, ACID-compliant design and advanced MVCC architecture, and builds on that solid foundation with more features, more tunability, more metrics, and more scalability.
  • It is designed to scale better on many cores, to use memory more efficiently, and to be more convenient and useful.

Installation on Ubuntu:

STEP 1: Add Percona Software Repositories
$ apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
STEP 2: Add these lines to /etc/apt/sources.list (precise is the Ubuntu 12.04 codename; replace it with your release's codename, e.g. from lsb_release -sc):
deb http://repo.percona.com/apt precise main
deb-src http://repo.percona.com/apt precise main
STEP 3: Update the local cache
$ apt-get update
STEP 4: Install the server and client packages
$ apt-get install percona-server-server-5.6 percona-server-client-5.6
STEP 5: Start Percona Server
$ service mysql start
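STEP 6 (optional): Verify the installation by asking the server for its version (assuming the root password you set during installation):
$ mysql -u root -p -e "SELECT VERSION();"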
Let me know if you have any suggestions. You can also contact me at belwal.mohit@gmail.com.