jgit-flow maven plugin to Release Java Application

Introduction

As a DevOps engineer I needed a smooth way to release a Java application, so I compared two Maven plugins that are used for releasing Java applications. In the end I found that the jgit-flow plugin is far better than the maven-release plugin, for the following reasons:
  • The maven-release plugin creates .backup and release.properties files in your working directory, which can be committed by mistake when they should not be. The jgit-flow plugin doesn’t create these or any other files in your working directory.
  • The maven-release plugin creates two tags.
  • The maven-release plugin does a build in the prepare goal and another in the perform goal, causing tests to run twice; the jgit-flow plugin builds the project once, so tests run only once.
  • If something goes wrong during maven-release plugin execution, it becomes very tough to roll back; the jgit-flow plugin, on the other hand, makes all changes on a branch, so if you want to roll back you simply delete that branch.
  • The jgit-flow plugin doesn’t run site-deploy.
  • The jgit-flow plugin provides an option to turn Maven deployment on or off.
  • The jgit-flow plugin provides options to turn remote pushes and tagging on or off.
  • The jgit-flow plugin always keeps the master branch at the latest release version.
Now let’s see how to integrate the jgit-flow Maven plugin and use it.

How to use Jgit-flow maven Plugin for Release

Follow these steps:
    1. Add the following lines to your pom.xml for source code management (SCM) access (replace <git-repo-url> with the Git URL of your repository):

       <scm>
         <connection>scm:git:<git-repo-url></connection>
         <developerConnection>scm:git:git:<git-repo-url></developerConnection>
       </scm>
    2. Add these lines to resolve the jgit-flow Maven plugin and set the options that will be required during the build (only the options discussed below are shown):

       <plugin>
         <groupId>com.atlassian.maven.plugins</groupId>
         <artifactId>maven-jgitflow-plugin</artifactId>
         <version>1.0-m4.3</version>
         <configuration>
           <pushRelease>true</pushRelease>
           <keepBranch>false</keepBranch>
           <noTag>true</noTag>
           <allowUntracked>true</allowUntracked>
           <flowInitContext>
             <masterBranchName>master-test</masterBranchName>
             <developBranchName>deploy-test</developBranchName>
           </flowInitContext>
         </configuration>
       </plugin>

The above code snippet does the following:

    • Maven resolves the jgit-flow plugin dependency.
    • In the configuration section, we describe how the jgit-flow plugin should behave.
    • The pushRelease tag enables or disables pushing the intermediate (release) branches to the remote Git repository.
    • The keepBranch tag controls whether the plugin keeps the intermediate branch after the release.
    • The noTag tag controls whether the plugin creates a tag in Git.
    • The allowUntracked tag controls whether untracked files are allowed during the working-tree check.
    • The flowInitContext tag is used to override the plugin’s default branch names.
    • In the above snippet there are only two branches: master-test, from where the code will be pulled, and an intermediate branch (deploy-test) used by the jgit-flow plugin. As discussed, the jgit-flow plugin uses branches to keep its records, so a development branch is created by the plugin locally (not remotely) to track the release version.
  3. To publish your releases to a repository manager, add these lines:
    <distributionManagement>
      <repository>
        <id><auth id></id>
        <url><repo url of repository managers></url>
      </repository>
      <snapshotRepository>
        <id><auth id></id>
        <url><repo url of repository managers></url>
      </snapshotRepository>
    </distributionManagement>
  4. Put the following lines into your ~/.m2/settings.xml with your repository manager credentials:
    <settings>
      <servers>
        <server>
            <id><PUT THE ID OF THE REPOSITORY OR SNAPSHOTS ID HERE></id>
           <username><USERNAME></username>
           <password><PASSWORD></password>
        </server>
      </servers>
    </settings>

Start Release jgit-flow maven plugin command

To start a new release, execute mvn jgitflow:release-start.

Finish Release jgit-flow maven plugin command

To finish the release, execute mvn jgitflow:release-finish.
As an example, I have created a repository on github.com for testing, with two branches: master-test and deploy-test. It is assumed that you have Maven and Git configured on your system.

In the deploy-test branch, run the following command:
$ mvn clean -Dmaven.test.skip=true install jgitflow:release-start

This command will prompt you for the release version and create a release branch named release/<version>. It will then push this release branch to the GitHub repository temporarily, because we are not keeping the intermediate branches.

Now, at the end, run this command:
$ mvn -Dmaven.test.skip=true jgitflow:release-finish
After this command finishes, it will delete the release/<version> branch from both local and remote.
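
Putting the whole flow together, a minimal end-to-end run on this test setup might look like the sketch below (the repository URL is a placeholder; the branch names are the ones used in this example):

   # clone the test repository and work on the development branch
   git clone <git-repo-url> jgitflow-demo
   cd jgitflow-demo
   git checkout deploy-test

   # start the release: prompts for the release version and creates release/<version>
   mvn clean -Dmaven.test.skip=true install jgitflow:release-start

   # finish the release: merges into master-test, bumps deploy-test to the next
   # SNAPSHOT version, and removes the release/<version> branch
   mvn -Dmaven.test.skip=true jgitflow:release-finish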

Now you can check the changes made to the pom file by jgit-flow. In the screenshot above, which shows the master-test branch, you can see in the version tag that it has removed -SNAPSHOT and increased the version; master-test holds the current release version of the application.

And in the deploy-test branch you can see the new development version on which developers will keep working.

2015 – What an exciting year has gone by

 

OpsTree 2015 Journey

2015 – What an exciting year has gone by. We have had all the fun we could have asked for. We learnt, grew, built relationships, earned valuable trust, and we did all this because we are driven by our core values of honesty, transparency and assiduousness. At the onset of the new year, we would like to thank all our partners who placed their valuable faith in us and helped us do our bit in their success stories. Read on to know all about 2015 at OpsTree.

We built great partnerships

2015 at OpsTree was a year filled with excitement, challenges and success. The team grew and we shifted to a great new office, but most importantly we built great partnerships. The team worked towards achieving the agreed goals with the intention of going beyond the expectations of our partners at each step. Our relations with our partners stand testimony to this.

 We multiplied productivity by learning

We grew qualitatively by deciding to invest heavily in learning. We dedicate one full day per week to self-development and learning. The results of our learning are evident in thought leadership initiatives like our blogs, GitHub contributions and open source contributions. But for us, what is more valuable than thought leadership is the significant improvement in the knowledge pool of the company. Today our resources are at least 2x more productive because of their commitment to learning.

We matured as a DevOps company

We found that there are three types of requirements in the market, and we aligned our offerings along those lines. We divided our company into three verticals:


1. End-to-end DevOps management – This vertical works closely with product companies, typically startup/midsize companies. OpsTree manages the complete dev/tech ops of such companies, which allows our partners to focus on their core function while still having a state-of-the-art infra setup and happy end users.


2. Work with big players – This vertical works with large service companies in team augmentation/back-to-back DevOps contract mode. This enables OpsTree to create an impact on bigger players through our esteemed partners.

3. DevOps Solutioning – This team consists of DevOps architects who are passionate about DevOps. Architectural decisions, development of libraries, solutioning and conducting trainings are what drive them.

We worked with some prestigious logos

Some of our key outcomes of the last year have been the end-to-end management of the DevOps operations of certain well-established product companies. It is through their recommendations and references that we are growing at this fast pace. All our client relations have been partnerships in which we have grown together to help each other deliver the best. Our expanding team is filled with enthusiastic people who are always looking for their next challenge.

We Guarantee improved Productivity

At the core of everything we do are our core values of giving our best to every project and trying every possible method to get the most optimized outcome. Giving our partners a “wow” experience is what we aim for. 

Welcome 2016

It is a well-researched and accepted fact – ‘DevOps helps companies deliver more’. Gartner says that by 2016 DevOps will evolve from a niche to a mainstream strategy employed by 25% of Global 2000 organizations. We are ready for this exciting new tomorrow. Are you?

The Reset Button !!!!

Anyone who has recently used Google Compute Engine to create VM instances will be aware of the reset button that is available.

Since I wasn't very sure what it did, I just clicked it without much know-how. This reset all the servers to their original state, as if they were freshly built, which was certainly a very bad thing for us.

But we had Puppet, which we had used to create the whole infrastructure in the first place. All the modules we had used and the changes we had made were committed to a GitHub repo, and this was certainly a boon for us; otherwise we would have had to sit a whole day long making those changes on the servers.

In just a couple of minutes the new instances were created using the Compute Engine create-instance-group feature. We installed Foreman and Git on one of the servers and set up the Puppet agents accordingly. This took around 15 more crucial minutes, and then we cloned our GitHub repo, which contains all the necessary modules and configurations required for the rest of the infrastructure.

These are the situations where configuration management tools like Puppet come into the picture and help us get back on track in the shortest possible time.

It was a hectic day, but it definitely taught us several important lessons. Using Puppet to maintain the infrastructure is really important nowadays. It is reliable, efficient and fast for deploying configurations on servers and making them ready for the production workload.

Marrying Nginx with ELB

A few weeks back I got a requirement to set up a highly available API server. I said, not a big deal! I'll have Nginx as a reverse proxy (why we didn't expose the API directly via ELB is a different story), my auto-scaled API setup will sit behind an internal ELB, and things will be in place. TA DA!

Things worked perfectly fine for a few days, but then one day the API consumer reported that they were not getting responses back. What? When I checked, the API URL was indeed returning a 502 error code. It was really strange for Nginx to be sending a 502 response back; did that mean the highly scalable setup was down? Well, I was proven wrong: the ELB was working perfectly fine, as a curl request to the internal ELB returned a proper response, so yes, the highly available API setup was in place. What next? Yes, the Nginx error logs. I did see Nginx reporting a connection timeout along with the 502 error code. The interesting thing to note was that it was logging an IP (a random IP assigned to the ELB), and when I tried a curl hit on that IP for the API request, it did fail. EUREKA, EUREKA!! I had reproduced the problem.

Now I had to collect all this information and infer the logical cause of the problem, and yes, there are a lot of smart people out there who would already have found a solution to this problem, so I just had to ask the right question on Google :). The question was “Nginx using IP instead of domain name”, and the answer was “Nginx caches the IP at startup, and since an ELB is elastic in nature, its IP changes over time”. That was the reason Nginx was trying to talk to older, no-longer-associated IPs of the internal ELB.

Finding the solution was not a big task, as it was just about making sure Nginx talks to the ELB by name rather than to the IPs associated with it; that's why I said marrying Nginx with ELB :).

I'll not go into the actual solution, as there are already solutions available on the web. I referred to this really good blog for the solution:

http://ghost.thekindof.me/nginx-aws-elb-dns-resolution-nginx-resolver-directive-and-black-magic/

Chef Journey

I'm starting a blog series on Chef in which I will take you on a journey of managing my current infrastructure using Chef. To start with, these are the high-level tasks that I have in mind:

  • User Management: Creation or deletion of users in an environment (Dev/QA/Staging/Production) should be managed by Chef, along with the kind of access to the environment, i.e. read-only access, root access, or adding a user to some groups.
  • VPN Setup: Currently we are using OpenVPN Access Server (openvpnas) for managing secured access to our environment; it is manual right now, so the VPN setup will also be done by Chef.
  • Apache Setup: We are using Apache as the web server that sits in front of our app server and also provides SSL.
  • Jar Apps: We have an SOA-based setup with multiple micro Java services, so we will use Chef to manage those jar apps, i.e. deploying them and starting/stopping them.
  • Tomcat: Another major component type in our application is the web apps hosted on a Tomcat server; the Tomcat server is not managed as a service, instead we create Tomcat under an app user along with Tomcat management scripts.
  • Mongo: We use replicated MongoDB as the NoSQL database in our application.
  • Logstash: For managing logs we are using Logstash in a clustered setup where all the log agents publish logs to a central server, which is then served by Kibana; this complete setup should also be managed by Chef.
  • ActiveMQ: We are using ActiveMQ for our queuing purposes.

This list is surely not complete; I'll be adding many more tasks to it as I proceed with setting up my environment using Chef, since this is the first time I'll be doing a setup using Chef. But this list is a good starting point.

Before jumping into creating the Chef cookbooks, run lists, or data bags, I have to set up the base Chef infrastructure: a Chef server to which all Chef agents talk, a Chef workstation which will update the server with the configurations, and a Git repo to keep track of all my configuration, as shown in the image below.

In the next blog I'll talk about how I set up a Chef server. Let me know if you have any inputs or suggestions on how I should proceed with the Chef setup.

Revert a patch in the most awesome way

If you are a Release Engineer, System Admin or Support Engineer, you have definitely come across a requirement where you have to apply patches to target systems, be it production or non-production. I'm assuming that you are using some automated system to manage the patches, i.e. applying them and reverting them. In this blog I will discuss the standard way of patch management and how you can have an out-of-the-box solution to revert your patch in the most simplistic way and without much fuss. At the end of the blog I would like to see an expression where you say: what the hell, it's so awesome yet so simple :).

People usually use some tool to apply a patch to a target system, which, in addition to applying the patch, also manages the history of patches so that they can be reverted in case a patch goes wrong. The patch history usually contains the details below:

  1. The new files that were added in the patch, while reverting the patch those files should be deleted.
  2. The files that were deleted by the patch, while reverting the patch the deleted files should be restored back.
  3. The files that were modified by the patch, while reverting the patch the modified files should be restored back.
You can definitely create a tool that reverts a patch for you, as the use cases are not many, but do you really need to put in that much effort when you can have an out-of-the-box solution? What if I tell you that we use Git for managing our patch history and reverting patches? Since Git comes with a local repository concept, we created a local Git repository at the app server's codebase location itself. Git gives us file-level tracking, and we map each patch to one Git commit, so when reverting a specific patch you can simply ask Git to revert that commit for you.
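
As a rough sketch of the one-time setup, assuming the codebase lives at /opt/app1 (an illustrative path, not from the original setup):

# one-time: turn the deployed codebase into a local git repository
cd /opt/app1
git init
git add .
git commit -m "Baseline before any patch"

With this baseline in place, every patch you apply afterwards is captured by the two commands described next.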

Extra steps to be done after applying a patch:
To make Git track the changes done by the patch, you just need to run 2 extra commands.

git add . : This command stages all the files that have been modified, added or deleted on the system.
git commit -m "Applying Patch" : This command actually records the files staged by the previous command, along with a message, in Git.

Steps to be done to revert the changes made by a patch:
Once you have all this information tracked in Git, reverting a patch becomes a no-brainer.

To view the details of all the patches: you can use the git log command, which will list all the patches you have applied and the reverts you have done.

sandy@sandy:~/test/app1$ git log
commit f622f1f97fc44f6897f9edc25f9c6aab8e425049
Author: sandy
Date:   Thu Jun 19 15:19:53 2014 +0530

    Patch 1 on release2

commit 9a1dd81c7799c2f83d897eed85914eecef304bf0
Author: sandy
Date:   Thu Jun 19 15:16:52 2014 +0530

    Release 2

commit 135e04c00b3c3d5bc868f7774a5f284c3eb8cb29
Author: sandy
Date:   Thu Jun 19 15:16:28 2014 +0530

    Release 1

Now reverting a patch is as simple as executing a single command, git revert, with the commit id of the patch:

git revert f622f1f97fc44f6897f9edc25f9c6aab8e425049
[master 0ba533f] Revert "Patch 1 on release2"
 1 file changed, 1 deletion(-)

If you run the git log command, you will see the patch revert history as well:

sandy@sandy:~/test/app1$ git log
commit 0ba533fda95ed4d7fcf0b7e6b23cd1a5589946a7
Author: sandy
Date:   Thu Jun 19 15:20:24 2014 +0530

    Revert "Patch 1 on release2"

    This reverts commit f622f1f97fc44f6897f9edc25f9c6aab8e425049.

commit f622f1f97fc44f6897f9edc25f9c6aab8e425049
Author: sandy
Date:   Thu Jun 19 15:19:53 2014 +0530

    Patch 1 on release2

commit 9a1dd81c7799c2f83d897eed85914eecef304bf0
Author: sandy
Date:   Thu Jun 19 15:16:52 2014 +0530

    Release 2

commit 135e04c00b3c3d5bc868f7774a5f284c3eb8cb29
Author: sandy
Date:   Thu Jun 19 15:16:28 2014 +0530

    Release 1

I hope this blog has given you a very different perspective on managing patches; let me know your thoughts about it. Also, if you have more such ideas, do share them with me.

How to secure your Linux Server

Yesterday was a good and a bad day for me. Bad, because one of my Linux servers was hacked; good, because it pushed me to take up one of the most important tasks in my pipeline, securing my systems. As people say, being agile (or lazy :)), do it when it is actually required, and yesterday was that day.

I'm a novice in infrastructure management, but I really like this field, which is why I plunged into this domain, and now I'm really loving it because of challenges like this one. Now let's cut the crap and jump straight to the point. I've figured out a few best practices that you should always follow while configuring your “SECURE” Linux server:

  • Don't use the default SSH port for logging into the system; better still, have a policy where you change your SSH port every month or two (see the sketch after this list).
  • To go a step further, disable password-based login and enable only key-based login.
  • Use an intrusion prevention framework; I've found fail2ban to be a good one.
  • Keep all non-public-facing machines on private IPs.
  • On public machines, open only those ports which are actually required.
  • Use a firewall to its maximum effect; iptables can be a good option.
  • Have a strong alerting system that can monitor your systems and raise an alert in case of any suspicious activity. We use Icinga.
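
A minimal sketch of a few of these practices on an Ubuntu/Debian-style box (the port number and the exact commands below are illustrative assumptions, not part of the original post):

# move SSH off the default port and disable password-based login (illustrative port)
sudo sed -i 's/^#\?Port .*/Port 2222/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo service ssh restart

# install fail2ban for basic intrusion prevention
sudo apt-get install -y fail2ban

# with iptables, allow only the ports that are actually required (SSH on 2222, HTTPS)
sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 2222 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT
sudo iptables -P INPUT DROP
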
Though this list may not cover everything you could take care of, it can serve as a very good starting point. I would also love to hear more suggestions.

A wrapper over linode python API bindings

Recently I've been working on automating node creation on our Linode infrastructure. In the process I came across the Linode API and its bindings. Though they are powerful, they are lacking in some places, i.e.:

  1. In the case of the Linode CLI, while creating a Linode you have to enter the root password, so you can't achieve full automation. Also, I was not able to find an option to add a private IP to the Linode.
  2. In the case of the Linode API Python bindings, you can't straight away create a running Linode machine.

Recently I launched a new GitHub project; it is a wrapper over the existing Python bindings for Linode and tries to ease working with the Linode API. Currently, using this project, you can create a Linode with 3 lines of code:
from linode import Linode
linode = Linode('node_identifier')
linode.create()

You just need to have a properties file at /data/linode/linode.properties:

[DEFAULT]
UBUNTU_DIST=Ubuntu 12.04
KERNEL_LABEL=Latest 64 bit
DATACENTER_LABEL=Dallas
PLAN_ID=1024
ROOT_SSH_KEY=
LINODE_API=
The project is still in development; if you want to contribute or have any suggestions, you are most welcome.

How to Manage Amazon Web Services Instances part 1

If you want to minimize the amount of money you spend on Amazon Web Services (AWS) infrastructure, then this blog post is for you. In this post I will discuss the rationale behind starting and stopping AWS instances in an automated fashion and, more importantly, doing it in a correct way. Obviously you could do it through the AWS web console as well, but that would need your daily involvement. In addition, you would have to take care of starting/stopping the various services running on those instances.

Before directly jumping into how we achieved instance management in an automated fashion, I would like to state the problem we were facing. Our application testing infrastructure is on AWS, and it is a multi-component (20+) application distributed among 8-9 Amazon instances. Usually our testing team starts working at 10 am and continues till 7 pm. Earlier we used to keep our testing infrastructure up for 24 hours, even though we were using it for only 9 hours on weekdays and not using it at all on weekends. Thus, we were wasting more than 50% of the money that we spent on the AWS infrastructure. The obvious solution to this problem was an intelligent system that would make sure that our Amazon infrastructure was up only during the time when we needed it.

The detailed list of the requirements, and the corresponding things that we did were:

  1. We should shut down our infrastructure instances when we are not using them.
  2. There should be a functionality to bring up the infrastructure manually: We created a group of Jenkins jobs, which were scheduled to run at a specific time to start our infrastructure. Also a set of people have execution access to these jobs to start the infrastructure manually, if the need arises.
  3. We should bring up our infrastructure instances when we need it.
  4. There should be a functionality to shut down the infrastructure manually: We created a group of Jenkins jobs that were scheduled to run at a specific time to shut down our infrastructure. Also a set of people have execution access on these jobs to shut down the infrastructure manually, if the need arises.
  5. Automated application/services start on instance start: We made sure that all the applications and services were up and running when the instance was started.
  6. Automated graceful application/services shut down before instance shut down: We made sure that all the applications and services were gracefully stopped before the instance was shut down, so that there was no loss of data.
  7. We also had to make sure that all the applications and services were started in a defined, agreed order.

Once we had the requirements ready, implementing them was simple, as Amazon provides a number of APIs to achieve this. We used the AWS CLI, and needed just 2 simple commands that the AWS CLI provides.
The command to start an instance:
aws ec2 start-instances --instance-ids i-XXXXXXXX
The command to stop an instance:
aws ec2 stop-instances --instance-ids i-XXXXXXXX

Through the above commands you can automate starting and stopping AWS instances, but you might not be doing it the correct way. If you don't restrict the AWS CLI to firing only the start-instances and stop-instances commands, other commands could be used, and that could turn out to be a problem area. Another important point to consider is that we should restrict the AWS instances on which the above commands can be executed, as these commands could mistakenly be run with the instance id of a production Amazon instance as an argument, creating undesirable circumstances 🙂
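
As a rough illustration of that idea, a small wrapper script can refuse to act on anything outside an explicit whitelist of test instance ids (the script name and instance ids below are made-up placeholders, not from the original post):

#!/bin/bash
# start_stop_test_instances.sh <start|stop>
# Only the test-environment instance ids listed here may be started or stopped.
ALLOWED_IDS="i-0aaaa1111bbbb2222 i-0cccc3333dddd4444"   # placeholder ids

ACTION="$1"
case "$ACTION" in
  start) CMD="start-instances" ;;
  stop)  CMD="stop-instances"  ;;
  *) echo "Usage: $0 <start|stop>" >&2; exit 1 ;;
esac

# fire the chosen command only against the whitelisted instance ids
aws ec2 "$CMD" --instance-ids $ALLOWED_IDS

A restricted IAM policy that allows only ec2:StartInstances and ec2:StopInstances on these instances would enforce the same restrictions on the AWS side as well.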

In the next blog post I will talk about how to start and stop AWS instances in a correct way.

Puppet module to setup nodejs deployment

I would like to share my Puppet module for setting up a Node.js deployment infrastructure on a Linux box. This module performs the basic setup required to facilitate the automated deployment of a Node.js app. Very soon I'll be introducing another generic Puppet module that will run on top of this one and provide fully fledged automatic deployment of any Node app. To view the source code of this module you can refer to my GitHub repository.

Let's talk about what this module actually does. First of all, we create a nodejs user which we use for all deployment-related activities of all the Node apps. As a convention, we have created a folder /home/nodejs/nodeapps; this folder contains the code of all our Node applications.

This module also adds 2 scripts. The first one, deployNodeApp.sh, is a generic script that assumes the Node app code will be present in tar form at /home/nodejs; it cleans the existing code of the app under /home/nodejs/nodeapps, untars the new code into the app's directory and restarts the app. As another convention we are using upstart for managing the Node app, i.e. starting and stopping it; I'll talk about the upstart configuration in my next blog, where I'll cover the generic Puppet module for a Node app. The second script, startNodeApp.sh, takes care of starting the Node app after doing some pre-processing, such as loading environment-specific properties of the app which we don't want to commit in the codebase (i.e. we want to keep them separate from the deployment process) and choosing a specific version of Node.
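
A minimal sketch of what a deployNodeApp.sh along these lines could look like (the app-name argument, tarball naming and upstart job name are assumptions for illustration; the actual script in the module may differ):

#!/bin/bash
# deployNodeApp.sh <app-name>
# Assumes the new code was uploaded beforehand as /home/nodejs/<app-name>.tar.gz
APP_NAME="$1"
TARBALL="/home/nodejs/${APP_NAME}.tar.gz"
APP_DIR="/home/nodejs/nodeapps/${APP_NAME}"

# clean the existing code of the app
rm -rf "$APP_DIR"
mkdir -p "$APP_DIR"

# untar the new code into the app's directory
tar -xzf "$TARBALL" -C "$APP_DIR"

# restart the app via its upstart job
sudo restart "$APP_NAME" || sudo start "$APP_NAME"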

This module also takes care of installing nvm for the nodejs user, so that the Node.js version can be managed locally for this user or app.

Though there is already a public Puppet module for Node.js, I had some specific requirements which I wanted to handle; that's why I've created this module.

Let me know if you have any points of improvement for this module. One thing that I wanted to add was npm installation, but it had some other dependencies, and I also had some doubts about whether npm should be part of the nodejs module or not.