Can you integrate a GitHub Webhook with Privately hosted Jenkins? No? Think again

Introduction

One of the most basic requirements of CI implementation using Jenkins is to automatically trigger a Jenkins job after every commit. As you are already aware, there are two ways in which a Jenkins job can be triggered in an automated fashion:

  • Pull | PollSCM
  • Push | Webhook

It is a no-brainer that a push-based trigger is the most efficient way of triggering a Jenkins job; otherwise you would be unnecessarily hogging your resources. One of the hurdles in implementing a push-based trigger is that your VCS & Jenkins server should be in the same network or, in simple terms, they should be able to talk to each other.

In a typical CI setup, there is a SaaS VCS, i.e. GitHub/GitLab, and a privately hosted Jenkins server, which makes push-based triggering of a Jenkins job seem impossible. Till a few days back I was under the same impression, until I found an awesome blog that talks about how you can integrate a webhook with your private Jenkins server.

In this blog, I'll try to explain how I implemented the webhook relay. Most importantly, the reference blog was about integrating Webhook Relay with GitHub; with GitLab there were still some unexplored areas, and I faced some challenges while doing the integration. This motivated me to write a blog so that people have a ready reference on how to integrate GitLab with Webhook Relay.

Overall Workflow

Step 1: Download the Webhook Relay Agent on the local system

Copy and execute the command:

curl -sSL https://storage.googleapis.com/webhookrelay/downloads/relay-linux-amd64 > relay && chmod +wx relay && sudo mv relay /usr/local/bin

Note: Webhook Relay and the Webhook Relay agent are different. Webhook Relay runs on a public IP and is triggered by GitLab, while the Webhook Relay agent is a service running locally that is triggered by Webhook Relay.

Step 2: Create a Webhook Relay Account

After successfully signing up, we land on the Webhook Relay home page.

Step 3: Setting up the Webhook Relay Agent

We have to create access tokens.
Navigate to Access Tokens and click the Create Token button. We are then provided with a key and secret pair.
Copy and execute:

relay login -k token-key -s token-secret
 
 
If it prints a success message, it means our Webhook Relay agent is successfully set up.

Step 4: Create GitLab Repository

We will keep our repository public to keep things simple and understandable. Let's say our GitLab repository's name is WebhookProject.

Step 5: Install the GitLab and GitLab Hook Plugins

Go to Manage Jenkins → Manage Plugins → Available, then search for and install both plugins.
 

Step 6: Create Jenkins Job

 
Configure the job: add the GitLab repository link.

Now we'll choose the build trigger option.

Save the job.

Step 7: Connecting GitLab Repository, Webhook Relay, and Webhook Relay Agent

The final and most important step is to connect the overall flow.

Start forwarding Webhooks to Jenkins

Open a terminal and type the command:

relay forward --bucket gitlab-jenkins http://localhost:8080/project/webhook-gitlab-test
Note: The bucket name can be anything.
 
 
Note: Do not stop this process (Ctrl+C). Open a new terminal or a new tab to commit to GitLab.
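Before wiring up GitLab, you can sanity-check the relay by posting a dummy payload to the public input URL printed by the relay forward command (the URL below is a hypothetical placeholder; use the one from your own output):

curl -X POST -H "Content-Type: application/json" -H "X-Gitlab-Event: Push Hook" -d '{"object_kind": "push"}' https://my.webhookrelay.com/v1/webhooks/<your-input-id>

If everything is wired correctly, the request will show up in the agent terminal and be forwarded to Jenkins.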

The most critical part of the workflow is the link generated by the Webhook Relay agent. Copy this link and paste it in the GitLab repository (WebhookProject) → Settings → Integrations.

Paste the link.
For the sake of simplicity, uncheck Enable SSL verification and click the Add webhook button.
All major configuration is now done. Clone the GitLab repository and push a commit to the remote repository.
Go to the Jenkins job and see the build triggered by the GitLab webhook.
To see the GitLab webhook logs, go to:
GitLab repository → Settings → Integrations → webhook → Edit

To see the logs of the Webhook Relay agent triggering Jenkins, go to:
Webhook Relay UI page → Relay Logs.

So now you know how to do webhook integration between your VCS & Jenkins even when they are not directly reachable to each other.
Can you integrate a GitHub webhook with privately hosted Jenkins? Yes.
Cheers till next time!

jgit-flow maven plugin to Release Java Application

Introduction

As a DevOps engineer, I need a smooth way to release Java applications, so I compared two Maven plugins that are used to release Java applications, and in the end I found that the jgit-flow plugin is far better than the maven-release plugin, on the basis of the following points:
  • The maven-release plugin creates .backup and release.properties files in your working directory, which can be committed by mistake when they should not be. The jgit-flow maven plugin doesn't create these or any other files in your working directory.
  • The maven-release plugin creates two tags.
  • The maven-release plugin does a build in the prepare goal and a build in the perform goal, causing tests to run twice, but the jgit-flow maven plugin builds the project once, so tests run only once.
  • If something goes wrong during maven-release plugin execution, it becomes very tough to roll back; on the other hand, the jgit-flow maven plugin makes all changes on a branch, and if you want to roll back, you just delete that branch.
  • The jgit-flow maven plugin doesn't run site-deploy.
  • The jgit-flow maven plugin provides an option to turn maven deployment on/off.
  • The jgit-flow maven plugin provides an option to turn remote pushes/tagging on/off.
  • The jgit-flow maven plugin keeps the master branch always at the latest release version.
Now let's see how to integrate the jgit-flow maven plugin and use it.

How to use Jgit-flow maven Plugin for Release

Follow these steps:
    1. Add the following lines in your pom.xml for source code management access (repository URLs shown as placeholders):

       <scm>
         <connection>scm:git:<git repo url></connection>
         <developerConnection>scm:git:git:<git repo url></developerConnection>
       </scm>
    2. Add these lines to resolve the jgit-flow maven plugin and set the other options that will be required during the build:

       <plugin>
         <groupId>com.atlassian.maven.plugins</groupId>
         <artifactId>maven-jgitflow-plugin</artifactId>
         <version>1.0-m4.3</version>
         <configuration>
           <pushReleases>true</pushReleases>
           <keepBranch>false</keepBranch>
           <noTag>true</noTag>
           <allowUntracked>true</allowUntracked>
           <noDeploy>true</noDeploy>
           <flowInitContext>
             <masterBranchName>master-test</masterBranchName>
             <developBranchName>deploy-test</developBranchName>
           </flowInitContext>
         </configuration>
       </plugin>

The above code snippet will perform the following steps:

    • Maven will resolve the jgit-flow plugin dependency.
    • In the configuration section, we describe how the jgit-flow plugin will behave.
    • The pushReleases XML tag enables/disables pushing the intermediate branches to git.
    • The keepBranch XML tag enables/disables keeping the intermediate branch after the release.
    • The noTag XML tag enables/disables creating the release tag in git.
    • The allowUntracked XML tag controls whether untracked files are allowed during the check.
    • The flowInitContext XML tag is used to override the default master and develop branch names of the jgit-flow plugin.
    • In the above code snippet there are only two branches: master, from where the code will be pulled, and an intermediate branch that will be used by the jgit-flow plugin. As discussed, the jgit-flow plugin uses branches to keep its records, so a development branch is created by the plugin locally (not remotely) to track the release version, etc.
  3. To put your releases into the repository manager, add these lines:
    <distributionManagement>
      <repository>
        <id><auth id></id>
        <url><repo url of repository managers></url>
      </repository>
      <snapshotRepository>
        <id><auth id></id>
        <url><repo url of repository managers></url>
      </snapshotRepository>
    </distributionManagement>
  4. Put the following lines into your ~/.m2/settings.xml with your repository manager credentials:
    <settings>
      <servers>
        <server>
            <id><PUT THE ID OF THE REPOSITORY OR SNAPSHOTS ID HERE></id>
           <username><USERNAME></username>
           <password><PASSWORD></password>
        </server>
      </servers>
    </settings>

Start Release jgit-flow maven plugin command

To start a new release, execute mvn jgitflow:release-start.

Finish Release jgit-flow maven plugin command

To finish the release, execute mvn jgitflow:release-finish.
As an example, I have created a repository on github.com for testing, with two branches: master-test and deploy-test. It is assumed that you have configured maven and git on your system.

In the deploy-test branch, run the following command:
$ mvn clean -Dmaven.test.skip=true install jgitflow:release-start

This command will ask you for the release version and create a release branch named release/<version>. It then pushes this release branch to the GitHub repository temporarily, because we are not keeping the intermediate branches.

Now, at the end, run this command:
$ mvn -Dmaven.test.skip=true jgitflow:release-finish
After this command finishes, it deletes the release/<version> branch from local and remote.

Now you can check the changes made to the pom file by jgit-flow. On the master-test branch, you can see that the -SNAPSHOT suffix has been removed from the version tag and the version has been increased; it holds the current release version of the application.

And the deploy-test branch shows the new development version that developers are working on.

Opstree SHOA Part 1: Build & Release

At Opstree we have started a new initiative called SHOA, Saturday Hands On Activity. Under this program we pick up a concept, tool, or technology and do a hands-on activity with it. At the end of the day, whatever we have done and understood is followed up with a blog or a series of blogs.

Since this is the first Hands On Activity, we are starting with Build & Release.

What we intend to do 

Set up Build & Release for the project under the git repository https://github.com/OpsTree/ContinuousIntegration.

What all we will be doing to achieve it

  • Finalize an SCM tool that we are going to use: Puppet/Chef/Ansible.
  • Automated setup of Jenkins using SCM tool.
  • Automated setup of Nexus/Artifactory/Archiva using SCM tool.
  • Automated setup of Sonar using SCM tool.
  • Dev Environment setup using SCM tool: Since this is a web app project, our Dev environment will have Nginx & Tomcat.
  • QA Environment setup using SCM tool: Since this is a web app project, our QA environment will have Nginx & Tomcat.
  • Creation of various build jobs
    • Code Stability Job.
    • Code Quality Job.
    • Code Coverage Job.
    • Functional Test Job on dev environment.
  • Creation of release Job.
  • Creation of deployment job to do deployment on Dev & QA environment.
This activity is open to the public as well, so if you have any suggestions or you want to attend, you are most welcome.

Revert a patch in most awesome way

If you are a Release Engineer, System Admin, or Support Engineer, you have definitely come across a requirement where you have to apply patches to target systems, be they production or non-production. I'm assuming that you are using some automated system to manage the patches, i.e. applying and reverting them. In this blog I'll discuss the standard way of patch management and how you can have an out-of-the-box solution to revert a patch in the most simplistic way, without much fuss. At the end of the blog I'd like to see an expression where you say: what the hell, it's so awesome yet so simple :).

People usually use some tool to apply patches to a target system which, in addition to applying a patch, also manages the history of patches so that they can be reverted in case a patch goes wrong. The patch history usually contains the details below:

  1. The new files that were added in the patch; while reverting the patch, those files should be deleted.
  2. The files that were deleted by the patch; while reverting the patch, the deleted files should be restored.
  3. The files that were modified by the patch; while reverting the patch, the modified files should be restored.
You can definitely create a tool that reverts patches for you, as the use cases are not many, but do you really need to put in that much effort when you can have an out-of-the-box solution? What if I told you that we use git for managing our patch history and reverting patches? Since git comes with a local repository concept, we created a local git repository at our app server's codebase location itself. Git provides file-level tracking, and we map each patch to one git commit, so at the time of reverting a specific patch you can simply ask git to revert that commit for you, as sketched below.
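A minimal sketch of the one-time setup, assuming the application codebase lives at /opt/app (a hypothetical path):

# turn the app codebase into a local git repository (run once)
cd /opt/app
git init
git add .
git commit -m "Baseline before any patches"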

Extra steps to be done after applying a patch:
To make git track the changes done by the patch, you just need to run 2 extra commands:

git add . : This command tracks all the files that have been modified, added, or deleted in the system.
git commit -m "Applying Patch" : This command actually commits the file information tracked by the previous command, with a message, into the git system.

Steps to be done in reverting changes done by a patch:
Once you have all the information tracked in git, it becomes a no-brainer to revert the patches.

To view the details of all the patches: you can use the git log command, which will give you the list of all the patches you have applied and the reverts you have done:

sandy@sandy:~/test/app1$ git log
commit f622f1f97fc44f6897f9edc25f9c6aab8e425049
Author: sandy
Date:   Thu Jun 19 15:19:53 2014 +0530

    Patch 1 on release2

commit 9a1dd81c7799c2f83d897eed85914eecef304bf0
Author: sandy
Date:   Thu Jun 19 15:16:52 2014 +0530

    Release 2

commit 135e04c00b3c3d5bc868f7774a5f284c3eb8cb29
Author: sandy
Date:   Thu Jun 19 15:16:28 2014 +0530

    Release 1

Now reverting a patch is as simple as executing the git revert command with the commit id of the patch:

git revert f622f1f97fc44f6897f9edc25f9c6aab8e425049
[master 0ba533f] Revert "Patch 1 on release2"
 1 file changed, 1 deletion(-)

If you run the git log command, you will see the patch revert history as well:

sandy@sandy:~/test/app1$ git log
commit 0ba533fda95ed4d7fcf0b7e6b23cd1a5589946a7
Author: sandy
Date:   Thu Jun 19 15:20:24 2014 +0530

    Revert "Patch 1 on release2"

    This reverts commit f622f1f97fc44f6897f9edc25f9c6aab8e425049.

commit f622f1f97fc44f6897f9edc25f9c6aab8e425049
Author: sandy
Date:   Thu Jun 19 15:19:53 2014 +0530

    Patch 1 on release2

commit 9a1dd81c7799c2f83d897eed85914eecef304bf0
Author: sandy
Date:   Thu Jun 19 15:16:52 2014 +0530

    Release 2

commit 135e04c00b3c3d5bc868f7774a5f284c3eb8cb29
Author: sandy
Date:   Thu Jun 19 15:16:28 2014 +0530

    Release 1

I hope this blog has given you a very different perspective on managing patches; let me know your thoughts about this. Also, if you have more such ideas, do share them with me.

A wrapper over linode python API bindings

Recently I've been working on automating node creation on our Linode infrastructure, and in the process I came across the Linode API and its bindings. Though they are powerful, they are lacking in some places, i.e.:

  1. In the case of the Linode CLI, while creating a linode you have to enter the root password, so you can't achieve full automation. Also, I was not able to find an option to add a private IP to the linode.
  2. In the case of the Linode API Python bindings, you can't straight away create a running linode machine.

Recently I've launched a new GitHub project; this project is a wrapper over the existing Python bindings for Linode and tries to ease working with the Linode API. Currently, using this project, you can create a linode with 3 lines of code:

from linode import Linode
linode = Linode('node_identifier')
linode.create()

You just need to have a property file at /data/linode/linode.properties:

[DEFAULT]
UBUNTU_DIST=Ubuntu 12.04
KERNEL_LABEL=Latest 64 bit
DATACENTER_LABEL=Dallas
PLAN_ID=1024
ROOT_SSH_KEY=
LINODE_API=
The project is still in development; if someone wants to contribute or has any suggestions, you are most welcome.

Puppet module to setup nodejs deployment 2

As I said in the previous blog, Puppet module to setup nodejs deployment, the nodejs module was for providing the basic infrastructure for automated node app deployment, and as promised I've released the next module, "nodeapp", that can be used to set up a node app on the target server.

First of all, I'll talk about what this module does to facilitate the automated deployment of a nodejs app. As already discussed, we are following the convention that all the node apps' code will be present at /home/nodejs/<app_name>, which is referred to by the startNodeApp.sh script, so we create the directory of the nodejs app. The deployNodeApp.sh script was using upstart to manage the nodejs app instance, i.e. starting/stopping the nodejs app; the nodeapp module takes care of creating the required upstart configuration at /etc/init/<app_name>.conf. We also use monit to monitor the nodejs apps so that we can start/stop them using the monit web UI and also see various stats such as CPU, memory, and load consumption of the nodejs app.

This nodeapp module is a user-defined type which takes the name of the node app as an argument, as a result of which you can set up any number of nodejs apps on a system, i.e.:
nodeapp { 'search-demo': app_name => "search-demo" }
This entry will create the files below.

/etc/init/search-demo.conf : an upstart configuration file, using which the search-demo nodejs app can be managed as a service.

#!upstart
description "node.js search-demo server"
author      "sandy"

start on startup
stop on shutdown

script
    export HOME="/home/nodejs"

    echo $$ > /var/run/search-demo.pid
    exec sudo -u nodejs /home/nodejs/startNodeApp.sh search-demo
end script

pre-start script
    # Date format same as (new Date()).toISOString() for consistency
    echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Starting" >> /var/log/search-demo.sys.log
end script

pre-stop script
    rm /var/run/search-demo.pid
    echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Stopping" >> /var/log/search-demo.sys.log
end script

/etc/monit/conf.d/search-demo.monit : a monit configuration file, using which the search-demo nodejs app can be monitored & even automatically restarted:

check process search-demo with pidfile /var/run/search-demo.pid
 stop program = "/sbin/stop search-demo"
 start program = "/sbin/start search-demo"

So, using these 2 modules, nodejs & nodeapp, you can make any system up & running for automated deployment of nodejs apps.

Automated DB Updater (ADU): First Release

Initial version of the Automated DB Updater (ADU)

With this blog I'm releasing the initial version of a Python utility to provide automated DB updates across various environments for different components.

The code for this utility is hosted on github
https://github.com/sandy724/ADU

You can clone a read-only copy of this codebase from the URL given below:
https://github.com/sandy724/ADU.git

To understand the basic idea behind this utility, go through this blog:
http://sandy4blogs.blogspot.in/2013/07/automated-db-updater.html

How to use this utility
Check out the code into some directory and add the path of this directory to the PYTHONPATH environment variable.
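For example, assuming the code was checked out to /path/to/ADU (a placeholder path):

export PYTHONPATH=$PYTHONPATH:/path/to/ADU
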
Create a database with a script metadata table using the DDL given below:

CREATE TABLE `script_metadata` (
  `name` varchar(100) NOT NULL,
  `version` int(11) NOT NULL,
  `executed` tinyint(1) NOT NULL DEFAULT '0',
  `env` varchar(30) NOT NULL,
  `releas` varchar(30) NOT NULL,
  `component` varchar(30) NOT NULL
)
Create a database.properties file containing the connection properties of each environment's database:

[common_db]
dbHost=localhost
dbPort=3306
dbUser=root
dbPwd=root
db=test
 
 
[env1]
dbHost=localhost
dbPort=3306
dbUser=root
dbPwd=root
db=test

Here common_db represents the connection to the database which will contain the metadata of scripts for monitoring.

Now execute the Python utility.
Copy the client (updateDB.py) to a directory of your choice; make sure the property configuration file is also in this directory. Then run:

python updateDB.py -f <sql file> -r <release> --env <environment>

Automated DB Updater

In continuation of my blog series, I'm finally introducing an automated DB updater tool. You can read about the idea in my previous blogs at the links below:

Manual DB Updates challenges
Manual DB Updates challenges-2

The short form of my tool's name is ADU (Automated DB Updater). Now some details about this tool.

Each application will have a database_script folder at the root level; this folder will contain folders corresponding to each release, i.e. release1, release2, release3…

A database release folder will contain

  • Meta file: sql_sequence.txt. This file will contain the sequence in which sql files will be executed; only files mentioned in this file will be entertained.
  • SQL files: a sql file must follow a naming convention like <name>_<version>.sql, with corresponding undo scripts like <name>_undo_<version>.sql.
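For illustration, a release folder inside database_script might look like this (the file names are hypothetical):

database_script/
  release1/
    sql_sequence.txt
    create_users_1.sql
    create_users_undo_1.sql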

Process of automatic execution of scripts on an environment

  • Input
    • release_name : to figure out the folder from where scripts will be executed
    • environment : Environment on which scripts will be executed
  • Execution
    • The sql_sequence.txt file will be read line by line, with one sql file name on each line.
    • The sql file will be verified as to whether it has already been executed or not.
    • If the sql file has already been executed, then two conditions are verified:
      • A new version of the sql should be available.
      • An undo version of the last executed sql should be present.
    • After execution of the undo file, the latest version of the sql file will be executed, and the info is stored accordingly so that it will not be picked up again.
  • Validations & Boundary Conditions
    • All the files mentioned in sql_sequence.txt should exist.
    • Undo script should be present for all the versions of a sql file barring the latest version of sql file.
    • Undo script will only be executed if next version of script is available.

Very soon I’ll share the github url of this project keep waiting 🙂

Puppet module for setting up Multiple mongo’s with replication

In this blog I'll be talking about a Puppet module that can be used to install multiple mongo instances, with replication, on a single machine. Since I'm very new to Puppet, you may find this module very crude, but it works :). There were a couple of Puppet modules already available, but most of them only install a single instance of mongo on a machine, and I had a specific requirement of installing multiple instances of mongo with master-slave replication between them. As I already said, this module may be quite crude or basic, so please bear with that; my approach may also seem a bit unconventional, so please let me know what can be improved in this module or how things could have been done in a better way.

So let's start with the actual details. First of all, this module is hosted on GitHub (https://github.com/sandy724/Mongo); if you want to look at the source code, you can clone it from there. For installing mongo you would execute the command:
puppet apply -e "class {mongo: port => <port>, replSet => <replica set name>, master => <master|slave>, master_port => <master port>,}"

Command for installing the master:
puppet apply -e "class {mongo: port => 27017, replSet => sdrepsetcommon, master => master, master_port => 27017,}"

Command for installing the slave:
puppet apply -e "class {mongo: port => 27018, replSet => sdrepsetcommon, master => slave, master_port => 27017,}"

Before going into the details of what this module does, I will share some details about mongo (a shell sketch of the replication commands follows this list):

  • You can start mongo by executing the mongod command.
  • You can provide a configuration file which contains details such as:
    • the log directory, where mongo will generate its logs
    • the port, at which mongo will listen for requests
    • the dbpath, where mongo will store all the data
    • the pidfilepath, containing the process id of the mongo instance, which is used to check whether mongo is running or not
    • the replSet, the name of the replica set
  • You need to have mongo installed as a service on your system to start an instance of mongo.
  • For replication, you need to execute the rs.initiate() command on the master mongo.
  • For adding another instance into the replication, you need to execute the rs.add("<host>:<port>") command on the master mongo.
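For illustration, a minimal sketch of those two commands run from a shell, assuming a master on port 27017 and a slave on port 27018 on the same host (values match the install examples above):

mongo --port 27017 --eval 'rs.initiate()'
mongo --port 27017 --eval 'rs.add("localhost:27018")'
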
Now let's go into more detail about what this component does; I'll list all the steps in bullet points:
  • As you can figure out, this module expects a few parameters:
    • port : port at which mongo will be listening
    • replSet : name of the replica set which will be used for managing replication
    • master : a string parameter which signifies whether the mongo setup is for a master or a slave
    • master_port : port at which the master instance of mongo will be listening
  • First of all we create a mongo user.
  • The parent log directory for the mongo instance is created if it doesn't exist, with the mongo user as owner.
  • The mongo db directory is created under /data/mongo with the naming convention replSet_port, i.e. if the replSet parameter is sdrepsetcommon and the port is 27017, then the data directory for this mongo instance will be /data/mongo/sdrepsetcommon_27017. This directory is owned by the mongo user.
  • A mongo service is installed if not already there.
  • A mongo restart shell script is also placed in the mongo db directory.
  • A file is also placed under the mongo db directory that has the mongo command to set up replication; this file is created conditionally, depending on whether we are setting up a master or a slave instance.
  • Finally, the replication command is executed on the mongo server, and the restart script is also executed.
This concludes the setting up of a mongo instance on a machine.

Just for more details: to start mongo we are using the mongod -f <config file> command. This configuration file is saved as a template, and the mongo module processes the template with the values passed and creates the desired mongod.conf. In our case we are evaluating the following properties of mongod.conf: logpath, port, dbpath, pidfilepath, and replSet.
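For illustration, a rendered mongod.conf for the master example above might look like the sketch below (the exact template contents and the log path are assumptions):

logpath=/var/log/mongo/sdrepsetcommon_27017.log
port=27017
dbpath=/data/mongo/sdrepsetcommon_27017
pidfilepath=/data/mongo/sdrepsetcommon_27017/mongod.pid
replSet=sdrepsetcommon
fork=true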

Initial thoughts for an automation testing framework/utility

My first exposure to Selenium was in 2010/2011, and I was quite impressed with it; the way you can use Selenium for testing a web application was totally awesome. At that time I was working with Xebia; our team was working on a website revamp for a Dutch travel company. We were using Selenium for all the regression and functional testing of the website, and 80% of the website testing was done by Selenium alone.

One of the challenges with Selenium is that for each test scenario you have to write code, and if you don't manage your test scenarios/cases effectively, the management of Selenium test cases becomes a task in itself. At that point of time we tried to make maximum use of Java to keep the Selenium test cases as structured and object-oriented as possible so that they could be extended and managed easily. I always had a desire to make some improvement in that area so that managing Selenium test cases becomes easier.

In my current company most of the testing is done manually. Since I had prior experience with Selenium and had experienced the power it brings to your testing, I was pretty determined to bring the Selenium advantage to our company. Off late, an automation testing team was set up in our company as well, working on leveraging the power of Selenium in testing, but it hit the same problem: you have to write a lot of code. The other challenge the automation team was facing was that the UI of the site was changing very frequently, so whatever work they did was back to zero after a few iterations.

Last week, along with my team, I started doing some head-banging to see if we could do something out of the box, and in a normal discussion with my team members an idea struck us. The manual QA team of our company is very strong, and they have complete in-and-out knowledge of the whole application, but they have so much work assigned to them that they can't spend their time on Selenium. We wanted to club the knowledge of our manual testing team with the power of Selenium.

As a POC we built a very simple utility that reads a meta information file and executes the commands listed in that meta file. As an example, if they want to open a page, one of the lines of the meta file will contain a command like "open <url>"; similarly, if they have to click a button, the command will be something like "click <element>". This utility was doing exactly what we wanted. We are still in the POC phase, where we are trying to include as many commands as possible.
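For illustration, a meta file for a simple login flow might look like this (the command vocabulary and element identifiers are hypothetical):

open http://example.com/login
type username_field demo_user
type password_field demo_pass
click login_button
verify_text Welcome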

Let me know about your thoughts for this approach, suggestions are most welcome.