Automated DB Updater

Continuing my blog series, I’m finally introducing an automated DB updater tool. You can read about the idea in my previous blogs at the links below

Manual DB Updates challenges
Manual DB Updates challenges-2

The short name of my tool is ADU (Automated DB Updater). Now for some details about this tool

Each application will have a database_script folder at the root level. This folder will contain a folder for each release, i.e. release1, release2, release3…

A database release folder will contain

  • Meta file : sql_sequence.txt, this file contains the sequence in which the SQL files will be executed; only files mentioned in this file will be picked up
  • SQL files : each SQL file must follow a naming convention that encodes the script name and its version, with a matching undo script for each version, e.g. <name>_<version>.sql and <name>_<version>_undo.sql (see the illustrative layout below)
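
For illustration, a release folder could look like this; the file names are hypothetical, following the name-plus-version convention above:

database_script/
  release1/
    sql_sequence.txt
    add_user_table_1.sql
    add_user_table_1_undo.sql
    add_user_table_2.sql

And sql_sequence.txt simply lists one SQL file name per line, in execution order.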

Process of automatic execution of scripts on an environment

  • Input
    • release_name : to figure out the folder from where scripts will be executed
    • environment : Environment on which scripts will be executed
  • Execution
    • The sql_sequence.txt file will be read line by line; each line contains one SQL file name
    • Each SQL file will be checked to verify whether it has already been executed or not
    • If the SQL file has already been executed, then two conditions are verified
      • A new version of the SQL file should be available
      • An undo version of the last executed SQL file should be present
    • After execution of the undo file, the latest version of the SQL file will be executed, and its execution is recorded so that it will not be picked up again (a rough sketch of this loop appears after this list)
  • Validations & Boundary Conditions
    • All the files mentioned in sql_sequence.txt should exist.
    • An undo script should be present for every version of a SQL file, barring the latest version.
    • An undo script will only be executed if the next version of the script is available.
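
To make the flow concrete, here is a minimal bash sketch of the basic loop, assuming a MySQL client and a plain-text record of executed scripts; the tracking file, the client invocation & the omitted undo handling are my assumptions, not necessarily how ADU implements them:

#!/bin/bash
# Illustrative ADU-style loop: run each script from sql_sequence.txt once.
RELEASE=$1        # e.g. release1, picks the folder to execute from
ENVIRONMENT=$2    # environment whose database the scripts run against
SCRIPT_DIR="database_script/$RELEASE"
EXECUTED_LOG="executed_${ENVIRONMENT}.txt"   # hypothetical execution record

touch "$EXECUTED_LOG"
while read -r sql_file; do
  # Validation: every file mentioned in sql_sequence.txt must exist
  [ -f "$SCRIPT_DIR/$sql_file" ] || { echo "Missing $sql_file"; exit 1; }
  # Skip files that are already recorded as executed
  if grep -qx "$sql_file" "$EXECUTED_LOG"; then
    continue
  fi
  # Connection options for the target environment omitted for brevity
  mysql < "$SCRIPT_DIR/$sql_file" || exit 1
  echo "$sql_file" >> "$EXECUTED_LOG"
done < "$SCRIPT_DIR/sql_sequence.txt"

The undo/new-version handling described above would sit in place of the skip: compare versions, run the undo script of the last executed version, then run the newer version.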

Very soon I’ll share the GitHub URL of this project. Stay tuned 🙂

Ubuntu Rest Assurer

Before I start talking about this utility, I’d like to share the story behind its creation. A few days back we had a session on ERGONOMICS and healthy lifestyles; one of the main points was that a person should take a break every 20 minutes. After the session I was having a chit-chat with one of my colleagues, Rahul Narang, and we were discussing this 20:20:20 rule. We thought about creating a utility that would force a person to leave his/her system after some stipulated time, and that’s how the idea for this utility came about.

So what does this utility do?
1.) It runs every half hour, or a configured amount of time
2.) It prompts a snooze dialog box which allows a person to snooze the screen locking for some amount of time
3.) After that a launcher video runs for 10 seconds
4.) Once the launcher video completes, the screen gets locked for a configured amount of time
5.) If the user tries to unlock the system before the configured amount of time, the utility locks the screen again

This complete utility is created using shell script only; we have used a couple of commands to do that (a sketch of how they fit together follows the list)
vlc : To run the launcher video
zenity : To prompt the snooze dialog box
gnome-screensaver-command : To operate the screen lock
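
Here is a minimal sketch of one break cycle using these commands; the video file, timings & dialog text are illustrative, and the real script in the repo below is more complete:

#!/bin/bash
# One simplified break cycle of the utility.
SNOOZE_MINUTES=5
LOCK_SECONDS=120

# Offer the user a chance to snooze the lock for a while.
if zenity --question --text "Time for a break. Snooze for $SNOOZE_MINUTES minutes?"; then
  sleep $((SNOOZE_MINUTES * 60))
fi

# Play the launcher video for 10 seconds, then lock the screen.
cvlc --play-and-exit --run-time=10 launcher.mp4
gnome-screensaver-command --lock

# Re-lock if the user unlocks before the configured time is up.
end=$(( $(date +%s) + LOCK_SECONDS ))
while [ "$(date +%s)" -lt "$end" ]; do
  if ! gnome-screensaver-command --query | grep -q "is active"; then
    gnome-screensaver-command --lock
  fi
  sleep 5
done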

You can find the source code of this utility at my GitHub account:
https://github.com/sandy724/REAS

Puppet module for setting up multiple MongoDB instances with replication

In this blog I’ll be talking about a Puppet module that can be used to install multiple MongoDB instances, with replication, on a single machine. Since I’m very new to Puppet you may find this module very crude, but it works :). There were a couple of Puppet modules already available, but most of them only install a single instance of mongo on a machine, and I had a specific requirement of installing multiple instances of mongo with master-slave replication between them. As I already said, this module may be quite crude or basic, so please bear with that; my approach may also seem a bit unconventional, so please let me know what can be improved in this module or how things could have been done in a better way.

So let’s start with the actual details. First of all, this module is hosted on GitHub (https://github.com/sandy724/Mongo); if you want to look at the source code you can clone it from there. For installing mongo you would execute the command
puppet apply -e "class {mongo: port => <port>, replSet => <replSet>, master => <master|slave>, master_port => <master_port>,}"

Command for installing the master
puppet apply -e "class {mongo: port => 27017, replSet => sdrepsetcommon, master => master, master_port => 27017,}"

Command for installing a slave
puppet apply -e "class {mongo: port => 27018, replSet => sdrepsetcommon, master => slave, master_port => 27017,}"

Before going into the details of what this module does, I’ll share some details about mongo

  • You can start mongo by executing the mongod command
  • You can provide a configuration file which contains details such as
    • logpath : log directory where mongo will generate its logs
    • port : port at which mongo will listen for requests
    • dbpath : directory where mongo will store all the data
    • pidfilepath : file containing the process id of the mongo instance, used to check whether mongo is running or not
    • replSet : name of the replica set
  • You need to have mongo installed as a service on your system to start an instance of mongo
  • For replication you need to execute the rs.initiate command on the master mongo
  • For adding another instance into the replication you need to execute the rs.add("<host>:<port>") command on the master mongo (see the commands sketched below)
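
To make those last two points concrete, here is what the setup boils down to at the command line, using the ports from the master/slave example above; the config file path is illustrative, and the module runs the equivalent of this for you:

# Start an instance with its config file, then set up replication from the master
mongod -f /data/mongo/sdrepsetcommon_27017/mongod.conf
mongo --port 27017 --eval 'rs.initiate()'
# Add the slave instance to the replica set
mongo --port 27017 --eval 'rs.add("localhost:27018")'
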
Now let’s go into more detail about what this module does; I’ll list down all the steps in bullet points
  • As you can figure out, this module expects a few parameters :
    • port : port at which mongo will listen
    • replSet : name of the replica set which will be used for managing replication
    • master : a string parameter which signifies whether the mongo setup is for a master or a slave
    • master_port : port at which the master instance of mongo is listening
  • First of all we create a mongo user
  • The parent log directory for the mongo instance is created if it doesn’t exist, with the mongo user as owner.
  • The mongo db directory is created under /data/mongo with the naming convention replSet_port, i.e. if the replSet parameter is sdrepsetcommon & the port is 27017, then the data directory for this mongo instance will be /data/mongo/sdrepsetcommon_27017. This directory is owned by the mongo user.
  • A mongo service is installed if not already there.
  • A mongo restart shell script is also placed in the mongo db directory
  • A file is also placed under the mongo db directory that contains a mongo command to set up replication; this file is created conditionally depending on whether we are setting up a master or a slave instance.
  • Finally the replication command is executed on the mongo server & the restart script is also executed
This concludes the setting up of a mongo instance on a machine.

For a bit more detail: to start mongo we use the mongod -f <config file> command. The configuration file is saved as a template, & the mongo module processes the template with the values passed & creates the desired mongod.conf. In our case we evaluate the following properties of mongod.conf : logpath, port, dbpath, pidfilepath, replSet.
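
Put together, such a configuration could look like the following; this layout is my reconstruction of the five properties above, not the module’s actual template:

# mongod.conf rendered from the template (illustrative)
logpath=/var/log/mongo/<replSet>_<port>.log
port=<port>
dbpath=/data/mongo/<replSet>_<port>
pidfilepath=/data/mongo/<replSet>_<port>/mongod.pid
replSet=<replSet>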

Initial thoughts for an automation testing framework/utility

My first exposure to Selenium was in 2010/2011, & I was quite impressed with it; the way you could use Selenium for testing a web application was totally awesome. At that time I was working with Xebia; our team was working on the website revamp of a Dutch travel company. We were using Selenium for all the regression & functional testing of the website, and 80% of the website testing was done by Selenium alone.

One of the challenges with Selenium is that you have to write code for each test scenario, & if you don’t manage your test scenarios/cases effectively, the management of Selenium test cases becomes a task in itself. At that point of time we tried to make maximum use of Java to make the Selenium test cases as structured & object-oriented as possible so that they could be extended & managed easily. I always had a desire to do some improvement in that area so that managing Selenium test cases becomes easier.

In my current company most of the testing is done manually. Since I had prior experience with Selenium & had experienced the power it brings to your testing, I was pretty determined to bring the Selenium advantage to our company. Of late an automation team was set up in our company as well, which was working on leveraging the power of Selenium in testing, but again it was the same problem: you have to write a lot of code. The other challenge the automation team was facing was that the UI of the site was changing very frequently, so whatever work they did was back to zero after a few iterations.

Last week, along with my team, I started doing some head banging to see if we could do something out of the box, & in a normal discussion with my team members one idea struck us. The manual QA team of our company is very strong & they know the whole application in & out, but they have so much work assigned to them that they can’t spend their time on Selenium. We wanted to club the knowledge of our manual testing team with the power of Selenium.

As a POC we built a very simple utility that reads a meta information file and executes the commands listed in that meta file. As an example, if they want to open a page, one of the lines of the meta file will contain a command like "open <url>"; similarly, if they have to click a button, the command will be something like "click <element>". This utility was doing exactly what we wanted. We are still in the POC phase, where we are trying to include as many commands as possible.
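
As an illustration, a hypothetical meta file for a login scenario might look like this; the command vocabulary & element names are made up, since the actual command set is still evolving:

# one command per line, executed top to bottom
open http://testsite.example.com/login
type username_box testuser
type password_box secret123
click login_button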

Let me know about your thoughts for this approach, suggestions are most welcome.

Automation tips and tricks

As promised I’m back with a summary of the cool stuff that I’ve done with my team in the Build & Release domain to help us deal with day-to-day problems in an efficient & effective way. As I said, this month was about creating tools/utilities that sound very simple, but their overall impact on the productivity & agility of the build & release teams and tech verticals was awesome :).

Automated deployment of Artifacts : If you have ever worked with a set of Maven-based projects that are interdependent on each other, one of the major problems you will face in such a setup is keeping the latest dependencies in your local system. Here I’m assuming two things: you are using a Maven repo to host the artifacts, & the dependencies are SNAPSHOT dependencies if active development is going on in the dependencies as well. The manual way of making sure that the Maven repo always has the latest SNAPSHOT version is that every time somebody changes the code-base, he/she manually deploys that artifact to the Maven repo. What we have done is create a Jenkins job for each & every project that checks whether code has been checked in for a specific component, & if so, that component’s SNAPSHOT version gets deployed to the Maven repo. The impact of these utility jobs was huge: developers no longer have to focus on deploying their code to the Maven repo, & keeping track of who last committed the code is also not needed.
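
A minimal version of such a Jenkins job is just SCM polling plus a shell build step; this assumes distributionManagement in each component’s pom.xml already points at the Maven repo:

# Jenkins shell build step for a component's job; SCM polling triggers
# the job only when new code has been checked in for that component
mvn clean deploy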

Log Parser Utility : We have made further improvements to our event-based log analyzer utility. Now we also have a simple log parser utility through which we can parse the logs of a specific component & segregate the log lines by level (ERROR/WARN/INFO). Most importantly, it is integrated with Jenkins, so you can go to Jenkins and select a component whose logs need to be analyzed. Once the analysis is finished, the logs are segregated as per our configuration (in our case ERROR/WARN/INFO); after that, these categories are shown in the left bar with all their instances, & the user can click on those links to go exactly to the location where that information is present in the logs.
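
The segregation step at the heart of such a parser can be a few lines of shell; here is a sketch (the Jenkins integration & the clickable left-bar links sit on top of this):

# Split a component's log file by level; -n keeps line numbers so a UI
# can link back to the exact location in the log
LOG_FILE=$1
for level in ERROR WARN INFO; do
  grep -n "$level" "$LOG_FILE" > "${level}.txt"
done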

Auto Code Merge : As I already mentioned, we have a team of around 100+ developers, a sprint cycle of 10 days, & two sprints overlap each other for 5 days, i.e. the first 5 days are for development, after that a code freeze is enforced, & the next 5 days are for bug fixing. This means that at any particular point of time there are 3 parallel branches on which work is in progress: one branch which is currently deployed in production, a second branch on which testing is happening, & a third branch on which active development is happening. You can easily imagine that merging these branches is a task in itself. What we have done is create an automated code merge utility that tries to merge branches in a pre-defined sequence; if the automatic merge is successful, the merge proceeds to the next set of branches, otherwise a mail is sent to the respective developers whose files are in conflict.
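
A stripped-down sketch of such a merge chain, assuming git; the branch names & the mail recipient are illustrative, and the real utility needs more bookkeeping to notify the specific developers in conflict:

#!/bin/bash
# Try to merge each branch into the next one in a pre-defined sequence.
BRANCHES=(production testing development)   # illustrative sequence

for ((i = 0; i < ${#BRANCHES[@]} - 1; i++)); do
  src=${BRANCHES[$i]}
  dst=${BRANCHES[$((i + 1))]}
  git checkout "$dst"
  if git merge --no-edit "origin/$src"; then
    git push origin "$dst"
  else
    # Mail the list of conflicting files, then stop the chain.
    git diff --name-only --diff-filter=U | mail -s "Merge conflict: $src -> $dst" dev-team@example.com
    git merge --abort
    exit 1
  fi
done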

I hope you get motivated by this set of utilities & come up with new suggestions or points of improvement.