How to Manage Amazon Web Services Instances part 1

If you want to minimize the amount of money you spend on Amazon Web Services (AWS) infrastructure, then this blog post is for you. In this post I will discuss the rationale behind starting and stopping AWS instances in an automated fashion and, more importantly, doing it the correct way. Obviously you could do it through the AWS web console as well, but that would need your daily involvement, and you would also have to take care of starting and stopping the various services running on those instances.

Before jumping directly into how we achieved instance management in an automated fashion, I would like to state the problem we were facing. Our application testing infrastructure is on AWS: a multi-component (20+) application distributed among 8-9 Amazon instances. Usually our testing team starts working at 10 am and continues till 7 pm. Earlier we used to keep our testing infrastructure up for 24 hours, even though we were using it for only 9 hours on weekdays and not using it at all on weekends. Thus, we were wasting more than 50% of the money we spent on our AWS infrastructure. The obvious solution to this problem was: we needed an intelligent system that would make sure our Amazon infrastructure was up only during the time we actually needed it.

The detailed list of requirements, and the corresponding things that we did, was:

  1. We should shut down our infrastructure instances when we are not using them.
  2. There should be functionality to bring up the infrastructure manually: We created a group of Jenkins jobs, scheduled to run at a specific time, to start our infrastructure. A set of people also have execution access to these jobs, so the infrastructure can be started manually if the need arises.
  3. We should bring up our infrastructure instances when we need it.
  4. There should be functionality to shut down the infrastructure manually: We created a group of Jenkins jobs, scheduled to run at a specific time, to shut down our infrastructure. A set of people also have execution access to these jobs, so the infrastructure can be shut down manually if the need arises.
  5. Automated application/services start on instance start: We made sure that all the applications and services were up and running when the instance was started.
  6. Automated graceful application/services shut down before instance shut down: We made sure that all the applications and services were gracefully stopped before the instance was shut down, so that there would be no loss of data.
  7. We also had to make sure that all the applications and services were started in a defined, agreed-upon order.

Once we had the requirements ready, implementing them was simple, as Amazon provides a number of APIs for this. We used the AWS CLI, and needed just two simple commands that it provides.
The command to start an instance:
aws ec2 start-instances --instance-ids i-XXXXXXXX
The command to stop an instance:
aws ec2 stop-instances --instance-ids i-XXXXXXXX
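
For example, a small wrapper script along the following lines (a minimal sketch; the instance IDs are placeholders, and the ordering is whatever you have agreed on) can be scheduled from a Jenkins job to start the instances in the defined order and wait for each one to come up before moving on:

#!/bin/bash
# Start instances in the agreed order and wait for each one to reach 'running'.
# The instance IDs below are placeholders; replace them with your own.
INSTANCE_IDS="i-XXXXXXX1 i-XXXXXXX2 i-XXXXXXX3"

for id in $INSTANCE_IDS; do
    aws ec2 start-instances --instance-ids "$id"
    aws ec2 wait instance-running --instance-ids "$id"   # block until the instance is running
done

The stop job is simply the mirror image: stop-instances followed by the instance-stopped waiter, in the reverse order.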

Through the above commands you can automate starting and stopping AWS instances, but you might not be doing it the correct way. Since you haven't restricted the AWS CLI to allow only the start-instances and stop-instances commands, other commands could be fired as well, and that could turn out to be a problem area. Another important point to consider is that we should restrict the AWS instances on which the above commands can be executed, as these commands could mistakenly be run with the instance ID of a production Amazon instance as an argument, creating undesirable circumstances 🙂
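
One way such a restriction could be put in place (a rough sketch only; the account ID, region, instance ID, user name and policy name below are placeholders, and this is not necessarily the exact approach I'll describe next time) is an IAM policy that allows only these two actions, and only on the designated test instances:

# Illustrative IAM policy allowing only start/stop on specific instances.
# Account ID, region, instance ID and names are placeholders.
cat > start-stop-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances"],
      "Resource": "arn:aws:ec2:us-east-1:111111111111:instance/i-XXXXXXXX"
    }
  ]
}
EOF

# Attach the policy to the user whose access keys the AWS CLI uses
aws iam put-user-policy --user-name jenkins-scheduler \
    --policy-name start-stop-test-instances \
    --policy-document file://start-stop-policy.json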

In the next blog post I will talk about how to start and stop AWS instances in a correct way.

Puppet module to set up nodejs deployment

I would like to share my Puppet module for setting up nodejs deployment infrastructure on a Linux box. This module performs the basic setup required to facilitate the automated deployment of a Node.js app. Very soon I'll be introducing another generic Puppet module that will run on top of this module and provide full-fledged automated deployment of any Node app. To view the source code of this module you can refer to my GitHub repository.

Let's talk about what this module actually does. First of all, we create a nodejs user that we use for all deployment-related activities of all the Node apps. As a convention, we create a folder /home/nodejs/nodeapps; this folder contains the code of all our Node applications.
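
In plain shell terms, the effect is roughly equivalent to the following (just a sketch; the module itself declares this with Puppet resources):

# Roughly what the module sets up (the module does this declaratively with Puppet)
useradd --create-home nodejs            # deployment user for all Node apps
mkdir -p /home/nodejs/nodeapps          # convention: all Node app code lives here
chown -R nodejs:nodejs /home/nodejs/nodeapps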

This module adds 2 scripts as well. The first one, deployNodeApp.sh, is a generic script that assumes the Node app's code is present in tar form at /home/nodejs; it cleans the existing code of the app at /home/nodejs/nodeapps, untars the new code into the corresponding directory of the Node app, and restarts the app. As another convention, we are using Upstart for managing the Node app, i.e. starting and stopping it; I'll talk about the Upstart configuration in my next blog, where I'll cover the generic Puppet module for a Node app. The other script, startNodeApp.sh, takes care of starting the Node app after doing some pre-processing, such as loading environment-specific properties of the app that we don't want to commit in the codebase (i.e. we want to keep them separate from the deployment process) and choosing a specific version of Node.
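
To give an idea of what deployNodeApp.sh does, here is a stripped-down sketch (the actual script in the repository differs in details; the app name, tarball location and Upstart job name are assumptions):

#!/bin/bash
# deployNodeApp.sh <app-name> -- simplified sketch of the real script
# Assumes the new code has been dropped as /home/nodejs/<app-name>.tar
APP="$1"
APP_DIR="/home/nodejs/nodeapps/$APP"
TARBALL="/home/nodejs/$APP.tar"

sudo stop "$APP" || true            # stop the app through its upstart job (ignore if not running)
rm -rf "$APP_DIR"                   # clean the existing code of the app
mkdir -p "$APP_DIR"
tar -xf "$TARBALL" -C "$APP_DIR"    # untar the new code into the app's directory
sudo start "$APP"                   # start the app again through upstart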

This module also takes care of installing nvm for the nodejs user, so that the Node.js version can be managed locally for this user or app.
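
Done by hand, that amounts to something like the following, run as root (the nvm and Node versions here are just examples):

# Install nvm for the nodejs user and pick a Node version (versions are examples)
su - nodejs -c 'curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash'
su - nodejs -c 'export NVM_DIR="$HOME/.nvm"; . "$NVM_DIR/nvm.sh"; nvm install 0.10 && nvm alias default 0.10'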

Though a Puppet module for nodejs already exists, I had some specific requirements that I wanted to handle, which is why I've created this module.

Let me know if you have any points of improvement for this module. One thing I wanted to add was npm installation, but it had some other dependencies, and I also had some doubts about whether npm should be part of the nodejs module or not.

Build & Release Challenges: Manual DB Updates Part 2

Previous

This blog was supposed to be about the new system I thought of building to solve the problem that I discussed in my previous blog. Well, to your disappointment, this blog will not be about that; the reason is that the scope of the problem changed. In this blog I'll be discussing the new scope, how the discussion about it moved forward, and what the current state is, which means that I'm still not able to solve this problem & suggestions are welcome :).

I'll again state the problem, which was simple enough: "database updates were not automated in non-prod environments, as the same db scripts were modified during development". You can refer to the previous blog for more details about this problem. To solve it, I came up with an incremental db update approach: all new modifications are done as a new SQL update, which means that if you had a file 1.sql and you need to make a modification, a new file 1′.sql should be committed. In this way the system doesn't have to track changes within files; it just has to maintain which files have already been executed, work out which new files need to be executed, and execute the new files only. This solution works very well in a normal setup; in fact, in my last assignment I was using exactly this approach to have automated db updates across all environments.
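
To make the idea concrete, here is a minimal sketch of such an incremental runner (the tracking table name, directory layout and mysql invocation are assumptions; a real system would also need locking and error handling):

#!/bin/bash
# Minimal incremental db update runner (sketch).
# Assumes db credentials are available to the mysql client, e.g. via ~/.my.cnf.
DB="myapp_db"              # placeholder database name
SCRIPTS_DIR="db/updates"   # placeholder directory holding the numbered .sql files

# Tracking table that remembers which scripts have already been executed
mysql -e "CREATE TABLE IF NOT EXISTS applied_scripts (name VARCHAR(255) PRIMARY KEY)" "$DB"

for f in "$SCRIPTS_DIR"/*.sql; do
    name=$(basename "$f")
    already=$(mysql -N -e "SELECT COUNT(*) FROM applied_scripts WHERE name='$name'" "$DB")
    if [ "$already" -eq 0 ]; then
        echo "Applying $name"
        mysql "$DB" < "$f"
        mysql -e "INSERT INTO applied_scripts (name) VALUES ('$name')" "$DB"
    fi
done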

The incremental db updates can't be run in our current setup, though. The reason is that we have a very huge database, of the order of 100GB, so you can easily imagine that we can't afford to run the same script with slight modifications, i.e. a first script adding a column of size 20, then another script changing its size to 40, and finally one renaming it to some other name. Instead, a single script should be created by consolidating all these scripts.

The first solution that came to my mind after this new issue emerged was that for non-prod deployments we should already have a database dump of the previous release, preferably a cold dump. During deployment, 3 steps would be performed: first load the previous release db dump, then run all the consolidated scripts, and finally do the code deployment. Initially this solution looked fine, but the QA team raised a concern: loading the previous release dump meant that all the test data they had created on the QA server would be lost, and I was back at square one :).
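
In shell terms the three deployment steps would have boiled down to something like this (all paths, file names and the deploy command are placeholders):

# All names below are placeholders
mysql myapp_db < /backups/previous_release_cold_dump.sql   # 1. load the previous release's db dump
mysql myapp_db < db/consolidated/current_release.sql       # 2. run the consolidated scripts
./deploy_code.sh                                           # 3. deploy the application code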

Another solution that could be implemented was to have a rollback script for each and every script committed. This convention has the advantage of supporting incremental updates: whenever a script is updated, first its corresponding rollback script is executed, and then the updated script can be executed. This solution has its own challenges, though. The first is that it is really difficult to write a rollback script for each and every script; another is that you have to carefully manage the script files so that there is no tight coupling between them, as executing the rollback of one script can impact another script. A third issue, although less significant, is that you have to deal with data loss.
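
As an illustration of that convention (the database name, file names and paths are hypothetical), every 1.sql would be paired with a 1.rollback.sql, and an updated script would be handled roughly like this:

# Database name, paths and file names below are placeholders
mysql myapp_db < db/rollbacks/1.rollback.sql   # first undo what the old version of 1.sql did
mysql myapp_db < db/updates/1.sql              # then apply the updated 1.sql
# finally the tracking entry for 1.sql would be refreshed so it is not picked up again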

We could also have used a hybrid approach, i.e. a combination of incremental & full db updates. Till the QA phase we could use the incremental db update mechanism, in which all new script modifications are done as new scripts and executed incrementally, but for staging & production deployments the db update would be done as a full update, which requires human intervention, i.e. consolidation of the scripts. This approach had 2 challenges: the first & foremost was that it involved manual intervention, and the second major issue was that we were duplicating the db scripts.

So these were the few approaches that we thought of, and none was able to solve our problem completely, so we are still struggling with fully automating the db update process. Again, any suggestions are most welcome 🙂