How to implement CI/CD using AWS CodeBuild, CodeDeploy and CodePipeline

As we know, CI/CD (Continuous Integration/Continuous Deployment) is an inevitable part of our DevOps culture, so we should always look for a better, more efficient solution to implement it.

CI/CD gives us the capability to continuously integrate code changes, test them, deploy them, and get continuous feedback. This helps us accelerate our development speed, reduces the time spent on testing, and streamlines our releases.

So you don’t have to worry about anything except CODING, as CI/CD will take care of everything for you. 🙂

Continue reading “How to implement CI/CD using AWS CodeBuild, CodeDeploy and CodePipeline”

Security Group Strategy for AWS


Grenadier Guards are an elite British Army infantry regiment. People say they are as strong as a mountain and ruthless as hell. They guard the doors of Buckingham Palace, one of the world’s most valuable residences, and no one dares to enter. Likewise, the resources we host in the cloud are of utmost importance and value to us, and we need some method to protect them and restrict the initial point of contact from attackers and unwanted audiences. Let’s discuss how we can use security groups in the best way to secure our EC2 instances.

AWS is a cloud provider, which means the services we use are hosted in AWS data centers. EC2 (Elastic Compute Cloud), which we use to create instances, is one of the many services provided by AWS.

Continue reading “Security Group Strategy for AWS”

Stop Wasting Money, Start Cost Optimization for AWS!

Generally, organizations move to AWS to give their infrastructure the capacity to grow and to extend their capabilities, and because they only pay for the resources they use. An unfortunate side effect of this approach is that small costs regularly go unnoticed and can add up over time, leading to high monthly bills. The monetary effect of the current pandemic is forcing the world to adjust spending within organizations. Everybody is turning over every rock to find ways to cut waste without impacting the business. One way to get fast cost savings is to eliminate wasted spend on cloud services.

Continue reading “Stop Wasting Money, Start Cost Optimization for AWS!”

That’s Why Iptable Is Not A Good Fit For Domain Name?


Context

Let’s first talk about how it all started and what we achieved.

It all started with a healthy discussion in which our team members were discussing many aspects of different fields of technology. One of our colleagues mentioned OpenVPN, so we discussed its working, architecture, and workflow, which is where the role of iptables comes into the picture: on Linux, OpenVPN supports iptables as its primary firewall utility for its filtering workflow.

In between the discussion, I mentioned that I was using iptables in OpenVPN to block traffic for a domain name and that it was working fine. My colleague asked how I had implemented it and how it is even possible to use iptables for a domain, and we went through multiple logical explanations, like OSI layer support and many other things. So, we decided to do a POC of this discussion and write up a blog to make clear whether it is possible to use iptables for a domain name and, if not, which areas we can still cover with iptables for a domain name and how to work around its flaws.

Continue reading “That’s Why Iptable Is Not A Good Fit For Domain Name?”

Why We Should Use Transit & Direct Connect Gateways!

A BIG THANK YOU TO TRANSIT AND DIRECT CONNECT GATEWAYS

In everyone’s career path, there always comes a situation when we think everything will work out fine and then, suddenly, out of the blue, we realize a big issue is waiting to happen. We freak out about what we are going to do before this issue knocks at our door, right?

Something similar happened to me some time ago, so let me cut to the chase. 🙂

I will explain the benefits of using Transit and Direct Connect gateways by telling you what issues we faced without them.

Continue reading “Why We Should Use Transit & Direct Connect Gateways!”

IP Whitelisting Using Istio Policy On Kubernetes Microservices

Recently, in the first part of this series, we explored preserving the source IP address on an AWS Classic Load Balancer and Istio’s Envoy using the proxy protocol. Continuing with the second part, we will look at how we can apply IP whitelisting on Kubernetes microservices!

Problem Statement:

There are some microservices behind an internet-facing load balancer to which we want to limit access based on the source IP address. This will protect our microservices from unauthorized access.

Continue reading “IP Whitelisting Using Istio Policy On Kubernetes Microservices”

Terraforming The Better Way: Part-I

We often face complications after a certain point, when we can no longer change the foundation layer of our code because we didn’t think it through or plan and strategize the way we write code in the beginning. There are certain points that should be taken into consideration, and likewise some common mistakes that we should avoid.

Continue reading “Terraforming The Better Way: Part-I”

Preserve Source IP In AWS Classic Load-Balancer And Istio’s Envoy Using Proxy Protocol

Preserving the source IP address is an important factor in a live environment, because the IP address is one of the things that enable you to do some advanced stuff, like:

Security: Security is an important factor we cannot ignore. With the source IP, you can whitelist access to the applications behind the internet-facing load balancer.

Continue reading “Preserve Source IP In AWS Classic Load-Balancer And Istio’s Envoy Using Proxy Protocol”

AWS RDS cross account snapshot restoration

Many a time, you may have faced a problem where your production infra is in one AWS account and non-prod in another, and you are required to restore an RDS snapshot to the non-prod account for testing.

Recently, I got a task to restore my prod account’s RDS snapshot to a different account for testing purposes. It was a new and very interesting task for me, and I was in awe of how AWS thinks about all the challenges we may face in real life and provides solutions for them.

For those who are not aware of RDS, in brief: RDS is a relational database service by Amazon Web Services (AWS). It is a managed service, so we don’t have to worry about the underlying operating system or the database software installation; we just have to use it.

Amazon RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance and not just individual databases. As I told you, we have to copy and restore an RDS snapshot to a different AWS account. There is a catch! You can directly copy a snapshot to a different region within the same AWS account, but to copy it to a different AWS account you need to share the snapshot with that account and then restore it from there. So let’s begin.

To share an automated DB snapshot, create a manual DB snapshot by copying the automated snapshot, and then share that copy.

Step 1: Find the snapshot that you want to copy, and select it by clicking the checkbox next to its name. You can select a “Manual” snapshot or one of the “Automatic” snapshots, which are prefixed with “rds:”.

Step 2: From the “Snapshot Actions” menu, select “Copy Snapshot”.

Step 3: On the page that appears, select the target region. In this case, since we only have to share this snapshot with another AWS account, we can select the existing region.

  • Specify your new snapshot name in the “New DB Snapshot Identifier” field. This identifier must not already be used by a snapshot in the target region.
  • Check the “Copy Tags” checkbox if you want the tags on the source snapshot to be copied to the new snapshot.
  • Under “Encryption”, leave “Disable Encryption” selected.
  • Click the “Copy Snapshot” button.

Step 4: Once you click on “Copy Snapshot”, you can see the snapshot being created.
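If you prefer scripting over clicking through the console, the same copy step can be sketched with boto3 (the Python AWS SDK). This is a minimal sketch; the snapshot identifiers below are placeholders I made up for illustration:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Copy the automated snapshot (the one prefixed with "rds:") to a manual
# snapshot, since only manual snapshots can be shared with another account.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="rds:mydb-2020-06-01-00-05",  # placeholder name
    TargetDBSnapshotIdentifier="mydb-manual-copy",           # placeholder name
    CopyTags=True,
)

# Block until the manual copy becomes available before trying to share it.
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="mydb-manual-copy"
)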

Step 5: Once the manual snapshot is created, select it, and from the “Snapshot Actions” menu, select “Share Snapshot”.

Step 6: Set the “DB snapshot visibility” to private, add the “AWS account ID” with which we want to share the snapshot, and click on save.
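The same sharing step can also be done programmatically. A minimal boto3 sketch, assuming the snapshot name from the copy step above and a placeholder target account ID:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Sharing a manual snapshot means adding the target account ID to its
# "restore" attribute; the snapshot itself stays private.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="mydb-manual-copy",  # placeholder name
    AttributeName="restore",
    ValuesToAdd=["123456789012"],             # placeholder target AWS account ID
)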

Up to this point, we have shared our DB snapshot with the AWS account where we need to restore the DB.
Now log in to the other AWS account, go to the RDS console, and check for the snapshot that was shared just recently.

Step 7: Select the snapshot and from the “Snapshot Actions” menu select “Restore Snapshot”.

Step 8: From here we just need to restore the DB as we normally do. Fill out the required details like “DB Instance class”, “Multi-AZ-Deployment”, “Storage Type”, “VPC ID”, “Subnet group”, “Availability Zone”, “Database Port”, and “DB parameter group” as per your needs and requirements.

Step 9: Finally, click on “Restore DB instance” and voila, you are done!

Step 10: You can see the DB creation in progress. Finally, you have restored the DB to a different AWS account!
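For completeness, here is how the restore side could look in boto3, run with the target account’s credentials. A shared snapshot has to be referenced by its full ARN, and all identifiers below are placeholders:

import boto3

# This client must use the *target* account's credentials.
rds = boto3.client("rds", region_name="us-east-1")

# A snapshot shared from another account is referenced by its ARN.
snapshot_arn = "arn:aws:rds:us-east-1:111122223333:snapshot:mydb-manual-copy"

rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-restored",   # placeholder instance name
    DBSnapshotIdentifier=snapshot_arn,
    DBInstanceClass="db.t3.medium",         # pick as per your need
    DBSubnetGroupName="my-subnet-group",    # placeholder subnet group
    MultiAZ=False,
    PubliclyAccessible=False,
)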

Conclusion:

So there you go: everything you need to know to restore a production AWS RDS snapshot into a different AWS account. That’s cool, isn’t it? But I haven’t covered everything; there is a lot more to explore. We will walk through RDS best practices in our next blog. Till then, keep exploring our other tech blogs!

Image source: https://unsplash.com/photos/lRoX0shwjUQ


The closer you think you are, the less you’ll actually see

I hope you have seen the movie Now You See Me; it has a famous quote: “The closer you think you are, the less you’ll actually see.” Well, this blog is not about the movie, but about how I got stuck on an issue because I was not paying attention, not looking at things closely, and therefore seeing less, and hence was not able to resolve the issue.

There is a lot happening in today’s DevOps world, and HashiCorp has emerged as a big player in this game. Terraform is one of its open-source tools to manage infrastructure as code, and it plays well with most of the cloud providers. But with all these continuous improvements and enhancements comes the possibility of issues as well. The article below is about one such scenario, and in case you have found yourself in the same trouble, you are lucky to have reached the right page.
I was learning Terraform and performing a simple task: launching an Ubuntu EC2 instance in the us-east-1 region. For this, I required the AMI ID, which I copied from the AWS console as shown in the screenshot below.

Once I got the AMI ID, I tried to create the instance using Terraform; below is the code:

# Provider configured with hardcoded credentials for the demo; prefer
# environment variables or shared credentials in real code.
provider "aws" {
  region     = "us-east-1"
  access_key = "XXXXXXXXXXXXXXXXXX"
  secret_key = "XXXXXXXXXXXXXXXXXXX"
}

resource "aws_instance" "sandy" {
  ami           = "ami-036ede09922dadc9b"  # the AMI ID copied from the console
  instance_type = "t2.micro"
  subnet_id     = "subnet-0bf4261d26b8dc3fc"
}
I was expecting to see the magic of Terraform, but what I got was the ugly error below.

Terraform was not allowing me to spin up the instance. I tried a couple of things, which didn’t work, and as you can see, the error message didn’t give much information. Finally, I thought of giving it a try by doing the same task via the AWS web console. I searched for the same Ubuntu AMI and selected the image as shown below; the rest of the settings I kept at their defaults. And well, this time it got launched.

And it confused me even more: through the console it was working fine, but Terraform kept saying it was not allowed. After a lot of hair pulling, I finally found the culprit, which is a perfect example of how overlooking small things can lead to a blunder.

Culprit

While copying the AMI ID from the AWS console, I had copied the 64-bit (ARM) AMI ID. Please look carefully at the screenshot below.

But while creating it through the console, I was selecting the default configuration, which is 64-bit (x86). Look at the screenshot below.

To explain it further, I tried to launch the VM manually with 64-bit (ARM), selecting the 64-bit (ARM) option while choosing the AMI.

And here is the culprit: 64-bit (ARM) only supports the a1 instance type.

Conclusion

While launching the instance with Terraform, I had mistakenly used the 64-bit (ARM) AMI ID, primarily because the same AMI is listed with two AMI IDs, one per architecture, and the difference is not very visible unless you pay special attention.
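One habit that would have caught this early is checking an AMI’s architecture programmatically before using it. A minimal boto3 sketch, using the ARM AMI ID from this post:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Print the architecture of the AMI: "x86_64" vs "arm64".
# An arm64 image will not launch on a t2.micro.
response = ec2.describe_images(ImageIds=["ami-036ede09922dadc9b"])
for image in response["Images"]:
    print(image["ImageId"], image["Architecture"], image["Name"])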

So folks, next time you are choosing an AMI ID, keep in mind what type of AMI you are selecting. It will save you a lot of time.