Recently, we explored preserving the source IP address on the AWS Classic Load Balancer and Istio's Envoy using the proxy protocol in the first part of this series. Continuing with the second part, we will look at how we can apply IP whitelisting to Kubernetes microservices!
There are some microservices behind an internet-facing load balancer to which we want to limit access based on source IP address. This will protect our microservices from unauthorized access.
Do you manage your infrastructure using Terraform?
Are you duplicating your code to create environments like DEV, STAGING, QA, or PROD?
Are you tired of writing and managing separate code for each of your environments, with all the complexity that brings?
Well, there is a native solution from Terraform called "Terraform Workspaces".
Before we dive deep, let's have a brief introduction to Terraform.
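As a quick taste before the deep dive, the workspace workflow on the command line looks roughly like this. This is a minimal sketch; the environment names are just examples, and the commands are wrapped in a function so nothing runs until you call it from a Terraform working directory:

```shell
# Sketch of the Terraform workspace commands (environment names are examples).
# Wrapped in a function so nothing executes until you call it.
bootstrap_workspaces() {
  terraform workspace new dev        # create one workspace per environment
  terraform workspace new staging
  terraform workspace new prod
  terraform workspace select dev     # switch to the workspace you want to work in
  terraform workspace show           # print the currently selected workspace
}
```

Inside your configuration you can then reference `terraform.workspace` to vary resource names or sizes per environment, so a single codebase serves DEV, STAGING, QA, and PROD.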
In this blog I am going to share my opinion on spot instances and why we should go for them. While I was going through the categories (On-Demand, Reserved, and Spot) that AWS provides for launching instances, I found spot instances very fascinating and a little challenging.
You may often have faced a situation where your production infrastructure is in one AWS account and your non-production infrastructure in another, and you are required to restore an RDS snapshot to the non-production account for testing.
Recently I got the task of restoring an RDS snapshot from my prod account to a different account for testing purposes. It was a very interesting and new task for me, and I was in awe of how AWS anticipates the challenges we may face in real life and provides solutions for them.
For those who are not aware of RDS: it is a relational database service by Amazon Web Services (AWS). It is a managed service, so we don't have to worry about the underlying operating system or database software installation; we just have to use it.
Amazon RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance and not just individual databases. As I said, we have to copy and restore an RDS snapshot to a different AWS account. There is a catch! You can directly copy a snapshot to a different region within the same AWS account, but to copy it to a different AWS account you first need to share the snapshot with that account and then restore it from there. So let's begin.
To share an automated DB snapshot, create a manual DB snapshot by copying the automated snapshot, and then share that copy.
Step 1: Find the snapshot that you want to copy, and select it by clicking the checkbox next to its name. You can select a "Manual" snapshot, or one of the "Automatic" snapshots that are prefixed with "rds:".
Step 2: From the “Snapshot Actions” menu, select “Copy Snapshot”.
Step 3: On the page that appears, select the target region. In this case, since we have to share this snapshot with another AWS account, we can keep the existing region.
Specify your new snapshot name in the “New DB Snapshot Identifier” field. This identifier must not already be used by a snapshot in the target region.
Check the “Copy Tags” checkbox if you want the tags on the source snapshot to be copied to the new snapshot.
Under “Encryption”, leave “Disable Encryption” selected.
Click the “Copy Snapshot” button.
Step 4: Once you click on “Copy Snapshot”, you can see the snapshot being created.
Step 5: Once the manual snapshot is created, select the created snapshot, and from the “Snapshot Actions” menu, select “Share Snapshot”.
Step 6: Set the "DB snapshot visibility" to private, add the "AWS account ID" with which we want to share the snapshot, and click "Save".
Up to this point, we have shared our DB snapshot with the AWS account where we need to restore the DB. Now log in to the other AWS account, go to the RDS console, and look for the snapshot that was just shared.
Step 7: Select the snapshot and from the “Snapshot Actions” menu select “Restore Snapshot”.
Step 8: From here we just restore the DB as we normally would. Fill out the required details like "DB Instance class", "Multi-AZ-Deployment", "Storage Type", "VPC ID", "Subnet group", "Availability Zone", "Database Port", and "DB parameter group" as per your needs and requirements.
Step 9: Finally, click "Restore DB instance" and voila, you are done!
Step 10: You can see the DB creation in progress. Finally, you have restored the DB to a different AWS account!
So there you go. That is everything you need to know to restore a production AWS RDS snapshot into a different AWS account. Cool, isn't it? But I haven't covered everything; there is a lot more to explore. We will walk through RDS best practices in our next blog. Until then, keep exploring our other tech blogs!
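The console steps above can also be scripted with the AWS CLI. Here is a hedged sketch: every snapshot name, account ID, region, and instance class below is a placeholder, and the calls are wrapped in functions so nothing runs until you invoke them with credentials for the right account.

```shell
# Placeholder identifiers -- replace with your own values.
SRC_SNAPSHOT="rds:mydb-2020-01-01"   # automated snapshot to copy
MANUAL_SNAPSHOT="mydb-manual-copy"   # name for the manual copy
TARGET_ACCOUNT="123456789012"        # account to share the snapshot with

# Steps 1-4: copy the automated snapshot into a shareable manual snapshot.
copy_snapshot() {
  aws rds copy-db-snapshot \
    --source-db-snapshot-identifier "$SRC_SNAPSHOT" \
    --target-db-snapshot-identifier "$MANUAL_SNAPSHOT" \
    --copy-tags
}

# Steps 5-6: share the manual snapshot with the other account (stays private).
share_snapshot() {
  aws rds modify-db-snapshot-attribute \
    --db-snapshot-identifier "$MANUAL_SNAPSHOT" \
    --attribute-name restore \
    --values-to-add "$TARGET_ACCOUNT"
}

# Steps 7-9: run from the target account; a shared snapshot is referenced
# by its full ARN (source account and region below are placeholders).
restore_snapshot() {
  aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier mydb-restored \
    --db-snapshot-identifier "arn:aws:rds:us-east-1:111111111111:snapshot:$MANUAL_SNAPSHOT" \
    --db-instance-class db.t3.medium
}
```

Running this from a Jenkins job or cron makes the cross-account restore repeatable instead of a one-off console exercise.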
Have you ever thought about migrating your production database from one platform to another, and then dropped the idea because it was too risky and you were not ready to bear the downtime?
If yes, then please pay attention, because this is what we are going to perform in this article.
A few days back we were trying to migrate our production MySQL RDS from AWS to GCP Cloud SQL, and we had to migrate the data without downtime, accurately and in real time, and without the help of any database administrator.
After doing a bit of research and evaluating a few services, we finally started working with AWS DMS (Database Migration Service) and figured out that it is a great service for migrating different kinds of data.
You can migrate your data to and from the most widely used commercial and open-source databases and database platforms, such as Oracle, Microsoft SQL Server, PostgreSQL, and MongoDB.
The source database remains fully operational during the migration. The service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms.
Let’s discuss some important features of AWS DMS:
Migrates the database securely, quickly and accurately.
No downtime required; it works as a schema converter as well.
Supports various types of databases like MySQL, MongoDB, PostgreSQL, etc.
Migrates data in real time and synchronizes ongoing changes.
Data validation is available to verify the migrated database.
Compatible with a wide range of database platforms like RDS, Google Cloud SQL, on-premises databases, etc.
Inexpensive (Pricing is based on the compute resources used during the migration process).
This is a typical migration scenario.
Let’s perform step by step migration:
Note: We performed the migration from AWS RDS to GCP Cloud SQL; you can choose the database source and destination as per your requirements.
Create replication instance:
A replication instance initiates the connection between the source and target databases, transfers the data, and caches any changes that occur on the source database during the initial data load.
Use the fields below to configure the parameters of your new replication instance, including network and security information and encryption details, and select the instance class as per your requirements.
After completing all mandatory fields, click "Next", and you will be redirected to the Replication Instance tab.
Grab a coffee quickly while the instance is getting ready.
Hope you are ready with your coffee because the instance is ready now.
Now we will create two endpoints, "Source" and "Target". 2.1 Create Source Endpoint:
Click the "Run test" button after completing all fields; make sure your replication instance IP is whitelisted in the source security group. 2.2 Create Target Endpoint
Click the "Run test" button again after completing all fields; make sure your replication instance IP is whitelisted in the target DB's authorized networks.
Now we have the Replication Instance, Source Endpoint, and Target Endpoint ready.
Finally, we’ll create a “Replication Task” to start replication.
Fill the fields like:
Task Name: any name
Replication Instance: The instance we’ve created above
Source Endpoint: The source database
Target Endpoint: The target database
Migration Type: Here I chose "Migrate existing data and replicate ongoing changes", because we needed ongoing changes to be replicated.
4. Verify the task status now.
Once all the fields are completed, click "Create task" and you will be redirected to the "Tasks" tab.
Check your task status
The task has now completed successfully; you can check the table statistics and validation tabs.
If the Validation State is "Validated", the migration has been performed successfully.
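For completeness, the same three resources from the console walkthrough can be created with the AWS CLI. This is a sketch only: every identifier, hostname, password, and ARN variable below is a placeholder, and the calls are wrapped in functions so nothing executes until you call them.

```shell
# Step 1: the replication instance (identifier and class are placeholders).
create_replication_instance() {
  aws dms create-replication-instance \
    --replication-instance-identifier my-dms-instance \
    --replication-instance-class dms.t2.medium \
    --allocated-storage 50
}

# Step 2: source and target endpoints (MySQL on both sides in our case;
# hostnames and credentials are placeholders).
create_endpoints() {
  aws dms create-endpoint \
    --endpoint-identifier source-rds \
    --endpoint-type source \
    --engine-name mysql \
    --server-name my-rds.example.amazonaws.com \
    --port 3306 --username admin --password 'REPLACE_ME'
  aws dms create-endpoint \
    --endpoint-identifier target-cloudsql \
    --endpoint-type target \
    --engine-name mysql \
    --server-name 203.0.113.10 \
    --port 3306 --username admin --password 'REPLACE_ME'
}

# Step 3: the replication task. full-load-and-cdc corresponds to
# "Migrate existing data and replicate ongoing changes" in the console.
# The *_ARN variables come from the create calls above.
create_task() {
  aws dms create-replication-task \
    --replication-task-identifier my-migration \
    --source-endpoint-arn "$SOURCE_ARN" \
    --target-endpoint-arn "$TARGET_ARN" \
    --replication-instance-arn "$INSTANCE_ARN" \
    --migration-type full-load-and-cdc \
    --table-mappings file://table-mappings.json
}
```

Scripting it this way makes the migration setup reviewable and repeatable across environments instead of a series of console clicks.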
This blog talks about two possible ways of hosting your infrastructure in the cloud. It leans towards hosting on AWS, as it is based on a real-life example, but the problem applies to any cloud infrastructure setup. I'm just sharing my thoughts on the pros and cons of both approaches, but I would love to hear from the people reading this blog about their take as well.
Before jumping right into the real talk, I would like to give a bit of background on how I came up with this blog. I was working with a client, managing their cloud infrastructure, where we had four environments: dev, QA, pre-production, and production, and each environment had close to 20 instances. Apart from application instances there were some admin instances as well, such as Icinga for monitoring, Logstash for consolidating logs, a Graphite server for visualizing metrics, and a VPN server to manage people's access.
At this point we got into a discussion about whether the current setup, with a separate VPC per environment, was the right one, or whether the ideal setup would have been a single VPC with the environments separated by subnets, i.e. a pair of subnets (public/private) for each environment.
Both approaches had some pros & cons associated with them
Single VPC set-up
Pros:
You only have a single VPC to manage.
You can consolidate your admin apps, such as Icinga and the VPN server.
Cons:
As you are separating your environments through subnets, you need granular access control at the subnet level, i.e. instances in the staging environment should not be allowed to talk to dev environment instances. Similarly, you have to control people's access at a granular level as well.
The scope for human error is high, as all the instances will be in the same VPC.
VPC per environment setup
Pros:
You have a clear separation between your environments due to separate VPCs.
You have finer access control over your environments, as the access rules for a VPC are effectively the access rules for that environment.
As an admin it gives you a clear picture of your environments, and you have the option to clone your complete environment very easily.
Cons:
As mentioned in the pros of the single VPC setup, you take some financial loss, as you would be duplicating admin applications across environments.
In my opinion, the decision to choose a specific setup largely depends on the scale of your environment. If you have a small or even medium-sized environment, you can set up your infrastructure as "all environments in a single VPC"; in the case of a large setup, I strongly believe that the VPC-per-environment setup is the way to go.
Let me know your thoughts, and also the points in favour of or against each of these approaches.
If you want to minimize the amount of money you spend on Amazon Web Services (AWS) infrastructure, then this blog post is for you. In this post I will discuss the rationale behind starting and stopping AWS instances in an automated fashion and, more importantly, doing it in a correct way. Obviously you could do it through the AWS web console as well, but that would need your daily involvement. In addition, you would have to take care of starting/stopping the various services running on those instances.
Before directly jumping to how we achieved instance management in an automated fashion, I would like to state the problem we were facing. Our application testing infrastructure is on AWS, and it is a multi-component (20+) application distributed among 8-9 Amazon instances. Usually our testing team starts working at 10 am and continues till 7 pm. Earlier we used to keep our testing infrastructure up for 24 hours, even though we were using it for only 9 hours on weekdays, and not using it at all on weekends. Thus, we were wasting more than 50% of the money we spent on the AWS infrastructure. The obvious solution to this problem was an intelligent system that would make sure our Amazon infrastructure was up only during the time we needed it.
The detailed list of the requirements, and the corresponding things that we did were:
We should shut down our infrastructure instances when we are not using them.
There should be a functionality to bring up the infrastructure manually: We created a group of Jenkins jobs, which were scheduled to run at a specific time to start our infrastructure. Also a set of people have execution access to these jobs to start the infrastructure manually, if the need arises.
We should bring up our infrastructure instances when we need it.
There should be a functionality to shut down the infrastructure manually: We created a group of Jenkins jobs that were scheduled to run at a specific time to shut down our infrastructure. Also a set of people have execution access on these jobs to shut down the infrastructure manually, if the need arises.
Automated application/services start on instance start: We made sure that all the applications and services were up and running when the instance was started.
Automated graceful application/services shut down before instance shut down: We made sure that all the applications and services were gracefully stopped before the instance was shut down, so that there would be no loss of data.
We also had to make sure that all the applications and services were started in a defined, agreed-upon order.
Once we had the requirements ready, implementing them was simple, as Amazon provides a number of APIs to achieve this. We used AWS CLI, and needed to use just 2 simple commands that AWS CLI provides.
The command to start an instance: aws ec2 start-instances --instance-ids i-XXXXXXXX
The command to stop an instance: aws ec2 stop-instances --instance-ids i-XXXXXXXX
Through the above commands you can automate starting and stopping AWS instances, but you might not be doing it the correct way. If you don't restrict the AWS CLI to firing the start-instances and stop-instances commands only, other commands could be used, and that could turn out to be a problem area. Another important point to consider is that we should restrict which AWS instances the above commands can be executed on, as these commands could mistakenly be run with the instance ID of a production Amazon instance as an argument, creating undesirable circumstances 🙂
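One lightweight way to apply that second restriction at the script level is to keep an explicit allowlist of instance IDs and refuse anything else before the AWS CLI is ever invoked. A sketch with made-up instance IDs; an IAM policy is the stronger enforcement, and this wrapper is just a belt-and-braces check:

```shell
# Instance IDs the automation is allowed to touch (made-up examples).
ALLOWED_INSTANCES="i-0abc1234 i-0def5678"

# start_stop ACTION INSTANCE_ID -- only acts on allowlisted instances.
start_stop() {
  action="$1"; instance_id="$2"
  # Reject any instance ID that is not in the allowlist.
  case " $ALLOWED_INSTANCES " in
    *" $instance_id "*) ;;   # allowed, fall through
    *) echo "refusing: $instance_id is not in the allowlist" >&2; return 1 ;;
  esac
  case "$action" in
    start) aws ec2 start-instances --instance-ids "$instance_id" ;;
    stop)  aws ec2 stop-instances  --instance-ids "$instance_id" ;;
    *)     echo "usage: start_stop start|stop <instance-id>" >&2; return 1 ;;
  esac
}
```

A scheduled Jenkins job would call, say, `start_stop stop i-0abc1234`, while a production instance ID that is not in the list is rejected before any AWS call is made.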
In the next blog post I will talk about how to start and stop AWS instances in a correct way.
This blog will talk about how to mount a new volume to an existing EC2 instance. Though it is very straightforward and simple, it's good to have a checklist ready with you so that you can do things in one go instead of searching here and there. The most important thing to note in this blog is that you have to do a couple of manual operations apart from attaching the volume through the AWS web UI.
Go to the AWS Volumes screen, create a new volume if not created already.
Select Attach Volume in Actions button
Choose the instance, to which this volume needs to be mounted
Confirm the volume state changes from available to in-use
Go to the AWS Instances screen, select the EC2 instance to which volume was attached
Check the Block Devices in the details section; you can see the new volume's details there. Let's say it is attached at /dev/sdf.
Now log in to the EC2 instance. You can't see the attached volume yet (it is like an external, unformatted HDD connected to a Linux box).
To make it usable execute below commands
sudo su - [Switch to superuser]
mkfs -t ext3 /dev/xvdf [Format the drive if it is a new volume]
mkdir /home/mettl/mongo [Simply create a new directory]
mount /dev/xvdf /home/mettl/mongo [Mount the drive on newly created directory]
Make sure to change permissions according to how you use it.
To mount EBS volumes automatically on startup add an entry in /etc/fstab
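The fstab entry for the steps above can be sketched as below. The device, mount point, and filesystem match the commands earlier; the snippet only prints the line rather than writing to /etc/fstab, and `nofail` keeps the instance bootable even if the volume is detached:

```shell
# Build the /etc/fstab line for the volume formatted and mounted above.
DEVICE=/dev/xvdf
MOUNT_POINT=/home/mettl/mongo
FSTAB_LINE="$DEVICE  $MOUNT_POINT  ext3  defaults,nofail  0  2"
echo "$FSTAB_LINE"   # append this line to /etc/fstab as root
```

After appending the line, `mount -a` (as root) is a quick way to confirm the entry is valid without rebooting.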
Hope you will find this blog useful. Rest assured, this is the starting point of a new series in which I will talk about a couple of other best practices, such as why you need to have this kind of setup and how you can upgrade a volume on a running EC2 instance.
One of the suggested practices in cloud administration is to always host your applications in a Virtual Private Cloud. Also, you should have a public subnet hosting the public-facing apps, and a private subnet hosting the private apps (like a database or a back-end service/app). To know more about why you need such a setup, please read more about VPC.
This blog will talk about a scenario where you have multiple Virtual Private Clouds (hereafter referred to as VPCs), and you need to access a private app hosted in one VPC from another VPC. An example of this scenario: you have a VPC for your staging environment and another VPC for your production environment, and you'd like to sync the production database to the staging environment. It might not be straightforward to do this, as you might not be able to access the production database from outside the production VPC.
One solution to this problem would be to first take a dump of the production database on one of the public-facing machines in the production VPC, then copy that dump to a public-facing machine in the staging VPC, and finally apply this dump to the private database of the staging environment. This approach will work, but it is not a perfect solution, as you have to copy the DB dump between VPCs.
A much better approach would be to connect directly to the production database from the staging VPC and execute the dump and restore commands; for that you need direct access to the production database from the staging environment. This approach is called port forwarding. We configure port forwarding on one of the public-facing machines (the NAT instance is the preferred one) in the production VPC in such a manner that if a request comes to this machine on port x, it is forwarded to port y on a private machine in the production VPC, which is the production database in this case.
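On a Linux NAT instance, that port-x-to-port-y rule can be expressed with iptables. The sketch below uses placeholder addresses and ports, and wraps the rules in a function so nothing is applied until you call it as root on the NAT machine:

```shell
# Forward a public port on the NAT instance to the private DB.
# All addresses and ports below are placeholder examples.
enable_db_forwarding() {
  NAT_PORT=13306            # "port x" on the public-facing NAT machine
  DB_HOST=10.0.1.25         # private IP of the production database
  DB_PORT=3306              # "port y" on the database machine

  # Rewrite the destination of incoming TCP on $NAT_PORT to the DB.
  iptables -t nat -A PREROUTING -p tcp --dport "$NAT_PORT" \
    -j DNAT --to-destination "$DB_HOST:$DB_PORT"
  # Masquerade the forwarded traffic so replies flow back via the NAT box.
  iptables -t nat -A POSTROUTING -d "$DB_HOST" -p tcp --dport "$DB_PORT" \
    -j MASQUERADE
  # IP forwarding must also be enabled: sysctl -w net.ipv4.ip_forward=1
}
```

From the staging VPC you would then point your dump command at the NAT instance's public address on port 13306, and the traffic lands on the production database without the database ever being exposed directly.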
In the next blog I will talk about other alternate approaches that can be used to solve this problem.