Learn the Hacks for Running Custom Scripts at Spot Termination

Nowadays it is very common to run applications on Spot instances. Since a Spot instance can be terminated at any point in time, whether because AWS reclaims the capacity or because of an ASG scale-in event, we need something in place to handle the termination smoothly so that we can complete our final tasks before the system shuts down. That could mean executing some scripts, unmounting storage devices, shipping final log files to S3, or uploading cache data to a centralized store like Redis.
In this post, I will tackle this problem.

First of all, let’s try running a custom script prior to shutdown on our local system. If everything works fine there, the same approach will apply to EC2 Spot instances too.

I will be using an Ubuntu 20 machine that uses systemd as the service manager for this entire blog (the steps work on Ubuntu 16/18/20/22 as well).

As you might know, we can add a custom service file in the /etc/systemd/system/ location and install it into any of the targets to run a program/script as a service.

Below is an example of a service file that runs your custom script prior to shutdown and holds the shutdown process until your script has completed.

[root@ss ~]# cat /etc/systemd/system/run-pre-shutdown.service
[Unit]
Description=Run custom task prior shutdown
DefaultDependencies=no
Before=shutdown.target

[Service]
Type=oneshot
ExecStart=<your script here>
TimeoutStartSec=0

[Install]
WantedBy=shutdown.target

This unit tells systemd to run the ExecStart command before shutdown.target is reached. The TimeoutStartSec directive sets how long systemd waits for the process to start (and, for a oneshot service, to complete); setting it to 0 disables the timeout, so a long-running script will not be killed. You can read details about these directives in the systemd.service documentation.

Next, make sure the custom script referenced in ExecStart has executable permissions (the unit file itself only needs to be readable by systemd).
Then run systemctl daemon-reload followed by systemctl enable run-pre-shutdown.service.
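For illustration, here is a minimal sketch of what the script behind ExecStart could look like; the marker path and the log-marker task are my own examples, not anything prescribed by systemd:

```shell
#!/bin/bash
# Hypothetical pre-shutdown task (illustrative): record a timestamped
# marker, standing in for real work such as unmounting a device or
# flushing application state.
set -euo pipefail

MARKER_DIR="${MARKER_DIR:-/tmp/pre-shutdown-demo}"
mkdir -p "$MARKER_DIR"

# Append a line so we can verify after reboot that the hook really ran.
date +"%Y-%m-%dT%H:%M:%S shutdown hook ran" >> "$MARKER_DIR/marker.log"
```

Save it somewhere like /usr/local/bin, mark it executable with chmod +x, and point ExecStart at its absolute path (systemd requires absolute paths in Exec directives).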

So far so good, but this script will not be able to access the internet or even do a curl, and hence cannot ship logs to an S3 bucket or put data into Redis. This is because the network has already been shut down by the time systemd reaches this unit during shutdown. To handle this, we can use a systemd unit file similar to the one below, which ships data from the /var/log/ directory on your system to your S3 bucket.

ss@ubuntumini:/etc/systemd/system$ cat run-before-shutdown.service
[Unit]
Description=Send Logs
Requires=network-online.target
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStop=/usr/bin/aws s3 --region ap-south-1 sync /var/log/ s3://bucket-name/path-to-folder/


[Install]
WantedBy=network.target

Let’s break down the parts of this unit file.

The shutdown ordering of units in systemd is the reverse of the startup order. Hence, any unit that is ordered After=network.target can be sure that it is stopped before the network is shut down when the system is powered off/terminated. Since ExecStop runs when the unit is stopped, our upload command executes while the network is still up, which is exactly what we need.

Note: it is not mandatory to install it under network.target; we can also specify WantedBy=multi-user.target.

For better visibility, I will also rename the file: since systemd lists units in alphabetical order as well, I will rename it to aa-run-before-shutdown.service.

root@ubuntumini:/etc/systemd/system# mv run-before-shutdown.service aa-run-before-shutdown.service

root@ubuntumini:/etc/systemd/system# systemctl daemon-reload
root@ubuntumini:/etc/systemd/system# systemctl enable aa-run-before-shutdown.service
Created symlink /etc/systemd/system/network.target.wants/aa-run-before-shutdown.service → /etc/systemd/system/aa-run-before-shutdown.service.

root@ubuntumini:/etc/systemd/system# systemctl start aa-run-before-shutdown.service
root@ubuntumini:/etc/systemd/system# systemctl status aa-run-before-shutdown.service
● aa-run-before-shutdown.service - Send Logs
     Loaded: loaded (/etc/systemd/system/aa-run-before-shutdown.service; enabled; vendor preset: enabled)
     Active: active (exited) since Fri 2022-02-04 06:35:19 UTC; 2s ago

Feb 04 06:35:19 ubuntumini systemd[1]: Finished Send Logs.
root@ubuntumini:/etc/systemd/system#

Note: it is mandatory that the service is started and its status is Active, as above; ExecStop only runs when an active unit is stopped.

[Optional]: if you now run the command systemctl list-dependencies --after shutdown.target, you will see the service in green (meaning the active state) at the top.


In our case, make sure you have awscli installed and Access/Secret keys configured with the proper permissions to Get/Put objects on the S3 bucket.
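A quick preflight sketch for that check; the aws sts get-caller-identity call is a standard way to verify that credentials resolve, while the function name is my own:

```shell
# Preflight check: confirm awscli is present and credentials resolve
# before relying on the shutdown-time upload.
check_aws_cli() {
  if ! command -v aws >/dev/null 2>&1; then
    echo "awscli: not installed"
    return 1
  fi
  echo "awscli: installed"
  if aws sts get-caller-identity >/dev/null 2>&1; then
    echo "credentials: ok"
  else
    echo "credentials: missing or invalid"
  fi
}

check_aws_cli
```

Running this once after provisioning catches a missing CLI or broken keys long before a termination notice arrives.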

Next, let’s shut down the system to see this working.

You will see a job running that holds up the shutdown process. When the logs have been sent, the unit stops and the shutdown completes. Check your S3 bucket, and the files should be there.


The same procedure can be applied on a Spot instance, and it works just as well there. All of these steps can be automated via Ansible too.


Challenges:-

  • Big log files take longer to upload, so we should compress them before sending to S3.

For this, one can use a utility like gzip (or logrotate's built-in compress option) to compress the files before sending them to S3.

And we will have to add one more line to our service file:-

ExecStop=/usr/bin/gzip -r /var/log/apt/
ExecStop=/usr/bin/aws s3 --region ap-south-1 sync /var/log/apt/ s3://your-bucket-name/newfolder/

gzip generally achieves a high compression ratio on text logs (reducing file size by up to 20x on repetitive data), so even a 1GB log file can shrink to around 50MB, a .gz file that can easily be uploaded to S3 in the limited time available.
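Here is a small self-contained demonstration of why this helps; the file path and line count are arbitrary choices of mine, and the exact ratio depends on how repetitive your logs are:

```shell
# Generate ~4MB of repetitive text (a stand-in for a log file),
# compress it, and compare the sizes.
sample=/tmp/gzip-demo.log
yes "sample log line for the compression demo" | head -n 100000 > "$sample"
before=$(stat -c%s "$sample")
gzip -f "$sample"            # replaces the file with sample.gz
after=$(stat -c%s "$sample.gz")
echo "before=${before} bytes, after=${after} bytes"
```

On repetitive log data the compressed file is a tiny fraction of the original, which is what makes the pre-upload gzip step worthwhile.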

  • AWS recommended approach –

    If your system needs to run graceful-shutdown scripts that take much longer than this, then you can follow these approaches –

1. Polling an AWS metadata API that returns the termination notice 2 minutes in advance, so you have 2 minutes to handle it and make all necessary preparations before shutdown. https://aws.amazon.com/blogs/aws/new-ec2-spot-instance-termination-notices/

2. Since polling an endpoint continuously from within the system is a resource-consuming process, it is recommended to use AWS EventBridge and SNS to send the termination notifications instead.
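As a sketch of approach 1: the endpoint URL below is the documented instance-metadata path for the spot termination notice, while the helper name, the 5-second interval, and the cleanup-script path are my own illustrations:

```shell
# Returns success only once a spot termination notice has been issued;
# off an EC2 spot instance the endpoint is unreachable, so it fails.
spot_termination_pending() {
  curl -sf --max-time 2 \
    "${IMDS_URL:-http://169.254.169.254/latest/meta-data/spot/instance-action}" \
    >/dev/null
}

# On the instance this would run as a small daemon:
#   while ! spot_termination_pending; do sleep 5; done
#   /usr/local/bin/pre-shutdown.sh   # hypothetical cleanup; ~2 minutes left

if spot_termination_pending; then
  echo "termination notice received"
else
  echo "no termination notice"
fi
```

Note that on instances enforcing IMDSv2, the request additionally needs a session token header.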


Hope you found the article useful!


Blog Pundit: Naveen Verma

Opstree is an End to End DevOps solution provider
