As we go through a pandemic, the majority of businesses have taken things online with options like work from home, and as more and more moves over the internet, our concerns regarding cybersecurity become more and more prominent. We start to dig a little into having standards in place, and terms like Compliance, Hardening, CIS, HIPAA, and PCI-DSS come up. Today we’ll be discussing why you should have CIS benchmarks in place at the very least, and how we at Opstree have automated this for our clients.
Before moving forward, let’s get familiar with some basic terms:
All we can imagine of a routine day of a NOC engineer is looking at multiple fancy, colorful screens, but is that all there is to it?
The answer is NO! As a NOC, we have access to information that is critical for analyzing and plotting the company’s infrastructure strength, and on top of that, access to the servers and the protected network makes the situation even more critical if things happen to fall into the wrong hands.
A shell initialization file is a shell script that runs automatically each time a shell starts. The initialization file sets up the “work environment” and “customizes” the shell environment for the user. The main purpose of shell initialization files is to persist common shell configuration.
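As a quick illustration, here are a few lines such a file (e.g. ~/.bashrc) might persist — the specific alias and variables are just examples, not from the original post:

```shell
# Illustrative ~/.bashrc entries (example values)
export PATH="$PATH:$HOME/bin"   # keep personal scripts on the PATH in every session
export EDITOR=vim               # default editor picked up by tools like git and crontab
alias ll='ls -lh'               # a common convenience alias
```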
The word “data” has been crucial since the early 2000s, and over the span of these two decades it has become even more so. According to Forbes, Google believes that in the future every organisation will end up becoming a data company. Well, when it comes to data, security is one of the major concerns we have to face.
We have several common technologies to store data in today’s environment, like MySQL, Oracle, MS SQL, Cassandra, MongoDB, etc., and these will keep changing in the future. But according to DataAnyz, MySQL still has a 33% share of the market. So here we are with a technique to secure our MySQL data.
Before getting deeper into this article, let us look at the possible approaches, which can be combined, to secure MySQL data:
MySQL server hardening
MySQL application-level hardening
MySQL data encryption in transit
MySQL data at rest encryption
MySQL disk encryption
You may explore all of these approaches, but in this article we will understand the concept of MySQL data at rest encryption, with a hands-on walkthrough too.
“Data at Rest Encryption” in MySQL was introduced in MySQL 5.7 with initial support for the InnoDB storage engine only, and it has evolved significantly over time. So let’s understand “Data at Rest Encryption” in MySQL.
What is “Data at Rest Encryption” in MySQL?
“Data at rest encryption” uses a two-tier encryption key architecture, consisting of the two keys below:
Tablespace key: an encrypted key that is stored in the tablespace header
Master key: the master key is used to decrypt the tablespace keys
So let’s understand how it works.
Let’s say we have MySQL running with the InnoDB storage engine, and a tablespace is encrypted using a key, referred to as the tablespace key. This key is itself encrypted using the master key and stored in the tablespace header.
Now when a request is made to access MySQL data, InnoDB uses the master key to decrypt the tablespace key present in the tablespace header. With the decrypted tablespace key, the tablespace is decrypted and made available for read/write operations.
Note: The decrypted version of a tablespace key never changes, but the master key can be rotated.
Data at rest encryption is implemented using the keyring_file plugin to manage and encrypt the master key.
Having understood the concept of encryption and decryption, below are a few pros and cons of using DRE.
Strong AES-256 encryption is used to encrypt the InnoDB tables
It is transparent to all applications as we don’t need any application code, schema, or data type changes
Key management is not done by the DBA.
Keys can be securely stored away from the data and key rotation is very simple.
Encrypts only InnoDB tables
Can’t encrypt binary logs, redo logs, relay logs on unencrypted slaves, the slow log, error log, general log, or audit log
Though we can’t encrypt binary logs, redo logs, or relay logs in MySQL 5.7, MariaDB has implemented a mechanism to encrypt undo/redo logs, binary logs/relay logs, etc. by enabling a few flags in the MariaDB config file.
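For reference, here is a sketch of what those MariaDB flags look like — the option names come from MariaDB’s file_key_management plugin and InnoDB encryption settings, and the key file path is a placeholder, so verify both against your MariaDB version:

```ini
[mysqld]
plugin_load_add = file_key_management
file_key_management_filename = /etc/mysql/encryption/keyfile.enc
innodb_encrypt_tables = ON
innodb_encrypt_log = ON        # redo/undo log encryption
encrypt_binlog = 1             # binary/relay log encryption
encrypt_tmp_files = 1
```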
Let’s discuss a few of its problems, and some solutions to them.
On a host running MySQL, both the root user and the mysql user may access the key file (keyring file) present on the same system. To address this, we may keep our keys on a mountable drive, which can be unmounted once MySQL has started.
Data will not be in encrypted form when it gets loaded into RAM, so it can be dumped and read.
If MySQL is restarted with skip-grant-tables then again it’s havoc, but this too can be mitigated by using an unmountable drive for the keyring.
As the tablespace key remains the same, our security relies on master key rotation, which can be used to protect the master key.
NOTE: Do not lose the master key file; without it we can’t decrypt the data and will suffer data loss.
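Rotating the master key is a single statement in MySQL 5.7; only the master key and the encrypted copies of the tablespace keys are regenerated, not the data itself:

```sql
-- Re-encrypts each tablespace key under a newly generated master key;
-- the tablespace data itself is not re-encrypted.
ALTER INSTANCE ROTATE INNODB MASTER KEY;
```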
Doing is learning, so let’s try it out.
As a prerequisite, we need a machine with the MySQL server up and running. For data at rest encryption to work, we need to enable a few settings.
First, enable file-per-table with the help of the configuration file.
[root@mysql ~]# vim /etc/my.cnf

[mysqld]
innodb_file_per_table=ON
Along with the above parameter, enable the keyring plugin and the keyring path. These parameters should always be at the top of the configuration file so that they are loaded early when MySQL starts up. The keyring plugin is already installed in the MySQL server; we just need to enable it.
[root@mysql ~]# vim /etc/my.cnf

[mysqld]
early-plugin-load=keyring_file.so
keyring_file_data=/var/lib/mysql/keyring-data/keyring
innodb_file_per_table=ON
Save the file and restart MySQL.
[root@mysql ~]# systemctl restart mysql
We can check for the enabled plugin and verify our configuration.
mysql> SELECT plugin_name, plugin_status FROM INFORMATION_SCHEMA.PLUGINS WHERE plugin_name LIKE 'keyring%';
+--------------+---------------+
| plugin_name  | plugin_status |
+--------------+---------------+
| keyring_file | ACTIVE        |
+--------------+---------------+
1 row in set (0.00 sec)
Verify that the keyring plugin is running, and its location:
mysql> show global variables like '%keyring%';
+--------------------+-------------------------------------+
| Variable_name      | Value                               |
+--------------------+-------------------------------------+
| keyring_file_data  | /var/lib/mysql/keyring-data/keyring |
| keyring_operations | ON                                  |
+--------------------+-------------------------------------+
2 rows in set (0.00 sec)
Verify that file-per-table is enabled:
mysql> show global variables like 'innodb_file_per_table';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_file_per_table | ON    |
+-----------------------+-------+
1 row in set (0.33 sec)
Now we will test our setup by creating a test DB with a table, and inserting some values into the table using the commands below:
mysql> CREATE DATABASE test_db;
mysql> CREATE TABLE test_db.test_db_table (id int primary key auto_increment, payload varchar(256)) engine=innodb;
mysql> INSERT INTO test_db.test_db_table(payload) VALUES('Confidential Data');
After successfully creating the test data, run the command below from the Linux shell to check whether you can read the InnoDB file for the table you created, i.e. before encryption:

[root@mysql ~]# strings /var/lib/mysql/test_db/test_db_table.ibd
infimum
supremum
Confidential Data

Along with that, if we check our keyring file at this point, we will find it is still empty, since no table has been encrypted yet.
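The walkthrough above creates the table unencrypted; to actually encrypt it and see the difference, something like the following should work on the setup described — a sketch using the table and paths from above, to be run on the live server:

```shell
# Encrypt the existing table (MySQL 5.7 table-level syntax)
mysql -e "ALTER TABLE test_db.test_db_table ENCRYPTION='Y';"

# The payload should no longer appear in plain text in the data file
strings /var/lib/mysql/test_db/test_db_table.ibd | grep -i confidential

# And the keyring file is now populated with the master key material
ls -l /var/lib/mysql/keyring-data/keyring
```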
You can encrypt data at rest by using the keyring plugin, and you can control and manage it through master key rotation. Creating an encrypted MySQL data setup is as simple as firing a few simple commands. An encrypted system is also transparent to services, applications, and users, with minimal impact on system resources. Further, along with encryption of data at rest, we may also implement encryption in transit.
I hope you found this article informative and interesting. I’d really appreciate any and all feedback.
Yesterday was both a good and a bad day for me. Bad, because one of my Linux servers was hacked. Good, because it pushed me to take up one of the most important tasks in my pipeline: securing my systems. As people say, being agile (or lazy :)), do it when it is actually required, and yesterday was that day.
I’m a novice in infrastructure management, but I really like this field; that’s why I plunged into this domain, and now I’m really loving it because of challenges like these. Now let’s cut the crap and jump straight to the point. I’ve figured out a few of the best practices that you should always follow while configuring your “SECURE” Linux server:
Don’t use the default ssh port for logging into the system; better still, have a policy of changing your ssh port every month or two.
To go a step further, disable password based login and enable key based login only.
Use an intrusion prevention framework; I’ve found fail2ban to be a good one.
Keep all non public facing machines on private IPs.
In the case of public machines, only open those ports which are actually required.
Use a firewall to its maximum effect; iptables can be a good option.
Have a strong alert system that can monitor your systems and raise an alert in case of any suspicious activity. We use Icinga.
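The first two points above map to a couple of lines in /etc/ssh/sshd_config; a minimal sketch (the port number here is an arbitrary example):

```ini
# /etc/ssh/sshd_config
Port 2849                     # any non-default port of your choosing
PasswordAuthentication no     # disable password based login
PubkeyAuthentication yes      # key based login only
```

Remember to restart the sshd service after editing, and open the new port in your firewall first so you don’t lock yourself out.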
Though this list may not cover everything you can take care of, it can serve as a very good starting point. I would also love to hear more suggestions.
Sometimes you feel constrained by the RAM limit of your system, especially when you are running heavy duty software. In this blog I’ll talk about how you can overcome this problem by adding extra swap space to give you extra computing power.
First of all, you can execute the swapon command to check how much swap space you already have in your system:
$ swapon -s
Filename Type Size Used Priority
/dev/sda5 partition 8130556 44732 -1
The above output indicates that you already have swap space on partition /dev/sda5. The numbers under “Size” and “Used” are in kilobytes. Though I have a considerable amount of swap space configured on my system :), let’s continue and try to create a new swap area backed by a file. Before starting with the creation of swap space, let’s make sure I have enough disk space available on my system.
So I have a powerful system with 303G of disk space still available, which means I have the liberty of creating a swap space of my liking. I’ll use the data dump (dd) command to create my supplementary swap file; make sure you run this command as the root user.
$ dd if=/dev/zero of=/home/sandy/extraswap bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 2.41354 s, 222 MB/s
Now we have created a file /home/sandy/extraswap of size 512M which we will be using as a swap area. The swap area can be set up by issuing the mkswap command:
$ mkswap /home/sandy/extraswap
Setting up swapspace version 1, size = 524284 KiB
no label, UUID=685ac04a-ad31-48a8-83df-9ffa3dbc6982
Finally we have to run the swapon command on our newly created swap file to bring it into the game. First, swapon -s shows the state before:
$ swapon -s
Filename Type Size Used Priority
/dev/sda5 partition 8130556 46248 -1
Now activate the new swap file and check again:
$ swapon /home/sandy/extraswap
$ swapon -s
Filename Type Size Used Priority
/dev/sda5 partition 8130556 46248 -1
/home/sandy/extraswap file 524284 0 -2
As you can notice, when we first executed the swapon -s command the new swap file was not in the picture; once we executed swapon /home/sandy/extraswap, it became available.
One last thing we have to do is add an entry for this swap file in our /etc/fstab file, since swap areas activated with swapon are not enabled by default on the next system boot.
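For the swap file created above, the /etc/fstab entry would look something like this (the last two fields are the dump and fsck order):

```ini
# /etc/fstab
/home/sandy/extraswap  none  swap  sw  0  0
```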
One of the problems I used to have as a build & release engineer was managing logins to a huge number of boxes from my Linux system. At the scale of 5-10 machines it’s not a big problem, but once you have close to 100+ boxes it is not humanly possible to remember the IPs of all of them.
The usual approach to this problem is to maintain a reference file, where you map each machine name to its IP and look up the IP of a box from this file, but after some time this solution turns out not to be that efficient. Another solution is to have a DNS server where you can store such mappings, and then you can access these machines using their names only. This is the ideal solution, but what if you don’t have a DNS server? You still have to execute the ssh command “ssh user@machine”.
I developed a simple solution to this problem: I created a utility script, connect.sh. This script takes a machine name as an argument, and then we have multiple conditional statements that check which ssh command is to be executed for that machine name (the IPs below are placeholders for the real ones):

if [ "mc1" == "$1" ]; then
    ssh user@192.168.1.1
elif [ "mc2" == "$1" ]; then
    ssh user@192.168.1.2
elif [ "mc3" == "$1" ]; then
    ssh user@192.168.1.3
fi
This solution worked really well for me, as I’m now saved from typing the whole ssh command. Also, for machine names I’ve followed a convention, i.e. <environment>_<application>; for example, the entry for a machine in the release environment that hosts an application catalog would be release_catalog, and similarly dev_catalog, staging_catalog, pt_catalog… so you don’t have to remember machine names either :).
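If the if/elif chain grows unwieldy, the same mapping can be expressed as a small lookup function; a sketch with placeholder user and IPs:

```shell
# Map a short machine name to its ssh target (placeholder ips)
target_for() {
  case "$1" in
    release_catalog) echo "user@192.168.1.10" ;;
    dev_catalog)     echo "user@192.168.1.11" ;;
    staging_catalog) echo "user@192.168.1.12" ;;
    *)               return 1 ;;
  esac
}

# connect <machine-name>: look up the target and ssh to it
connect() {
  local target
  target=$(target_for "$1") || { echo "unknown machine: $1" >&2; return 1; }
  ssh "$target"
}
```

Adding a new box is then a one-line change to the case table instead of a new elif block.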