Unix File Tree Part-2

For those who have surfed straight to this blog, please check out the previous part of this series, Unix File Tree Part-1; and for those who have stayed tuned for this part, welcome back. In the previous part, we discussed the philosophy behind, and the need for, a file tree. In this part, we will dive deep into the significance of each directory.

(Figure: the Linux file tree)

Dayum!! That’s a lot of stuff to gulp at once, so we’ll knock things out one after the other.

Major directories

Let’s talk about the crucial directories which play a major role.

  • /bin: When we started crawling on Linux, this is what helped us get on our feet. Yes, you read that right: whether you want to copy a file, move it somewhere, create a directory, or find out the date or the size of a file, all the basic operations without which the OS won’t even listen to you (Linux yawning meanwhile) happen because of the executables present in this directory. Most of the programs in /bin are in binary format, having been created by a C compiler, but on modern systems some are shell scripts.
  • /etc: When you want things to behave the way you want, you go to /etc and put all your desired configuration there (imagine if your girlfriend had an /etc, life would have been easier). Whether it is about the various services or daemons running on your OS, the configuration here makes sure things work the way you want them to.
  • /var: This is the guy who has kept an eye on everything since the time you booted the system (consider him like Heimdall from Thor). It contains files to which the system writes data during the course of its operation. Among the various sub-directories within /var are /var/cache (cached data from application programs), /var/games (variable data relating to games in /usr), /var/lib (dynamic data libraries and files), /var/lock (lock files created by programs to indicate that they are using a particular file or device), /var/log (log files), /var/run (PIDs and other system information that is valid until the system is booted again) and /var/spool (mail, news and printer queues).
  • /proc: You can think of /proc just like the thoughts in your brain: illusory and virtual. Being an illusory file system, it does not exist on disk; instead, the kernel creates it in memory. It is used to provide information about the system (originally about processes, hence the name). If you navigate to /proc, the first thing you will notice is that there are some familiar-sounding files, and then a whole bunch of numbered directories. The numbered directories represent processes, better known as PIDs, and within them lives information about the command that occupies them. The files contain system information such as memory (meminfo), CPU information (cpuinfo), and available filesystems (see the quick example after this list).
  • /opt: It is like a guest room in your house where the guest stayed for a prolonged period and became part of your home. This directory is reserved for all the software and add-on packages that are not part of the default installation.
  • /usr: In the original Unix implementations, /usr was where the home directories of the users were placed (that is to say, /usr/someone was then the directory now known as /home/someone). In current Unixes, /usr is where user-land programs and data (as opposed to ‘system-land’ programs and data) live. The name hasn’t changed, but its meaning has narrowed from “everything user related” to “user-usable programs and data”. As such, some people now read the name as ‘User System Resources’ and not ‘user’ as was originally intended.
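
For instance, here is a quick, hedged look around /proc (the exact entries vary by kernel and distribution):

$ ls /proc | head              # numbered directories are PIDs
$ grep MemTotal /proc/meminfo  # memory information
$ head -2 /proc/cpuinfo        # CPU information
$ cat /proc/1/comm             # the command behind PID 1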

Potato or potaaato, what is the difference?

We’ll be discussing those directories which always confuse us: the ones which have almost similar purposes but still sit in separate locations, and when asked about them we go like ummmm…….

/bin vs /usr/bin vs /sbin vs /usr/local/bin

This might have become almost clear when I explained the significance of /usr in the paragraph above. Since the Unix designers planned /usr to hold the local directories of individual users, it contained all of the sub-directories like /usr/bin, /usr/sbin and /usr/local/bin. But the question remains the same: how is the content different? (A quick hands-on check follows the lists below.)

/usr/bin:

  • /usr/bin is a standard directory on Unix-like operating systems that contains most of the executable files that are not needed for booting or repairing the system. 
  • A few of the most commonly used are awk, clear, diff, du, env, file, find, free, gzip, less, locate, man, sudo, tail, telnet, time, top, vim, wc, which, and zip.

/usr/sbin:

  • The /usr/sbin directory contains non-vital system utilities that are used after booting.
  • This is in contrast to the /sbin directory, whose contents include vital system utilities that are necessary before the /usr directory has been mounted (i.e., attached logically to the main filesystem). 
  • A few of the more familiar programs in /usr/sbin are adduser, chroot, groupadd, and userdel. 
  • It also contains some daemons, which are programs that run silently in the background rather than under the direct control of a user, waiting until they are activated by a particular event or condition; examples include crond and sshd.
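
A quick way to check where a given program lives on your own machine (a hedged example; paths vary by distribution, and on merged-/usr systems /bin and /sbin are simply symlinks into /usr):

$ type -a ls          # every place 'ls' appears in your PATH
$ command -v sshd     # daemons usually resolve to /usr/sbin, if it is on your PATH
$ ls -ld /bin /sbin   # on merged-/usr systems these point into /usr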

I hope I have covered most of the directories you might come across frequently, and that your questions have been answered.
Now that we know the significance of each UNIX directory, it’s time to use them wisely, the way they are supposed to be used.
Please feel free to reach out to me with any suggestions.
Goodbye till next time!

References:
https://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/usr.html
https://askubuntu.com/questions/130186/what-is-the-rationale-for-the-usr-directory
https://askubuntu.com/questions/308045/differences-between-bin-sbin-usr-bin-usr-sbin-usr-local-bin-usr-local
http://index-of.es/Varios-2/How%20Linux%20Works%20What%20Every%20Superuser%20Should%20Know.pdf

What Without Internet?

I had a dream a few days ago in which the internet no longer existed. When I woke up, I wondered: what would happen if there were no internet for a day?


Sure, it would cause quite a bit of panic and uproar, and it would be havoc for organizations to work without the internet, but if the internet resumed normally after the 24 hours were over, things would return to normal pretty quickly.


Now, switch it off for a longer time, possibly a week or a month; that would have a more lasting impact, since in that time a significant number of people would find themselves unable to meet their obligations or do their business at all. This would be somewhat mitigated by the fact that the situation is a sort of ‘natural disaster’, but still, those who really depend on the internet for their business would likely feel a lasting negative impact.
               
What if I told you there are organizations that work as if there were no internet? Yes, that’s right: for security reasons, they prefer not to use the public internet. Banks, space organizations, and many security agencies fall under this category.


Now, a question arises: how do they manage to do regular updates and install different packages on their various systems? The answer is quite simple: by using a satellite server.


Recently I got a task related to this context, in which:

1. The prerequisite is that your system has no internet connectivity, but one of the systems you can connect to does.
2. Set up an individual satellite server in your local network.
3. Install packages and regular updates through it.


To do so, I prefer to use an FTP satellite server.

How to implement an FTP satellite server

Pre-requisites

An Ubuntu server, and a non-root user with sudo privileges.
The system is configured with vsftpd.

Suppose we are installing Jenkins.

Make a directory pkg.jenkins.io in /var/www/html/ and populate it with the Jenkins package repository, as sketched below.
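
A minimal sketch, assuming the internet-connected machine can reach pkg.jenkins.io and that you mirror its debian-stable repository (what exactly you mirror is up to you):

$ sudo mkdir -p /var/www/html/pkg.jenkins.io
$ cd /var/www/html/pkg.jenkins.io
$ sudo wget --mirror --no-parent --no-host-directories https://pkg.jenkins.io/debian-stable/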



(Screenshot: contents of pkg.jenkins.io)
Paste the host link of the Debian packages into a .list file under /etc/apt/sources.list.d, as in the example below.
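
For instance, assuming the satellite server is reachable at 192.168.1.10 (an illustrative address) and serves /var/www/html over HTTP via Apache (adjust the URL scheme if you serve the mirror through vsftpd instead):

$ echo "deb http://192.168.1.10/pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list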


Run the command sudo apt-get update.

Now run the command to install the package.
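
Assuming the Jenkins mirror above, the package name is jenkins:

$ sudo apt-get install jenkins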



“ The internet made fame wack and anonymity cool ”

So far, from the above context, we have learned about setting up FTP for users with a local account. If you need to use an external authentication source, you might want to look into vsftpd’s support for virtual users. This offers a rich set of options through the use of PAM, the Pluggable Authentication Modules, and is a good choice if you manage users in another system such as LDAP or Kerberos.

I hope I explained everything clearly enough to understand. If you have a better way of implementing a satellite server, please help me improve this blog.

Thanks for reading my writing. I’d really appreciate any kind of feedback in the comments.

Cheers till next time!!!!

Where there is a shell, there is a way.

Well, as a DevOps engineer, I like to play around with shell scripts and shell commands, especially on a remote system, as it just adds some level of fun. But what’s more thrilling than running shell scripts and commands on a remote server? Making them return dynamic web pages or JSON from that remote system.

Yes, for most of us it comes as a surprise that, just like PHP, JSP, or ASP, shell scripts can also return dynamic web pages. But, as a wise man said a long time ago: “where there is a shell, there is a way”.

Isn’t PHP or JSP a better option for web development?

For a web developer … yes. But as a DevOps engineer, I want to do all possible stuff from a shell script, and it is quite useful to have shell scripts as a server-side language, as we all know their power.

Why do we need this exactly?

Isn’t ‘for fun’ an obvious reason? But for those who want more than that, I have some points:

  • We can use it as a time-series-based data exporter.
  • We might want an API that returns system info as JSON, and we don’t have access to PHP.
  • We might want to see the system information as a web page when we hit a URL.
  • It’s not limited to system info; you can do whatever you want with it.
  • With a bare minimum on your machine, you can get the most out of it.

Let’s get started

Now let’s get done with the boring part, i.e. configuring Apache.
I am assuming that Apache is installed on the system, as it is needed to serve your web pages. So, to let Apache serve your script, you need to enable the CGI module with two simple commands.
$ cd /etc/apache2/mods-enabled
$ sudo ln -s ../mods-available/cgi.load
and you are ready to go.
Now move to the directory where you are going to put your shell scripts.
$ cd /usr/lib/cgi-bin
Once in the directory, create a new file hello.sh:
$ vim hello.sh
and write the following script:
#!/bin/bash
# a CGI script must print the Content-type header, then a blank line, then the body
echo "Content-type: text/html"
echo ""
echo "hello world! from shell script"
Make sure you make that file executable.
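For example:
$ chmod +x hello.sh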
Now I think you have a pretty good idea of what your web page is going to display.
So restart the Apache server
$ sudo systemctl restart apache2.service
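Then hit the script from a browser or with curl (assuming the default Debian/Ubuntu ScriptAlias, which maps /cgi-bin/ to /usr/lib/cgi-bin):
$ curl http://localhost/cgi-bin/hello.sh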

Let’s take it to the next level

Now let’s see what else we can do. Unlike PHP, Java, or Python, we don’t have any framework for shell scripts, so we might have to work a bit. But that’s the fun part, right?
So let’s get started.

Now we are simply going to display which users are using the /usr/sbin/nologin shell.
So here are the files I created in the cgi-bin directory in order to display that data as a web page.
Header file

<!doctype html>
<html lang="en">
  <head>
    <!-- Required meta tags -->
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">

    <!-- Bootstrap CSS -->
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">

    <title>Hello, world!</title>
  </head>
  <body>
    <h1>All the users using the /usr/sbin/nologin shell</h1>
 
 <table class="table">
  <thead>
    <tr>
      <th scope="col">Name</th>
      <th scope="col">User Id</th>
      <th scope="col">Group Id</th>
    </tr>
  </thead>
  <tbody>
Footer file

</tbody>
</table>

    <!-- Optional JavaScript -->
    <!-- jQuery first, then Popper.js, then Bootstrap JS -->
    <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js"></script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js"></script>
  </body>
</html>

hello.sh

#!/bin/bash
echo "Content-type: text/html"
echo ""
cat header
# one table row per user whose login shell is /usr/sbin/nologin: name, UID, GID
awk -F ':' '$7 == "/usr/sbin/nologin" {print "<tr><td>"$1"</td><td>"$3"</td><td>"$4"</td></tr>"}' /etc/passwd
cat footer
So let’s see what all those files are.
The header and footer files basically contain the Bootstrap starter template, which gives you a prebuilt web page; in hello.sh we print those files with cat, and in the middle we run a shell command that finds the users using the /usr/sbin/nologin shell, building a table row for each with awk.
So now when you hit the same URL, the output will be a Bootstrap-styled table of those users.

Now I guess we have the basic idea of how we can use a shell script to display the web pages we need. We can also use it as an API, since it can return JSON as well; it’s up to you how far you take it, as sketched below.
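
As a minimal sketch of the JSON idea (the file name sysinfo.sh and the chosen fields are purely illustrative), another CGI script in the same directory could return system info as JSON:

#!/bin/bash
# return a small JSON document with the host name, load average and free memory
echo "Content-type: application/json"
echo ""
echo "{\"hostname\": \"$(hostname)\", \"loadavg\": \"$(cut -d' ' -f1 /proc/loadavg)\", \"mem_free_kb\": $(awk '/MemFree/ {print $2}' /proc/meminfo)}"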

Summary

So, in this blog, we saw how, with a bare minimum on the machine, we can get the most out of it. It is not limited to just these use cases: it can be used to create an API which returns valuable information about the system or the services running on it. With some good scripting and some tricky HTML template design, we can achieve a lot.

Resolving Segmentation Fault (“Core dumped”) in Ubuntu

This error may strike your Ubuntu at any moment.
A few days ago, while doing my routine work on my Ubuntu laptop, I suddenly encountered the error "Segmentation fault (core dumped)". I then got to know that this error can strike Ubuntu, or any other operating system, at any moment, as crashing binaries don't depend on us. A segmentation fault occurs when a program tries to access memory it is not allowed to touch, such as a page that doesn't exist or a read-only or freed location; "core dumped" means the system wrote the crashed program's memory image to a file named core for later debugging. It generally happens during upgrades.
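
If you want to see the message in a harmless way, here is a hedged illustration (whether a core file is actually written depends on your shell and core-dump settings):

$ ulimit -c unlimited   # allow core files to be written
$ sleep 100 &
$ kill -SEGV $!         # the shell then reports: Segmentation fault (core dumped)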

While running some commands during the core-dump situation, you may encounter "Unable to open lock file"; this is because the system is trying to grab a lock that does not exist, due to the crashed binaries of some specific programs.

You may do backtracking or debugging to resolve it, but the real solution is to repair the broken packages, which we can do by performing the steps mentioned below:

Command-line:
Step 1: Remove the lock files present at different locations.

sudo rm -rf /var/lib/apt/lists/lock /var/cache/apt/archives/lock /var/lib/dpkg/lock

and restart your system.

Step 2: Remove repository cache.

sudo apt-get clean

Step 3: Update and upgrade your repository cache.

sudo apt-get update && sudo apt-get upgrade

Step 4: Now upgrade your distribution; it will update your packages.

sudo apt-get dist-upgrade

Step 5: Find the broken packages and purge them forcefully.

sudo dpkg -l | grep '^..r' | awk '{print $2}' | xargs sudo apt-get -y purge

Apart from the command line the best way which will always work is:

Step 1: Open the GRUB menu by pressing the Esc key (or holding Shift) while the system restarts.

Step 2: Select Advanced options for Ubuntu


Step 3: Run Ubuntu in recovery mode and you will be presented with a list of options.


Step 4: First select “Repair broken packages”


Step 5: Then select “Resume normal boot”

So, we have two methods of resolving a segmentation fault: the CLI and the GUI. Sometimes it may happen that the apt command itself is not working because of the segfault, so the CLI method will not work; in that case, don't worry, as the GUI method will always work for us.