Recap Amrita InCTF 2019 | Part 1

Amrita InCTF 10th Edition is an offline CTF (Capture the Flag) event hosted by Amrita University at their Amritapuri campus, about 10 km from Kayamkulam in Kerala, India. In this year's edition, two people from Opstree were invited to the final round after roughly two months of solving challenges online. The final rounds were held on the 28th, 29th and 30th of December 2019. The first two days consisted of talks by various people from the industry, and the third day was reserved for the final competition. In this three-part blog series, we'd like to share the knowledge, experiences and learnings from this three-day event.

Talk from Cisco

The hall held a little more than 300 people, many of them college students ranging from sophomore to pre-final and final year. To our surprise, there were also roughly 50+ school students sitting ready to compete in the final event. The opening talk by Cisco was refreshing and insightful for everyone in the room. It focused on how technology is changing lives around the world, whether it is machine learning helping doctors treat patients faster, drones putting out fires, or IoT-enabled systems providing efficient irrigation in remote areas. The speakers also made the point that breadth across technologies and tools often serves you longer than in-depth knowledge of a single technology.
One thing that really stuck with me: never learn a technology just for the sake of it or for the hype around it, but with a thought on how it can solve a problem around us.

Talk Title: Cyberoam startup and experiences – Hemal Patel

Hemal Patel talked about a couple of his startups and how he has always learned through failures. The talk was full of experiences, and it is always humbling to listen to someone describe how they failed over and over again and how that eventually led them to succeed at what they are doing today. He talked about Cyberoam, now a Sophos company, which secures organizations with a wide range of product offerings at the network gateway. He went on to give us an overview of how business is done with different governments around the world, how entrepreneurship is so much more than just tackling a problem at a business level, and how Cyberoam ended up building the product it has today.

Talk on Security by Cisco – Radhika Singh and Prapanch Ramamoorthy 

This was a wide-ranging talk about a lot of things that affect us. We'll try to list most of it here.

The talk started out by exploring free/open WiFi. While free WiFi has the obvious benefit of being free, it also comes with a lot of risks. To name a few:

→ Sniffing

→ Snooping 

→ Spoofing

These are just a few of the ways you can be compromised over free WiFi.


The talk also presented some facts about data: only about 1% of total data is generated by laptops and desktops; the rest comes from smartphones, smart TVs and other IoT devices. Hence the very important point of securing IoT devices.

It was pointed out during the talk that the majority of companies worry about security at the far end of the IoT chain, i.e. in the cloud, but far fewer care about the edge devices and how a lack of security measures there can compromise them.

There was a really interesting case study about how IoT devices brought down the internet for the entire US east coast (the 2016 Mirai botnet attack on Dyn), and how, in its early days, the attack was just meant to buy some more time to submit an assignment.

Hackers prefer to exploit IOT devices over cloud infrastructure.

Memes apart, the talk also focused on privacy vs. security and how Google's encrypted DNS resolution helps secure DNS-based internet traffic on the web.

National Critical Information Infrastructure Protection Centre(NCIIPC)

National Critical Information Infrastructure Protection Centre (NCIIPC) is an organisation of the Government of India created under Sec 70A of the Information Technology Act, 2000 (amended 2008), through a gazette notification on 16th Jan 2014, based in New Delhi, India. It is designated as the National Nodal Agency in respect of Critical Information Infrastructure Protection. 

Representatives from this organization spoke at the event and talked in detail about what a CII (Critical Information Infrastructure) is and how any company with such infrastructure needs to inform the government about it.

A CII is basically any information infrastructure (belonging to a financial, medical or similar institution) which, if compromised, can affect the national security of the country. Attacking any such infrastructure is an act of cyber terrorism as defined by Section 66F of the IT Act, 2000.

They talked about some of the threats they deal with at the national level, in particular how the BGP routing protocol, which works on trust, was recently abused to route Indian traffic via Pakistani servers/routers.


One more interesting part of the talk was about the composition of the internet.

We tend to think that the internet we see makes up 90% of the total internet, but in reality it's just about 4%. Bummer, right? The deep web makes up roughly 90% of the total, and as a matter of fact nobody fully knows the darknet and its volume, so even the numbers above are little more than a guess.

 This was a very insightful talk and put a lot of things in perspective.

Digital Forensics – Beyond the Good Ol’ Data Recovery by Ajith Ravindran 


This talk by Ajith Ravindran mainly focused on Computer forensics, which is the application of investigation and analysis techniques to gather and preserve evidence from a particular computing device in a way that is suitable for presentation in a court of law.

The majority of tips and tricks shared were about retrieving data from Windows machines even after it has been deleted, so that it can be produced as evidence of crimes.

Some of the tricks talked about are mentioned below : 

The Prefetch files in Windows give us a list of executables last accessed and the number of times each was executed (see the sketch just after this list).

Userassist allows investigators to see what programs were recently executed on a system.

Shellbags list folders that have been accessed by a user at least once.

The Master File Table gives us a list of all the files on the system, including those that entered the system via the network or USB drives.

$UsnJrnl gives us information about all user activity in the past one to two days.

Hiberfil.sys is a file the system creates when the computer goes into hibernation mode. Hibernate mode uses the Hiberfil.sys file to store the current state (memory) of the PC on the hard drive and the file is used when Windows is turned back on.
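To make the Prefetch point a bit more concrete, here is a minimal Python sketch of our own (not something shown in the talk) that simply lists Prefetch entries on a Windows host along with their last-modified times; real forensic tools parse far more out of these files.

# List Windows Prefetch entries with their last-modified times.
# Minimal sketch: assumes a Windows host and read access to C:\Windows\Prefetch.
from datetime import datetime
from pathlib import Path

prefetch_dir = Path(r"C:\Windows\Prefetch")

for pf in sorted(prefetch_dir.glob("*.pf")):
    mtime = datetime.fromtimestamp(pf.stat().st_mtime)
    # The file name encodes the executable name plus a path hash,
    # e.g. NOTEPAD.EXE-D8414F97.pf
    print(f"{mtime:%Y-%m-%d %H:%M:%S}  {pf.name}")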

That was all from the Day 1 talks. Come back next Tuesday for the talks from Day 2, and in the final part of this series we'll cover our attack/defense and jeopardy CTF experience.

Stay Tuned, Happy Blogging!

Jenkins Pipeline Global Shared Libraries

Although the language used here is Groovy, Jenkins does not allow us to use Groovy to its fullest, so we can say that Jenkins Pipelines are not exactly Groovy. Classes that you write in src are processed in a "special Jenkins way" over which you have no control, and depending on the scenario, objects in Groovy don't always behave as you would expect them to.

Our view is that putting all pipeline functions in vars is the more practical approach. There is no good way to do inheritance there, and while we wanted to use Jenkins Pipelines "the right way", it has turned out to be far more practical to use vars for global functions.

Practical Strategy
As we know, Jenkins Pipeline's shared library support allows us to define and develop a set of shared pipeline helpers in a repository and provides a straightforward way of using those functions in a Jenkinsfile. This simple example illustrates how you can provide input to a pipeline with a simple YAML file, so you can centralize all of your pipelines into one library.

Directory Structure

You would have the following folder structure in a git repo:

└── vars
    ├── opstreePipeline.groovy
    ├── opstreeStatefulPipeline.groovy
    ├── opstreeStubsPipeline.groovy
    └── pipelineConfig.groovy

Setting up Library in Jenkins Console.

This repo is configured under Manage Jenkins > Configure System in the Global Pipeline Libraries section. There, Jenkins requires you to give the library a name, for example opstree-library.

Pipeline.yaml

Let's assume the project repository has a pipeline.yaml file in the project root that provides input to the pipeline:

ENVIRONMENT_NAME: test
SERVICE_NAME: opstree-service
DB_PORT: 3079
REDIS_PORT: 6079
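Since the whole pipeline is driven by this file, it can be handy to sanity-check it locally before committing. Here is a small sketch of our own in Python (it assumes PyYAML is installed; it is not part of the shared library itself):

# Quick local sanity check for pipeline.yaml before committing.
# Assumes PyYAML is installed (pip install pyyaml); not part of the shared library.
import sys
import yaml

REQUIRED_KEYS = {"ENVIRONMENT_NAME", "SERVICE_NAME", "DB_PORT", "REDIS_PORT"}

with open("pipeline.yaml") as f:
    config = yaml.safe_load(f)

missing = REQUIRED_KEYS - set(config or {})
if missing:
    sys.exit(f"pipeline.yaml is missing keys: {', '.join(sorted(missing))}")

print("pipeline.yaml looks fine:", config)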

Jenkinsfile

Then, to utilize the shared pipeline library, the Jenkinsfile in the root of the project repo would look like:

@Library ('opstree-library@master') _
opstreePipeline()

pipelineConfig.groovy

So how does it all work? First, the following function is called to get all of the configuration data from the pipeline.yaml file:

def call() {
  Map pipelineConfig = readYaml(file: "${WORKSPACE}/pipeline.yaml")
  return pipelineConfig
}

opstreePipeline.groovy

You can see the call to this function in opstreePipeline(), which is called by the Jenkinsfile.

def call() {
    node('Slave1') {

        stage('Checkout') {
            checkout scm
        }

         def p = pipelineConfig()

        stage('Prerequisites'){
            serviceName = sh (
                    script: "echo ${p.SERVICE_NAME}|cut -d '-' -f 1",
                    returnStdout: true
                ).trim()
        }

        stage('Build & Test') {
                sh "mvn --version"
                sh "mvn -Ddb_port=${p.DB_PORT} -Dredis_port=${p.REDIS_PORT} clean install"
        }

        stage ('Push Docker Image') {
            docker.withRegistry('https://registry-opstree.com', 'dockerhub') {
                sh "docker build -t opstree/${p.SERVICE_NAME}:${BUILD_NUMBER} ."
                sh "docker push opstree/${p.SERVICE_NAME}:${BUILD_NUMBER}"
            }
        }

        stage ('Deploy') {
            echo "We are going to deploy ${p.SERVICE_NAME}"
            sh "kubectl set image deployment/${p.SERVICE_NAME} ${p.SERVICE_NAME}=opstree/${p.SERVICE_NAME}:${BUILD_NUMBER} -n ${p.ENVIRONMENT_NAME}"
            sh "kubectl rollout status deployment/${p.SERVICE_NAME} -n ${p.ENVIRONMENT_NAME}"
        }
    }
}

You can see the logic easily here: the pipeline reads which environment the developer wants to deploy to and which DB and Redis ports need to be used, all from pipeline.yaml.

Benefits

The benefits of this approach are many, some of them are as mentioned below:

  • Writing Groovy code is no longer the developer's concern.
  • Structure of the Pipeline.yaml is really flexible, where entire data structures can be passed as input to the pipeline.
  • Code redundancy is reduced to a large extent.

Ideally, every Jenkinsfile could simply look like this:

@Library ('opstree-library@master') _
opstreePipeline()

and opstreePipeline() would just read the project type from pipeline.yaml and dynamically call the right function, such as opstreeStatefulPipeline() or opstreeStubsPipeline(). Since Pipelines are not exactly Groovy, this isn't possible, so one drawback is that each project ends up with a slightly different-looking Jenkinsfile. A solution is in progress. So, what do you think?


The closer you think you are, the less you’ll actually see

I hope you have seen the movie Now You See Me; it has a famous quote, "The closer you think you are, the less you'll actually see." Well, this blog is not about the movie, but about how I got stuck on an issue because I was not paying attention, not looking at things closely, and therefore seeing less and unable to resolve it.

There is a lot happening in today's DevOps world, and HashiCorp has emerged as a big player in this game. Terraform is one of its open source tools for managing infrastructure as code, and it plays well with most cloud providers. But with all these continuous improvements and enhancements comes the possibility of issues as well. This article is about one such scenario, and in case you have found yourself in the same trouble, you are lucky to have reached the right page.
I was learning Terraform and performing a simple task: launching an Ubuntu EC2 instance in the us-east-1 region. For this I needed the AMI ID, which I copied from the AWS console as shown in the screenshot below.

Once I had the AMI ID, I tried to create the instance using Terraform. Below is the code:

provider "aws" {
  region     = "us-east-1"
  access_key = "XXXXXXXXXXXXXXXXXX"
  secret_key = "XXXXXXXXXXXXXXXXXXX"
}

resource "aws_instance" "sandy" {
  ami           = "ami-036ede09922dadc9b"
  instance_type = "t2.micro"
  subnet_id     = "subnet-0bf4261d26b8dc3fc"
}
I was expecting to see the magic of Terraform, but what I got was the ugly error below.

Terraform was not allowing me to spin up the instance. I tried a couple of things which didn't work, and as you can see, the error message didn't give much information. Finally, I thought of trying the same task via the AWS web console. I searched for the same Ubuntu AMI and selected the image as shown below, kept everything else at the defaults, and this time the instance launched.

This confused me even more: through the console it worked fine, but with Terraform it said not allowed. After a lot of hair pulling I finally found the culprit, which is a perfect example of how overlooking small things can lead to a blunder.

Culprit

While copying the AMI ID from the AWS console, I had copied the 64-bit (ARM) AMI ID. Look carefully at the screenshot below.

But while creating it through the console, I was selecting the default configuration, which is 64-bit (x86). Look at the screenshot below.

To dig further, I tried to launch the VM manually with the 64-bit (ARM) image, selecting the 64-bit (ARM) option while choosing the AMI.

And here is the culprit: the 64-bit (ARM) image only supports the a1 instance type, not t2.micro.

Conclusion

While launching the instance with Terraform, I had mistakenly used the 64-bit (ARM) AMI ID, primarily because the same AMI listing shows two AMI IDs, and the difference is easy to miss unless you pay special attention.

So folks, next time you choose an AMI ID, keep in mind what type of AMI you are selecting. It will save you a lot of time.
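One way to avoid grabbing the wrong-architecture ID in the first place is to resolve the AMI programmatically instead of copy-pasting it from the console. Below is a rough sketch of our own using boto3 (it assumes AWS credentials are configured, and the filter values are only illustrative); in Terraform itself the aws_ami data source serves the same purpose.

# Look up the latest Ubuntu 18.04 AMI for a given architecture instead of
# copy-pasting an ID from the console. Assumes boto3 and AWS credentials.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_images(
    Owners=["099720109477"],  # Canonical's AWS account
    Filters=[
        {"Name": "name", "Values": ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]},
        {"Name": "architecture", "Values": ["x86_64"]},  # use "arm64" only if you want a1 instances
        {"Name": "state", "Values": ["available"]},
    ],
)

latest = max(response["Images"], key=lambda img: img["CreationDate"])
print(latest["ImageId"], latest["Name"])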

My stint with Runc vulnerability

Today I was given the task of setting up a new QA environment. I said, "No issue, it should be done quickly since we use Docker; I just need to provision a VM and run the already available, QA-ready Docker image on it." So I started, and guess what, today was not my day. I got the error below while running the app image.

docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:293: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown.

I figured my Valentine's Day had gone for a toss. As usual, I took the help of the Google god to figure out what this issue was all about, and after a few minutes I found a blog pretty close to the issue I was facing:

https://medium.com/@dirk.avery/docker-error-response-from-daemon-1d46235ff61d

Bang on, the issue was identified: a new runc vulnerability disclosed a few days back.

https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736

The fix for this vulnerability was released by Docker on February 11, but the catch is that the fix does not play well with the older 3.x kernel series.

While setting up the QA environment I had installed the latest stable version of Docker, 18.09.2, and since the kernel version was 3.10.0-327.10.1.el7.x86_64, Docker was not able to function properly.
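For reference, here is a tiny pre-flight check of our own (not part of the original troubleshooting) that simply compares the kernel major version reported by the host against the 4.x requirement:

# Rough pre-flight check: the patched Docker (18.09.2) needs a newer kernel
# than stock CentOS 7 ships with. Our own sketch, not from the original post.
import platform

kernel = platform.release()            # e.g. '3.10.0-327.10.1.el7.x86_64'
major = int(kernel.split(".")[0])

if major < 4:
    print(f"Kernel {kernel} is too old for Docker 18.09.2 on this setup; upgrade to 4.x first.")
else:
    print(f"Kernel {kernel} looks fine.")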

So as suggested in the blog I upgraded the Kernel version to 4.x.

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum repolist
yum --enablerepo=elrepo-kernel install kernel-ml
yum repolist all
awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

And there we go: after the reboot, everything worked like a charm.

So, a word of caution to everyone: there is a major vulnerability in Docker, CVE-2019-5736. For more details, go through the link:

https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736

As a fix, upgrade your Docker to 18.09.2, and make sure that you are on kernel 4+ as suggested in the blog.

https://docs.docker.com/engine/release-notes/

Now I can go for my Valentine Party 👫

Prometheus Overview and Setup

Overview

Prometheus is an open source monitoring solution that gathers time-series based numerical data. The project was started at SoundCloud by ex-Google engineers.

To monitor your services and infrastructure with Prometheus, your service needs to expose an endpoint in the form of a port or URL, for example localhost:9090. The endpoint is an HTTP interface that exposes the metrics.

Some platforms, such as Kubernetes and SkyDNS, are directly instrumented, which means you don't have to install any exporters to monitor them; Prometheus can scrape them directly.

One of the best things about Prometheus is that it stores everything in a time series database (TSDB), so you can use mathematical operations and queries to analyze the data. Prometheus ships with its own local on-disk TSDB and keeps the monitoring data in its storage directory (volumes).
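Once the server is up (the setup follows below), those queries are exposed over a simple HTTP API. As a quick illustration of our own, the sketch below asks Prometheus for the up metric using only the standard /api/v1/query endpoint and the Python standard library (it assumes a server listening on localhost:9090):

# Query a running Prometheus server over its HTTP API (default port 9090).
# Uses only the standard /api/v1/query endpoint and the Python standard library.
import json
import urllib.parse
import urllib.request

query = "up"  # 1 for every target Prometheus can currently scrape
url = "http://localhost:9090/api/v1/query?" + urllib.parse.urlencode({"query": query})

with urllib.request.urlopen(url) as response:
    result = json.loads(response.read().decode())

for series in result["data"]["result"]:
    print(series["metric"], series["value"])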

Pre-requisites

  • A CentOS 7 or Ubuntu VM
  • A non-root sudo user, preferably one named prometheus

Installing Prometheus Server

First, create a new directory to store all the files you download in this tutorial and move to it.

mkdir /opt/prometheus-setup
cd /opt/prometheus-setup
Create a user named “prometheus”

useradd prometheus

Use wget to download the latest build of the Prometheus server and time-series database from GitHub.


wget https://github.com/prometheus/prometheus/releases/download/v2.0.0/prometheus-2.0.0.linux-amd64.tar.gz
The Prometheus monitoring system consists of several components, each of which needs to be installed separately.

Use tar to extract prometheus-2.0.0.linux-amd64.tar.gz:

tar -xvzf /opt/prometheus-setup/prometheus-2.0.0.linux-amd64.tar.gz
Place the executables somewhere in your PATH, or add their location to your PATH for easy access.

mv prometheus-2.0.0.linux-amd64  prometheus
sudo mv  prometheus/prometheus  /usr/bin/
sudo chown prometheus:prometheus /usr/bin/prometheus
sudo chown -R prometheus:prometheus /opt/prometheus-setup/
mkdir /etc/prometheus
mv prometheus/prometheus.yml /etc/prometheus/
sudo chown -R prometheus:prometheus /etc/prometheus/
prometheus --version
  

You should see the following message on your screen:

  prometheus, version 2.0.0 (branch: HEAD, revision: 0a74f98628a0463dddc90528220c94de5032d1a0)
  build user:       root@615b82cb36b6
  build date:       20171108-07:11:59
  go version:       go1.9.2
Create a service for Prometheus 

sudo vi /etc/systemd/system/prometheus.service
[Unit]
Description=Prometheus

[Service]
User=prometheus
ExecStart=/usr/bin/prometheus --config.file /etc/prometheus/prometheus.yml --storage.tsdb.path /opt/prometheus-setup/

[Install]
WantedBy=multi-user.target
systemctl daemon-reload

systemctl start prometheus

systemctl enable prometheus

Installing Node Exporter


Prometheus was developed for the purpose of monitoring web services. In order to monitor the metrics of your server, you should install a tool called Node Exporter. Node Exporter, as its name suggests, exports lots of metrics (such as disk I/O statistics, CPU load, memory usage, network statistics, and more) in a format Prometheus understands. Enter the Downloads directory and use wget to download the latest build of Node Exporter which is available on GitHub.

Node Exporter is a binary written in Go that monitors resources such as CPU, RAM and the filesystem.

wget https://github.com/prometheus/node_exporter/releases/download/v0.15.1/node_exporter-0.15.1.linux-amd64.tar.gz

You can now use the tar command to extract node_exporter-0.15.1.linux-amd64.tar.gz:

tar -xvzf node_exporter-0.15.1.linux-amd64.tar.gz

mv node_exporter-0.15.1.linux-amd64 node-exporter

Move the binary into your PATH:

mv node-exporter/node_exporter /usr/bin/

Running Node Exporter as a Service

Create a user named “prometheus” on the machine on which you are going to create node exporter service.

useradd prometheus

To make it easy to start and stop the Node Exporter, let us now convert it into a service. Use vi or any other text editor to create a unit configuration file called node_exporter.service.


sudo vi /etc/systemd/system/node_exporter.service
This file should contain the path of the node_exporter executable, and also specify which user should run the executable. Accordingly, add the following code:

[Unit]
Description=Node Exporter

[Service]
User=prometheus
ExecStart=/usr/bin/node_exporter

[Install]
WantedBy=default.target

Save the file and exit the text editor. Reload systemd so that it reads the configuration file you just created.


sudo systemctl daemon-reload
At this point, Node Exporter is available as a service which can be managed using the systemctl command. Enable it so that it starts automatically at boot time.

sudo systemctl enable node_exporter.service
You can now either reboot your server or use the following command to start the service manually:
sudo systemctl start node_exporter.service
Once it starts, use a browser to view Node Exporter's web interface, which is available at http://your_server_ip:9100/metrics. You should see a page with a lot of plain-text metrics.
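If you prefer the terminal to the browser, the same endpoint can be read with a few lines of Python (standard library only; it assumes Node Exporter is running on its default port 9100):

# Fetch the Node Exporter metrics page and show a small sample of it.
# Standard library only; assumes Node Exporter on its default port 9100.
import urllib.request

with urllib.request.urlopen("http://localhost:9100/metrics") as response:
    metrics = response.read().decode()

# Print the first few non-comment metric lines as a sanity check.
samples = [line for line in metrics.splitlines() if line and not line.startswith("#")]
for line in samples[:10]:
    print(line)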

Starting Prometheus Server with a new node

Before you start Prometheus, you must first edit a configuration file for it called prometheus.yml.

vim /etc/prometheus/prometheus.yml
Copy the following code into the file.

# my global configuration which means it will applicable for all jobs in file
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute. scrape_interval should be provided for scraping data from exporters 
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute. Evaluation interval checks at particular time is there any update on alerting rules or not.

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'. Here we will define our rules file path 
#rule_files:
#  - "node_rules.yml"
#  - "db_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape: In the scrape config we can define our job definitions
scrape_configs:
  # The job name is added as a label `job=` to any timeseries scraped from this config.
  - job_name: 'node-exporter'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'. 
    # target are the machine on which exporter are running and exposing data at particular port.
    static_configs:
      - targets: ['localhost:9100']
After adding the configuration to prometheus.yml, restart the service:

systemctl restart prometheus
This scrape_configs section defines a job called node-exporter and includes the URL of your Node Exporter's web interface in its array of targets. The scrape_interval is set to 15 seconds so that Prometheus scrapes the metrics every fifteen seconds. You could name the job anything you want, but calling it "node" lets you use Node Exporter's default console templates.
Use a browser to visit Prometheus's homepage at http://your_server_ip:9090. Then visit http://your_server_ip:9090/consoles/node.html to access the Node Console and click on your server, localhost:9100, to view its metrics.

Logstash Timestamp

Introduction

A few days back I encountered a simple but painful issue. I am using ELK to parse my application logs and generate some meaningful views. The issue: Logstash was inserting my logs into Elasticsearch with the current timestamp instead of the actual time of log generation.
This makes a mess of generating graphs with correct time values in Kibana.
So I dug around and found a way to overcome this concern: I made some changes in my Logstash configuration to replace Logstash's default timestamp with the actual timestamp of my logs.

Logstash Filter

Add the following piece of code in the filter plugin section of Logstash's configuration file, and it will make Logstash insert logs into Elasticsearch with the actual timestamp of your logs instead of Logstash's own timestamp (the current time).
 
date {
  locale => "en"
  timezone => "GMT"
  match => [ "timestamp", "yyyy-MM-dd HH:mm:ss +0000" ]
}
In my case, the timezone of my logs was GMT. You need to replace "yyyy-MM-dd HH:mm:ss +0000" with the pattern that matches the actual timestamp format of your logs (note that MM is the month and mm is the minutes).
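For intuition about what the pattern has to match, here is the same style of timestamp parsed with plain Python (our own illustration; the Logstash date filter itself uses Joda-Time patterns):

# Illustration only: parse the same kind of log timestamp with plain Python
# to see what the Logstash pattern has to match. %z accepts offsets such as +0000.
from datetime import datetime

raw = "2019-02-14 10:32:45 +0000"
parsed = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S %z")
print(parsed.isoformat())   # 2019-02-14T10:32:45+00:00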

Description

The date plugin overrides Logstash's timestamp with the timestamp from your logs. Now you can easily adjust the timezone in Kibana and it will show your logs at the correct time.
(Note: Kibana adjusts UTC time to your browser's timezone.)

Classless Inter Domain Routing Made Easy (Cont..)

Introduction :

As we discussed IP addresses and their classes in the previous blog, we can now start with subnetting.

Network Mask /Subnet Mask –

A mask, as the name suggests, covers something. An IP address is made up of two components: the network address and the host address. The IP address needs to be separated into these two parts, and this separation is done by the subnet mask. The host part of an IP address can be further divided into subnet and host addresses if more subnetworks are needed, and this is done by subnetting. It is called a subnet mask or network mask because it is used to identify the network address of an IP address by performing a bitwise AND operation with the netmask.
The subnet mask is 32 bits long and is used to divide an IP address into its network and host portions.
In a subnet mask, all the network bits are set to 1 and all the host bits are set to 0.
 
Whenever we see an IP address together with its mask, we can easily identify which part of it is the network and which part is the host.
 
FORMAT :
mmmmmmmm.mmmmmmmm.mmmmmmmm.mmmmmmmm
(each bit is either 1 or 0, with all the 1s appearing contiguously before the 0s)
EXAMPLE :
Class A network mask
In binary: 11111111.00000000.00000000.00000000 (the first 8 bits are fixed)
In decimal: 255.0.0.0
Let the given IP be 10.10.10.10. We can identify that it belongs to Class A, so the subnet mask will be 255.0.0.0 and the network address will be 10.0.0.0.

Class B network mask
In binary: 11111111.11111111.00000000.00000000 (the first 16 bits are fixed)
In decimal: 255.255.0.0
Let the given IP be 150.150.150.150. We can identify that it belongs to Class B, so the subnet mask will be 255.255.0.0 and the network address will be 150.150.0.0.

Class C network mask
In binary: 11111111.11111111.11111111.00000000 (the first 24 bits are fixed)
In decimal: 255.255.255.0
Let the given IP be 200.10.10.10. We can identify that it belongs to Class C, so the subnet mask will be 255.255.255.0 and the network address will be 200.10.10.0.
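The same AND operation can be spelled out in a few lines of Python; this is just our own illustration reproducing the three examples above:

# Derive the network address by AND-ing an IP with its subnet mask,
# reproducing the Class A/B/C examples above.
def to_int(dotted):
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(value):
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

for ip, mask in [("10.10.10.10", "255.0.0.0"),
                 ("150.150.150.150", "255.255.0.0"),
                 ("200.10.10.10", "255.255.255.0")]:
    network = to_int(ip) & to_int(mask)
    print(f"{ip} & {mask} -> {to_dotted(network)}")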

Subnetting :

The method of dividing a network into two or more networks is called subnetting.
A subnetwork, or subnet, is a logical subdivision of an IP network.
Subnetting provides:
– Better security
– Smaller collision and broadcast domains
– Greater administrative control of each network
Subnetting – why?
Answer: the shortage of IP addresses.
Solutions:
1) Subnetting – divide a bigger network into smaller networks and reduce wastage
2) NAT – Network Address Translation
3) Classless IP addressing – no bits are reserved for network and host by class
 
Now the problem is how to identify the class (and network portion) of an IP address.
Let the IP be 10.10.10.10. In classful terms we would say it is Class A, but in classless addressing we check it through the subnet mask. If the mask is 255.255.255.0, we can say that the first 24 bits are masked for the network and the remaining 8 bits are for hosts.
Bits are borrowed from the host part and added to the network part (shown here octet by octet for a Class B address):

Before subnetting:  Network ID | Network ID | Host ID | Host ID
After borrowing:    Network ID | Network ID | Subnet  | Host ID
Borrowing further:  Network ID | Network ID | Subnet  | Subnet/Host
For example, with:
150.150.0.0 – class identifier / network address
150.150.2.4 – host address (the IP given to a host)
255.255.255.0 – subnet mask
150.150.2.0 – subnet address

CIDR : Classless Inter Domain Routing

CIDR (Classless Inter-Domain Routing, sometimes called supernetting) is a way to allow more flexible allocation of Internet Protocol addresses than was possible with the original system of IP Address classes. As a result, the number of available Internet addresses was greatly increased, which along with widespread use of network address translation, has significantly extended the useful life of IPv4.
Let the IP be 200.200.200.200 with a /24 mask:

Network ID (first 24 bits) | Host ID (last 8 bits)

The network mask tells us how many bits are masked (set to 1). Here the first 24 bits are masked.
In decimal: 255.255.255.0
In binary: 11111111.11111111.11111111.00000000
The total number of 1s is 24, so we can say that 24 bits are masked.
 
This way of writing the network mask can also be represented in another form, called CIDR notation.

CIDR – 200.200.200.200/24
Here 24 is the number of ones, i.e. the number of bits masked.
This is basically the method ISPs (Internet Service Providers) use to allocate a block of addresses to a company or a home.

Example:
190.10.20.30/28 – here 28 bits are masked and represent the network, and the remaining 4 bits represent the host.
The /x notation represents how many bits are turned on (1s).
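Python's standard ipaddress module can translate between the /prefix notation and the dotted mask. As an illustration of our own, here is the 190.10.20.30/28 example from above:

# Translate between CIDR prefix notation and the dotted netmask using the
# standard library, for the 190.10.20.30/28 example above.
import ipaddress

interface = ipaddress.ip_interface("190.10.20.30/28")
network = interface.network

print("Netmask:         ", network.netmask)          # 255.255.255.240
print("Network address: ", network.network_address)  # 190.10.20.16
print("Total addresses: ", network.num_addresses)    # 16, of which 14 are usable hosts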

CLASS C SUBNETTING :

 
Determining available host addresses:

Take the network 200.10.20.0 (11001000.00001010.00010100.00000000). The last octet can count from 00000000 up to 11111111, i.e. 256 combinations in total. Two of those addresses are reserved (the network address and the broadcast address), so:

2^N – 2 = 2^8 – 2 = 254

(N = 8 because we have 8 host bits in this case, and 2 is subtracted because 2 addresses are reserved.)
So 254 host addresses are available here.
 
FORMULAS :

Number of subnets: (2^x) – 2, where x is the number of bits borrowed
Number of hosts: (2^y) – 2, where y is the number of host bits (zeros)
Magic number or block size (total number of addresses per subnet) = 256 – mask value of the interesting octet

For example, with a /27 mask the interesting octet is 224, so the block size is 256 – 224 = 32.
Let the IP address be 200.10.20.20/24 and suppose we need 5 subnets.

Network address (AND the IP with the mask, since the total number of 1s in the mask is 24):

IP in binary:  11001000.00001010.00010100.00010100   (200.10.20.20)
Mask:          11111111.11111111.11111111.00000000   (255.255.255.0)
IP AND mask:   11001000.00001010.00010100.00000000   (200.10.20.0)

As we need 5 subnets: 2^n – 2 >= 5, and n = 3 satisfies the condition.
So we need to turn 3 of the host zeros into ones (borrow 3 bits) to create the subnets.
 
Network address:                    11001000.00001010.00010100.00000000   (200.10.20.0)
With the 3 borrowed bits turned on: 11001000.00001010.00010100.11100000   (200.10.20.224)

The new mask is therefore /27 (255.255.255.224), and the subnets increase in steps of the block size, 256 – 224 = 32:
                                                                                  
Subnet 0: 200.10.20.0/27
Subnet 1: 200.10.20.32/27   (+32, the block size)
Subnet 2: 200.10.20.64/27
Subnet 3: 200.10.20.96/27
Subnet 4: 200.10.20.128/27
Subnet 5: 200.10.20.160/27
Subnet 6: 200.10.20.192/27
Subnet 7: 200.10.20.224/27

How to assign host addresses

Subnet 0: 200.10.20.0/27, with subnet broadcast address 200.10.20.31
Subnet 1: 200.10.20.32/27, usable hosts 200.10.20.33 up to 200.10.20.62, with subnet broadcast address 200.10.20.63

So within subnet 1, the addresses 200.10.20.33 and so on up to 200.10.20.62 can be assigned to hosts: 30 usable host addresses per /27 subnet.
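The whole worked example can be cross-checked with the same ipaddress module; this sketch of ours lists the eight /27 subnets of 200.10.20.0/24 along with their usable host ranges and broadcast addresses:

# Cross-check the worked example: split 200.10.20.0/24 into /27 subnets and
# print each subnet's usable host range and broadcast address.
import ipaddress

base = ipaddress.ip_network("200.10.20.0/24")

for index, subnet in enumerate(base.subnets(new_prefix=27)):
    hosts = list(subnet.hosts())        # usable addresses, 30 per /27
    print(f"Subnet {index}: {subnet}  "
          f"hosts {hosts[0]}-{hosts[-1]}  "
          f"broadcast {subnet.broadcast_address}")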

Conclusion :

As the world moves rapidly towards digitalization, the use of IP addresses keeps increasing. To reduce the wastage of IP addresses, implementing CIDR is important, as it allows more organizations and users to take advantage of IPv4.