One more reason to use Docker

Recently I was working on a project involving Terraform and AWS. I was using my local machine to test the Terraform code, and luckily everything was going fine. But when we tried to run it against the production environment, we hit some issues. As usual, we started digging into the problem and finally found the cause, which was quite a silly one 😜: the Terraform version on the production server and the Terraform version on my local development machine were not the same.

After wasting quite some time on this issue, I decided to come up with a solution so it would never happen again.

But before jumping to the solution, let’s ask whether this problem is specific to Terraform, or whether we face similar issues in other scenarios as well.

Well, we certainly do face similar issues elsewhere. Let’s talk about some of those scenarios first.

Suppose you have to create a CI pipeline for a project, and you want the code to be reusable. The pipeline is ready and working fine, and some time later you have to implement the same kind of pipeline for a different project. You can reuse the code, but you no longer know the exact versions of the tools the original pipeline was using, which leads to a fresh round of errors.

Let’s take another example: suppose you are developing something in a programming language of your choice. That utility or program will surely have dependencies of its own. Installing those dependencies directly on your local system can mess up the system itself, or the package manager you use for dependency management. A decent example is pip, the dependency manager for Python 😉.

These are scenarios we have actually faced, and they are the motivation for writing this blog.

The Solution

To solve all of these problems we need just one thing: containers. I could say Docker here, but containers and Docker are two different things.

That said, for managing containers we use Docker.

So let’s go back to our first problem, the Terraform one. There are multiple ways to solve it, but we chose to solve it using Docker.

As Docker says

Build Once and Run Anywhere

Based on this idea, we created a Dockerfile for the required Terraform version and stored it alongside the code. Basically, our Dockerfile looks like this:

# Pin the Terraform version so every environment runs the same binary
FROM alpine:3.8

LABEL maintainer="OpsTree.com"

ENV TERRAFORM_VERSION=0.11.10

ARG BASE_URL=https://releases.hashicorp.com/terraform

# Install the pinned Terraform release and create a non-root user to run it
RUN apk add --no-cache curl unzip bash \
    && curl -fsSL -o /tmp/terraform.zip ${BASE_URL}/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip \
    && unzip /tmp/terraform.zip -d /usr/bin/ \
    && rm /tmp/terraform.zip \
    && adduser -D opstree

WORKDIR /opstree/terraform

USER opstree

In this Dockerfile, we pin the exact Terraform version needed to run the code.
In a similar fashion, all of the other problems listed above can be solved with Docker. We just have to create a Dockerfile with the exact dependencies that are needed, and that same file will work across environments and projects.
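
For instance, the pip scenario mentioned earlier can be handled the same way. Here is a minimal sketch, assuming a requirements.txt and an app.py in the current directory (the image tag and entry point are illustrative):

# Run the app and its pip dependencies inside a pinned Python image
# instead of installing anything on the local system
$ docker run --rm \
    -v "${PWD}":/app -w /app \
    python:3.8-slim \
    sh -c "pip install -r requirements.txt && python app.py"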

To take it to the next level, you can also add a Makefile to make everyone’s life easier. For example:

IMAGE_TAG=latest
build-image:
    docker build -t opstree/terraform:${IMAGE_TAG} -f Dockerfile .

run-container:
    docker run -itd --name terraform -v ~/.ssh:/root/.ssh/ -v ~/.aws:/root/.aws -v ${PWD}:/opstree/terraform opstree/terraform:${IMAGE_TAG}

plan-infra:
    docker exec -t terraform bash -c "terraform plan"

create-infra:
    docker exec -t terraform bash -c "terraform apply -auto-approve"

destroy-infra:
    docker exec -t terraform bash -c "terraform destroy -auto-approve"
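
With this in place, anyone on the team can build the image and run the Terraform workflow using just the targets above, for example:

$ make build-image
$ make run-container
$ make plan-infra
$ make create-infra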

And trust me, once this utility is available, the people using it will thank you for it.

Now I am hoping you will also try applying Docker to as many of these scenarios as possible.

There are a few more scenarios yet to be explored that could extend the use of Docker; if you find one before I do, please let me know.

Thanks for reading. I’d really appreciate any and all feedback, so please leave a comment below.

Cheers till the next time.

Jenkins Pipeline Global Shared Libraries

Although the language used here is Groovy, Jenkins does not allow us to use Groovy to its fullest, so we can say that Jenkins Pipelines are not exactly Groovy. Classes that you write in src are processed in a “special Jenkins way” over which you have no control, and depending on the scenario, objects in Groovy don’t behave the way you would expect objects to behave.

Our view is that putting all pipeline functions in vars is the more practical approach. There is no good way to do inheritance there, and we wanted to use Jenkins Pipelines “the right way”, but it has turned out to be far more practical to use vars for global functions.

Practical Strategy

As we know, Jenkins Pipeline’s shared library support allows us to define and develop a set of shared pipeline helpers in a single repository and provides a straightforward way of using those functions from a Jenkinsfile. This simple example illustrates how you can provide input to a pipeline with a simple YAML file so that you can centralize all of your pipelines into one library.

Directory Structure

You would have the following folder structure in a git repo:

└── vars
    ├── opstreePipeline.groovy
    ├── opstreeStatefulPipeline.groovy
    ├── opstreeStubsPipeline.groovy
    └── pipelineConfig.groovy

Setting up the Library in the Jenkins Console

This repo is configured under Manage Jenkins > Configure System in the Global Pipeline Libraries section. In that section, Jenkins requires you to give the library a Name, for example opstree-library.

Pipeline.yaml

Let’s assume the project repository has a pipeline.yaml file in the project root that provides input to the pipeline:

ENVIRONMENT_NAME: test
SERVICE_NAME: opstree-service
DB_PORT: 3079
REDIS_PORT: 6079

Jenkinsfile

Then, to utilize the shared pipeline library, the Jenkinsfile in the root of the project repo would look like:

@Library ('opstree-library@master') _
opstreePipeline()

pipelineConfig.groovy

So how does it all work? First, the following function is called to get all of the configuration data from the pipeline.yaml file:

def call() {
  Map pipelineConfig = readYaml(file: "${WORKSPACE}/pipeline.yaml")
  return pipelineConfig
}

opstreePipeline.groovy

You can see the call to this function in opstreePipeline(), which is called by the Jenkinsfile.

def call() {
    node('Slave1') {

        stage('Checkout') {
            checkout scm
        }

        def p = pipelineConfig()

        stage('Prerequisites') {
            serviceName = sh (
                    script: "echo ${p.SERVICE_NAME}|cut -d '-' -f 1",
                    returnStdout: true
                ).trim()
        }

        stage('Build & Test') {
            sh "mvn --version"
            sh "mvn -Ddb_port=${p.DB_PORT} -Dredis_port=${p.REDIS_PORT} clean install"
        }

        stage ('Push Docker Image') {
            docker.withRegistry('https://registry-opstree.com', 'dockerhub') {
                sh "docker build -t opstree/${p.SERVICE_NAME}:${BUILD_NUMBER} ."
                sh "docker push opstree/${p.SERVICE_NAME}:${BUILD_NUMBER}"
            }
        }

        stage ('Deploy') {
            echo "We are going to deploy ${p.SERVICE_NAME}"
            sh "kubectl set image deployment/${p.SERVICE_NAME} ${p.SERVICE_NAME}=opstree/${p.SERVICE_NAME}:${BUILD_NUMBER} -n ${p.ENVIRONMENT_NAME}"
            sh "kubectl rollout status deployment/${p.SERVICE_NAME} -n ${p.ENVIRONMENT_NAME}"
        }
    }
}

You can easily see the logic here: the pipeline reads which environment the developer wants to deploy to and which db_port and redis_port should be used, all from pipeline.yaml.

Benefits

This approach has many benefits, some of which are listed below:

  • Developers no longer need to know how to write Groovy code to define a pipeline.
  • The structure of pipeline.yaml is really flexible; entire data structures can be passed as input to the pipeline.
  • Code redundancy is reduced to a large extent.

Ideally, every project’s Jenkinsfile could look the same, like this:

@Library ('opstree-library@master') _
opstreePipeline()

and opstreePipeline() would just read the project type from pipeline.yaml and dynamically call the right function, such as opstreeStatefulPipeline() or opstreeStubsPipeline(). Since Pipelines are not exactly Groovy, this isn’t possible, so one drawback is that each project ends up with a slightly different-looking Jenkinsfile. A solution is in progress! So, what do you think?


Docker-Compose As A Bundled Application

When Docker was released as a new containerization tool, it took the market by storm. With its lightweight images, multi-OS support, and ability to ship containers, its popularity only soared. I have been using it for more than six months now, and I can see why. Hypervisors, another type of virtualization tool, are hard on hardware: they require a lot of resources to run, which makes running applications on them far more expensive than running them in containers. This is the problem Docker solved, and hence its popularity. The Docker engine just sits on the host OS and translates the instructions from an application to the underlying OS. It does not need an extra layer of virtual OS, just the binaries and libraries of the application bundled in the image. Right? Now, hold on to that thought.

We have all been working with Docker and, by extension, with docker-compose. Why? Because it makes our job easy: we are spared from typing hundreds of ad-hoc commands in the terminal to set up a slightly (or very) complicated application with certain dependencies. We can just describe it in a `docker-compose.yml` file and our job is done. However, the problem arises when we have to share that compose file:

  • Other users might need the file in a different environment, so they have to manually edit all the values to fit their needs and keep a separate compose file for each environment.
  • Troubleshooting configuration issues becomes tedious, since there is no single place where the application’s configuration is stored; changes have to be made directly in each file.
  • This also makes communication between the Dev and Ops teams trickier than it has to be, resulting in communication gaps and wasted time.

To get a clearer picture of the issue, consider a typical multi-environment setup:

We have a compose file and configuration for each environment, and we make environment-specific changes in each of those compose files, which can be a long manual task depending on the size of the project.


All of this points to the fact that there is no good way to bundle applications that are themselves built from efficiently bundled Docker images. See the irony here? Well, there “was” no way, until there was. Enter docker-app. This relatively new tool is the answer to packaging docker-compose applications. I came across it when I was struggling to reuse a docker-compose application I had written in another environment. As soon as I read about it, I had to try it, and I loved it. It made the task much easier by providing a template of the compose file and a key-value store for environment-dependent parameters.


The result is an artifact with the extension ‘.dockerapp’. We can pass configuration values through the CLI, through files, or both, and it will render a compose file according to those values.

Let us now go through an example of how docker-app works. I am going to deploy a dummy application, Spring3hibernate, from the Opstree GitHub repository in a QA environment and later in PROD by making simple configuration changes.
Installing docker-app is easy, though there is one thing to keep in mind: it can be installed either as a plugin of the Docker CLI or as a standalone CLI tool. I will be installing it as a standalone CLI tool on Linux. If you wish to install it as a Docker CLI plugin and/or on another OS, visit its GitHub page: https://github.com/docker/app (also a good reference for the basics).
Before continuing, please ensure you have the Docker CLI and docker-compose installed.
Please follow the steps below to install docker-app:

$ export OSTYPE="$(uname | tr A-Z a-z)"
$ curl -fsSL --output "/tmp/docker-app-${OSTYPE}.tar.gz" \
"https://github.com/docker/app/releases/download/v0.8.0/docker-app-${OSTYPE}.tar.gz"
$ tar xf "/tmp/docker-app-${OSTYPE}.tar.gz" -C /tmp/
$ install -b "/tmp/docker-app-standalone-${OSTYPE}" /usr/local/bin/docker-app

Create a new directory in your home directory; we’ll call it the app home:

$ cd ~
$ mkdir spring3hibernate-app
$ cd spring3hibernate-app/

Now, clone the app from the Opstree GitHub repository. This app needs only MySQL as a dependency.

$ git clone https://github.com/opstree/spring3hibernate.git

We need to update the database properties file and the Nginx config file with the contents below, respectively:

$ vim ~/spring3hibernate-app/spring3hibernate/src/main/resources/database.properties

Replace the content of that file with the following:

database.driver=com.mysql.jdbc.Driver
database.url=jdbc:mysql://mysql:3306/employeedb
database.user=admin
database.password=password
hibernate.dialect=org.hibernate.dialect.MySQLDialect
hibernate.show_sql=true
hibernate.hbm2ddl.auto=update
upload.dir=c:/uploads

For nginx conf file:

$ vim ~/spring3hibernate-app/spring3hibernate/nginx/default.conf
server {
    listen       80;
    server_name  localhost;

    location / {
        stub_status on;
        proxy_pass http://springapp1:8080/;

    }
# redirect server error pages to the static page /50x.html
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

}

Move ‘default.conf’ to ~/spring3hibernate-app/spring3hibernate/nginx/conf/qa/, as we have a different conf file for PROD which goes into ~/spring3hibernate-app/spring3hibernate/nginx/conf/prod/:

upstream s3hbackend {
    server springapp1:8080;
    server springapp2:8080;
}
server {
       listen 80;
       location / {
           stub_status on;
           proxy_pass http://s3hbackend;
       }
  
       # redirect server error pages to the static page /50x.html
       error_page   500 502 503 504  /50x.html;
       location = /50x.html {
           root   /usr/share/nginx/html;
       }

}

This is the configuration for the Nginx load balancer. Remember this; we’ll use it later. Let’s create our docker-app now. Make sure you are in the app home directory
when executing this command:

$ docker-app init --single-file s3h

This will create a single file named s3h.dockerapp which will look like this: 

# This section contains your application metadata.
# Version of the application
version: 0.1.0
# Name of the application
name: s3h
# A short description of the application
description:
# List of application maintainers with name and email for each
maintainers:
  - name: ubuntu
    email:


---
# This section contains the Compose file that describes your application services.
version: "3.6"
services: {}


---
# This section contains the default values for your application parameters.

{}

As you can see, this file is divided into three parts: metadata, compose, and parameters. They are all in one file because we used the --single-file switch; we can split them into multiple files with the docker-app split command in the app home directory, and docker-app merge will put them back into one file. Now, for QA, we have the following configuration in the s3h.dockerapp file:

version: 0.1.0
name: s3h
description:
maintainers:
  - name: atbk5
    email: adeel.ahmad@opstree.com


---
version: "3.7"
services:
  mysql:
    image: mysql:5.7
    container_name: mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${mysql.env.rootpass}
      MYSQL_DATABASE: ${mysql.env.database}
      MYSQL_USER: ${mysql.env.user}
      MYSQL_PASSWORD: ${mysql.env.userpass}
    restart: always
    networks:
      - backend
    volumes:
      - db_data:/var/lib/mysql


  spring1:
    depends_on:
      - mysql
    build:
      context: ./spring3hibernate/
      dockerfile: Dockerfile
    container_name: springapp1
    restart: always
    networks:
      - backend
      - frontend


  spring2:
    depends_on:
      - mysql
    build:
      context: ./spring3hibernate/
      dockerfile: Dockerfile
    container_name: springapp2
    restart: always
    networks:
      - backend
      - frontend
    x-enabled: ${spring.app2}


  nginx:
    depends_on:
      - spring1
    image: nginx:alpine
    container_name: proxy
    restart: always
    networks:
      - frontend
    volumes:
      - ${nginx.conf}:/etc/nginx/conf.d
    ports:
      - ${nginx.port}:80
    x-enabled: ${nginx.status}


networks:
  frontend:
  backend:


volumes:
  db_data:


---
mysql:
  env:
    rootpass: password
    database: employeedb
    user: admin
    userpass: password
nginx:
  conf: /home/ubuntu/dockerApp/spring3hibernate/nginx/conf/qa
  port: 81
  status: true
spring:
  app2: false

As mentioned before, the first part contains the app metadata, the second part contains the actual compose file with lots of variables, and the last part contains the values of those variables. A special mention here is the x-enabled variable: docker-app provides the ability to temporarily disable a service using it. Now, try a few commands:

$ docker-app inspect

It will produce a summary of the whole app.

$ docker-app render

It will replace all variables with their values and produce a compose file.

$ docker-app render --set nginx.status="false"

It will remove nginx from the rendered compose file, and hence from the deployment.

$ docker-app render | docker-compose -f - up

It will spin up all the containers according to the rendered compose file. We can then see the application running on port 81 of our machine.
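
A quick way to confirm the stack is up (assuming all the containers started cleanly) is to hit the Nginx port:

$ curl -I http://localhost:81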

$ docker-app --help

To check out more commands and play around a bit.
At this point, it is better to create two directories in the app home, qa and prod, with a file qa-params.yml in qa and a file prod-params.yml in prod. Copy all the parameters from the s3h.dockerapp file above into qa-params.yml (or not).
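
If it helps, that layout can be created with something like this (paths are illustrative):

$ cd ~/spring3hibernate-app
$ mkdir qa prod
$ touch qa/qa-params.yml prod/prod-params.yml

With the directories in place, copy the following parameter changes into prod/prod-params.yml: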

mysql:
  env:
    rootpass: password
    database: employeedb
    user: admin
    userpass: password
nginx:
  conf: /home/ubuntu/dockerApp/spring3hibernate/nginx/conf/prod
  port: 80
  status: true
spring:
  app2: true

We are going to load-balance springapp1 and springapp2 in the PROD environment, since we have enabled springapp2 using the x-enabled parameter. We have also changed the Nginx conf bind path to the new conf directory and the host port for Nginx to 80 (for production). All so easily. Run this command:

$ docker-app render --parameters-file ./prod/prod-params.yml

This command will produce a compose file ready for production deployment. Now run:

$ docker-app render --parameters-file ./prod/prod-params.yml | docker-compose -f - up

And production is deployed… Visit port 80 of your localhost to verify. What’s more exciting is that we can also share our docker-apps through Docker Hub: we can tag the app and push it to our remote registry as an image after logging in:

$ docker login

Provide your username and password for Docker Hub (create an account if you don’t have one yet).

$ docker-app push --tag atbk5/s3h.dockerapp:latest

If we wish to upload additional files as well, we will have to split our project using docker-app split and put the additional files in the resulting directory before pushing. The additional files will be uploaded as attachments that can be accessed later.

Conclusion

With the arrival of docker-app, our large, composite, containerized applications can also be shipped and reused as images. That is cool. But there is something cooler which we haven’t explored yet: deploying our docker-apps on Kubernetes, with the goal of seeing how far we can go in managing and delivering our applications. Let’s keep that as the topic for the next blog. Until then, have a nice one. 🙂


Can you integrate a GitHub Webhook with a Privately Hosted Jenkins? No? Think again

Introduction

One of the most basic requirements of a CI implementation using Jenkins is to automatically trigger a Jenkins job after every commit. As you are already aware, there are two ways in which a Jenkins job can be triggered in an automated fashion:

  • Pull | PollSCM
  • Push | Webhook

It is a no-brainer that a push-based trigger is the most efficient way of triggering a Jenkins job; otherwise you are unnecessarily hogging your resources with polling. One of the hurdles in implementing a push-based trigger is that your VCS and Jenkins server must be able to talk to each other, which in simple terms usually means being on the same network.

In a typical CI setup, there is a SaaS VCS, i.e. GitHub/GitLab, and a privately hosted Jenkins server, which seems to make push-based triggering of Jenkins jobs impossible. Until a few days ago I was under the same impression, until I found an awesome blog that talks about how you can integrate a webhook with your private Jenkins server.

In this blog, I’ll try to explain how I implemented Webhook Relay. Importantly, the reference blog was about integrating Webhook Relay with GitHub; with GitLab there were still some unexplored areas, and I faced some challenges while doing the integration. This motivated me to write a blog so that people have a ready reference on how to integrate GitLab with Webhook Relay.

Overall Workflow

Step 1: Download WebHook Relay Agent on the local system

Copy and execute the command

curl -sSL https://storage.googleapis.com/webhookrelay/downloads/relay-linux-amd64 > relay && chmod +wx relay && sudo mv relay /usr/local/bin

Note: Webhook Relay and the Webhook Relay Agent are different. Webhook Relay is the hosted service on a public IP that is triggered by GitLab, and the Webhook Relay Agent is a local service that gets triggered by Webhook Relay.

Step 2: Create a Webhook Relay Account

After successfully signing up, we land on the Webhook Relay home page.

Step 3: Setting up the Webhook Relay Agent

We have to create an access token.
After navigating to Access Tokens, click on the Create Token button. We are then provided with a key and secret pair.
Copy and execute:

relay login -k token-key -s token-secret
 
 
If it prints a success message, our Webhook Relay Agent is successfully set up.

Step 4: Create a GitLab Repository

We will keep our repository public to keep things simple and understandable. Let’s say our GitLab repository’s name is WebhookProject.

Step 5: Install the GitLab and GitLab Hook Plugins

Go to Manage Jenkins →  Manage Plugins → Available
 

Step 6: Create Jenkins Job

 
Configure job: Add Gitlab repository link
 
Now we’ll choose the build trigger option:
 
 
 
Save the job.

Step 7: Connecting GitLab Repository, Webhook Relay, and Webhook Relay Agent

The final and most important step is to connect the overall flow.

Start forwarding Webhooks to Jenkins

Open terminal and type command:

relay forward --bucket gitlab-jenkins http://localhost:8080/project/webhook-gitlab-test
Note: Bucket name can be anything
 
 
Note: Do not stop this process with Ctrl+C. Open a new terminal or a new tab to commit to GitLab.

The most critical part of the workflow is the link generated by the Webhook Relay Agent. Copy this link and go to the GitLab repository (WebhookProject) → Settings → Integrations.

Paste the link there.
For the sake of simplicity, uncheck Enable SSL Verification and click the Add webhook button.
With that, all the major configuration is done. Now clone the GitLab repository and push a commit to the remote repository (see the example below).
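A test commit could look like this (the repository URL is illustrative; use your own username or group):

$ git clone git@gitlab.com:<your-username>/WebhookProject.git
$ cd WebhookProject
$ echo "webhook test" >> README.md
$ git add README.md
$ git commit -m "Test webhook trigger"
$ git push origin master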
Go to the Jenkins job and see that the build is triggered by the GitLab webhook.
To see the GitLab webhook logs, go to:
GitLab Repository → Settings → Integrations → webhook → Edit
 
 
To see the logs of the Webhook Relay Agent triggering Jenkins, go to:
Webhook Relay UI page → Relay Logs.

So now you know how to set up webhook integration between your VCS and Jenkins even when they are not directly reachable from each other.
Can you integrate a GitHub webhook with a privately hosted Jenkins? Yes.
Cheers till next time!

Gitlab-CI with Nexus

Recently I was asked to set up a CI pipeline for a Spring-based application.
I said “piece of cake”, as I had already worked with Jenkins pipelines and knew Maven, so that wouldn’t be a problem. But there was a hitch: the pipeline had to be a GitLab CI pipeline. I said “no problem, I’ll learn it” with a ninja spirit.
So, for starters, what is a GitLab CI pipeline? Those who have already worked with Jenkins and Maven know the CI workflow of building the code, testing it, packaging it, and deploying it using Maven. You can add other goals too, depending on the requirements.
The CI process in GitLab CI is defined within a file in the code repository itself using a YAML configuration syntax.
The work is then dispatched to machines called runners, which are easy to set up and can be provisioned on many different operating systems. When configuring runners, you can choose between different executors like Docker, shell, VirtualBox, or Kubernetes to determine how the tasks are carried out.

What are we going to do?
We will establish a CI/CD pipeline using GitLab CI and deploy artifacts to a Nexus repository.

Resources Used:

  1. GitLab server; I’m using gitlab.com to host my code.
  2. Runner server; it could be a Vagrant box or an EC2 instance.
  3. Nexus server; it could be a Vagrant box or an EC2 instance.

     Before going further, let’s get familiar with a few terms.

  • Artifacts: objects created by a build process, usually project JARs and library JARs. These can also include use cases, class diagrams, requirements, and design documents.
  • Maven Repository (Nexus): a repository is a location where all the project JARs, library JARs, plugins, and any other project-specific artifacts are stored and can be easily consumed by Maven; here we are going to use Nexus as the central repository.
  • CI: A software development practice in which you build and test software every time a developer pushes code to the application, and it happens several times a day.
  • Gitlab-runner: GitLab Runner is the open source project that is used to run your jobs and send the results back to GitLab. It is used in conjunction with GitLab CI, the open-source continuous integration service included with GitLab that coordinates the jobs.
  • .gitlab-ci.yml: The YAML file defines a set of jobs with constraints stating when they should be run. You can specify an unlimited number of jobs which are defined as top-level elements with an arbitrary name and always have to contain at least the script clause. Whenever you push a commit, a pipeline will be triggered with respect to that commit.

Strategy to Setup Pipeline

Step-1:  Setting up the GitLab Repository

I’m using a Spring-based codebase, Spring3Hibernate, with a directory structure like the one below.
$    cd spring3hibernateapp
$    ls
     pom.xml pom.xml~ src

# Now lets start pushing this code to gitlab

$    git remote -v
     origin git@gitlab.com:<username>/spring3hibernateapp.git (fetch)
     origin git@gitlab.com:<username>/spring3hibernateapp.git (push)
# Adding the code to the working directory

$    git add -A
# Committing the code

$    git commit -m "[Master][Add] Adding the code "
# Pushing it to gitlab

$    git push origin master

Step-2:  Install GitLab Runner manually on GNU/Linux

# Simply download one of the binaries for your system:

$    sudo wget -O /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-386

# Give it permissions to execute:

$    sudo chmod +x /usr/local/bin/gitlab-runner 

# Optionally, if you want to use Docker, install Docker with:

$    curl -sSL https://get.docker.com/ | sh 

# Create a GitLab CI user:

$    sudo useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash 

# Install and run as service:

$    sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
$    sudo gitlab-runner start
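
# As a quick sanity check before registering, you can confirm the runner
# service is running (exact output varies by version):

$    sudo gitlab-runner status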

Step-3: Registering a Runner

To get the runner configuration, you need to go to GitLab > spring3hibernateapp > Settings > CI/CD > Runners
and get the registration token for runners.

# Run the following command:

$     sudo gitlab-runner register
       Runtime platform                                    arch=amd64 os=linux pid=1742 revision=3afdaba6 version=11.5.0
       Running in system-mode.                             

# Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):

https://gitlab.com/

# Please enter the gitlab-ci token for this runner:

****8kmMfx_RMr****

# Please enter the gitlab-ci description for this runner:

[gitlab-runner]: spring3hibernate

# Please enter the gitlab-ci tags for this runner (comma separated):

build
       Registering runner... succeeded                     runner=ZP3TrPCd

# Please enter the executor: docker, docker-ssh, shell, ssh, virtualbox, docker+machine, parallels, docker-ssh+machine, kubernetes:

docker

# Please enter the default Docker image (e.g. ruby:2.1):

maven       
       Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!

# You can also create a systemd service at /etc/systemd/system/gitlab-runner.service:

[Unit]
Description=GitLab Runner
After=syslog.target network.target
ConditionFileIsExecutable=/usr/local/bin/gitlab-runner

[Service]
StartLimitInterval=5
StartLimitBurst=10
ExecStart=/usr/local/bin/gitlab-runner "run" "--working-directory" "/home/gitlab-runner" "--config" "/etc/gitlab-runner/config.toml" "--service" "gitlab-runner" "--syslog" "--user" "gitlab-runner"
Restart=always
RestartSec=120

[Install]
WantedBy=multi-user.target

Step-4: Setting up the Nexus Repository
To set up a repository with the open-source version of Nexus, you can visit Nexus OSS and download the TGZ or ZIP version.
But to keep it simple, I used a Docker container for that.
# Install docker

$    curl -sSL https://get.docker.com/ | sh

# Launch a NEXUS container and bind the port

$    docker run -d -p 8081:8081 --name nexus sonatype/nexus:oss

You can now access your Nexus at http://<nexus-host>:8081/nexus
and log in as admin with the password admin123.
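
Nexus can take a minute or two to start; if the page isn’t reachable yet, you can watch the container logs while it comes up:

$    docker logs -f nexus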

Step-5: Configure the NEXUS deployment

Clone your code and enter the repository

$    cd spring3hibernateapp/

# Create a folder called .m2 in the root of your repository

$    mkdir .m2

# Create a file called settings.xml in the .m2 folder

$    touch .m2/settings.xml

# Copy the following content in settings.xml

<settings xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.1.0 http://maven.apache.org/xsd/settings-1.1.0.xsd"
    xmlns="http://maven.apache.org/SETTINGS/1.1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <servers>
    <server>
      <id>central</id>
      <username>${env.NEXUS_REPO_USER}</username>
      <password>${env.NEXUS_REPO_PASS}</password>
    </server>
    <server>
      <id>snapshots</id>
      <username>${env.NEXUS_REPO_USER}</username>
      <password>${env.NEXUS_REPO_PASS}</password>
    </server>
  </servers>
</settings>
The username and password will be replaced with the correct values through CI variables.
# Updating the repository paths in pom.xml (distributionManagement section)

<distributionManagement>
  <repository>
    <id>central</id>
    <name>Central</name>
    <url>${env.NEXUS_REPO_URL}central/</url>
  </repository>
  <snapshotRepository>
    <id>snapshots</id>
    <name>Snapshots</name>
    <url>${env.NEXUS_REPO_URL}snapshots/</url>
  </snapshotRepository>
</distributionManagement>

Step-6: Configure GitLab CI/CD for a simple Maven deployment

GitLab CI/CD reads job definitions from a file named .gitlab-ci.yml in the root of the repo; the jobs are executed by the configured GitLab Runners.
First of all, remember to set up the variables for your deployment. Navigate to your project’s Settings > CI/CD > Variables page and add NEXUS_REPO_URL, NEXUS_REPO_USER, and NEXUS_REPO_PASS (replace the values with your own, of course).
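
These are the same variables referenced in settings.xml and pom.xml above. If you want to sanity-check the deployment outside CI, you can export them locally and run the deploy goal yourself; the values below are purely illustrative and depend on your Nexus host and repository names:

$    export NEXUS_REPO_URL="http://<nexus-host>:8081/nexus/content/repositories/"
$    export NEXUS_REPO_USER="admin"
$    export NEXUS_REPO_PASS="admin123"
$    mvn -s .m2/settings.xml deploy -Dmaven.test.skip=true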

Now it’s time to define jobs in .gitlab-ci.yml and push it to the repo:

image: maven

variables:
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

cache:
  paths:
    - .m2/repository/
    - target/

stages:
  - build
  - test
  - package 
  - deploy

codebuild:
  tags:
    - build      
  stage: build
  script: 
    - mvn compile

codetest:
  tags:
    - build
  stage: test
  script:
    - mvn $MAVEN_CLI_OPTS test
    - echo "The code has been tested"

Codepackage:
  tags:
    - build
  stage: package
  script:
    - mvn $MAVEN_CLI_OPTS package -Dmaven.test.skip=true
    - echo "Packaging the code"
  artifacts:
    paths:
      - target/*.war
  only:
    - master  

Codedeploy:
  tags:
    - build
  stage: deploy
  script:
    - mvn $MAVEN_CLI_OPTS deploy -Dmaven.test.skip=true
    - echo "Deploying the package to the Nexus repository"
  only:
    - master

Now add the changes, commit them, and push them to the remote repository on GitLab. A pipeline will be triggered for your commit, and if everything goes well, our mission is accomplished.
Note: you might hit some issues with Maven plugins, which will need to be managed in pom.xml depending on the environment.
In this blog, we covered the basic steps to use a Nexus Maven repository to automatically publish and consume artifacts.