Achieve SSO in Privately Hosted Jenkins

Introduction

Providing OAuth 2.0 user authentication directly or using Google+ Sign-in reduces your CI overhead. It also provides a trusted and secure login system that’s familiar to users, consistent across devices, and removes the burden of users having to remember yet another username and password. One of the hurdles in implementing Gmail authentication is that the Google developer console and your Jenkins server should be in the same network or, in simple terms, they should be able to talk to each other.

Resources Used

  • Privately Hosted Jenkins
  • Google developer console
  • Ngrok
In this blog, I’ll explain how to integrate Gmail authentication into your privately hosted Jenkins server so that you no longer have to fill in a form every time you create a new user.

Step 1: Setup Ngrok

NGROK
 
Ngrok is a multiplatform tunnelling, reverse proxy software that establishes secure tunnels from a public endpoint such as the internet to a locally running network service, while capturing all traffic for detailed inspection and replay.
We are using Ngrok to expose our Jenkins service (running on port 8080) on a public URL.

 
Go to Google and search for “Download Ngrok”.
 
 
 
Either log in with a Google account or sign up for an Ngrok account.
 
 
After logging in to Ngrok, download it.
 
 
After downloading Ngrok, go to the console, unzip the downloaded zip file, and then move the binary to /usr/local/bin.
Note: The moving part is optional; we do it so that ngrok can be run from anywhere.
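A minimal sketch of these two steps, assuming the archive was downloaded to the current directory (the archive name below is the usual Linux one and may differ for your platform):

# Unzip the downloaded archive and move the binary onto the PATH
unzip ngrok-stable-linux-amd64.zip
sudo mv ngrok /usr/local/bin/
ngrok version        # confirm it can now be run from anywhere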
 
 
 
Go to the Ngrok UI page, copy the authentication key, and register it with your local ngrok installation.
Note: Drop the ” ./ ” prefix from the command shown on the site, because we moved the ngrok binary to /usr/local/bin.
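For example (the token below is a placeholder; use the one shown on your Ngrok dashboard):

# Register the auth token with the local ngrok installation
ngrok authtoken <YOUR_NGROK_AUTH_TOKEN>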
 
 
The major configuration for Ngrok is done. Now type the command:
ngrok http 8080
assuming that Jenkins is running on port 8080.
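Before starting the tunnel you can quickly confirm that Jenkins is actually listening locally (a simple sanity check, assuming the default port):

# Should return HTTP response headers from Jenkins
curl -I http://localhost:8080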
 
 
Ngrok now exposes our Jenkins service on a public URL.
 
Copy this URL; we will use it in the Google developer console.
 
Note: Keep this terminal up and running (don’t press Ctrl+C).

Step 2: Setup Google Developer Console

Go to Google and search for “Google developer console”.
 
 
After signing in to the Google developer console, we will be redirected to the console’s UI screen.
Go to Select a project  → New Project
 
 
Give the project a name; here I will use “JenkinsGmailAuthentication”, and create the project. Creating a project takes a minute or two.
 
 
After the project is created, we will be redirected to the UI page shown below. Now click on the “Credentials” tab on the left sidebar.
 
 
 
Then go to the OAuth consent screen tab and fill in the entries below. Here I will set the Application name to “JenkinsGmailAuthentication”.
 
 
The important part of the Google developer console setup is the public URL we created using Ngrok. Copy it into Authorized domains, and note that you must remove the ” http:// ” prefix there.
 
 
After setting up the OAuth consent screen, go to the “Credentials” tab → Create Credentials → OAuth client ID.
 
 
Select the Application type as Web application and give it the name “JenkinsGmailAuthentication”.
The major part of creating the credential is filling in Authorized JavaScript origins and Authorized redirect URIs.
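For example, if the Ngrok URL were https://abc123.ngrok.io (a hypothetical value; substitute your own), the entries would look roughly like this. The /securityRealm/finishLogin path is the callback used by the Jenkins Google Login plugin, so verify it against the plugin documentation:

# Authorized JavaScript origins
https://abc123.ngrok.io
# Authorized redirect URIs
https://abc123.ngrok.io/securityRealm/finishLogin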
 
 
Copy the Client ID and Client Secret because we are going to use them in Jenkins.
 

Step 3: Setup Jenkins

I am assuming that Jenkins is already installed in your system.
Go to Manage Jenkins → Manage Plugins→ Available
 
 
Search for “Google Login Plugin” and add it.
 
 
Go to Manage Jenkins → Configure Global Security
 
 
The major part of the Jenkins setup is Configure Global Security.
Check Enable security → Login with Google, paste the Client ID and Client secret generated in the Create Credentials step, and save.
 
 
Up to here, we are done with the Setup part.
Now click the login button on the Jenkins UI; you will be redirected to Gmail for login.
 
 
Select the account from which you want to log in.
 
 
After selecting the account you will be redirected to Jenkins, logged in as the selected user.
 
 
You may face a problem when you log in again.
Log out from the current user and log in again.
 
 
After being redirected to Gmail, select another user.
 
 
After selecting the user you will be redirected to an error page showing: HTTP ERROR 404.
 
 
Don’t worry; you just have to remove “securityRealm/” from the URL, or enter “localhost:8080” again.
 
 
You are logged in with the selected user.
 
 
So now you know how to set up Gmail authentication between the Google developer console and Jenkins even when they are not directly reachable from each other.
The main bridge between the two is Ngrok, which exposes our privately hosted Jenkins to the outside internet.
 
 
 

Jenkins Pipeline Global Shared Libraries

Although the coding language used here is Groovy, Jenkins does not allow us to use Groovy to its fullest, so we can say that Jenkins Pipelines are not exactly Groovy. Classes that you write in src are processed in a “special Jenkins way” over which you have no control, and in various scenarios Groovy objects don’t behave as you would expect them to.

Our view is that putting all pipeline functions in vars is a much more practical approach. While this leaves no good way to do inheritance, and we wanted to use Jenkins Pipelines “the right way”, it has turned out to be far more practical to use vars for global functions.

Practical Strategy
As we know, Jenkins Pipeline’s shared library support allows us to define and develop a set of shared pipeline helpers in a repository and provides a straightforward way of using those functions in a Jenkinsfile. This simple example illustrates how you can provide input to a pipeline with a simple YAML file so you can centralize all of your pipelines into one library. The shared library, and the example app that uses it, are laid out below.

Directory Structure

You would have the following folder structure in a git repo:

└── vars
    ├── opstreePipeline.groovy
    ├── opstreeStatefulPipeline.groovy
    ├── opstreeStubsPipeline.groovy
    └── pipelineConfig.groovy

Setting up the Library in the Jenkins Console

This repo would be configured under Manage Jenkins > Configure System in the Global Pipeline Libraries section. In that section Jenkins requires you to give the library a name, for example opstree-library.

Pipeline.yaml

Let’s assume that the project repository has a pipeline.yaml file in the project root that provides input to the pipeline:

ENVIRONMENT_NAME: test
SERVICE_NAME: opstree-service
DB_PORT: 3079
REDIS_PORT: 6079

Jenkinsfile

Then, to utilize the shared pipeline library, the Jenkinsfile in the root of the project repo would look like:

@Library ('opstree-library@master') _
opstreePipeline()

pipelineConfig.groovy

So how does it all work? First, the following function is called to get all of the configuration data from the pipeline.yaml file:

def call() {
  // Read pipeline.yaml from the job workspace (readYaml is provided by the Pipeline Utility Steps plugin)
  Map pipelineConfig = readYaml(file: "${WORKSPACE}/pipeline.yaml")
  return pipelineConfig
}

opstreePipeline.groovy

You can see the call to this function in opstreePipeline(), which is called by the Jenkinsfile.

def call() {
    node('Slave1') {

        stage('Checkout') {
            checkout scm
        }

        // Load the per-project configuration from pipeline.yaml
        def p = pipelineConfig()

        stage('Prerequisites') {
            serviceName = sh (
                    script: "echo ${p.SERVICE_NAME}|cut -d '-' -f 1",
                    returnStdout: true
                ).trim()
        }

        stage('Build & Test') {
                sh "mvn --version"
                sh "mvn -Ddb_port=${p.DB_PORT} -Dredis_port=${p.REDIS_PORT} clean install"
        }

        stage ('Push Docker Image') {
            docker.withRegistry('https://registry-opstree.com', 'dockerhub') {
                sh "docker build -t opstree/${p.SERVICE_NAME}:${BUILD_NUMBER} ."
                sh "docker push opstree/${p.SERVICE_NAME}:${BUILD_NUMBER}"
            }
        }

        stage ('Deploy') {
            echo "We are going to deploy ${p.SERVICE_NAME}"
            sh "kubectl set image deployment/${p.SERVICE_NAME} ${p.SERVICE_NAME}=opstree/${p.SERVICE_NAME}:${BUILD_NUMBER} "
            sh "kubectl rollout status deployment/${p.SERVICE_NAME} -n ${p.ENVIRONMENT_NAME} "
        }
    }
}

You can follow the logic easily here: the pipeline reads from pipeline.yaml which environment the developer wants to deploy to and which db_port needs to be used.

Benefits

The benefits of this approach are many, some of them are as mentioned below:

  • Developers no longer need to worry about how to write Groovy code.
  • The structure of pipeline.yaml is really flexible; entire data structures can be passed as input to the pipeline.
  • Code redundancy is reduced to a large extent.

Ideally, Jenkinsfiles could all just look like this:

@Library ('opstree-library@master') _
opstreePipeline()

and opstreePipeline() would just read the project type from pipeline.yaml and dynamically run the right function, such as opstreeStatefulPipeline() or opstreeStubsPipeline(). Since pipelines are not exactly Groovy, this isn’t possible, so one drawback is that each project has to have a different-looking Jenkinsfile. The solution is in progress! So, what do you think?


What Without Internet


I had a dream a few days ago in which the internet no longer existed. When I woke up, I thought: what would happen if there were no internet for a day?


Sure, it would cause quite a bit of panic and uproar, and it would be havoc for an organization to work without the internet, but if the internet resumed normally after the 24 hours were over, things would return to normal pretty quickly.


Now, switch it off for a longer time, possibly a week or a month, that would have a more lasting impact, since, in that time, a significant number of people would find themselves unable to meet their obligations or do their business at all. This would be somewhat mitigated by the fact that the situation is a sort of a ‘natural disaster’, but still, those who really depend on the internet for their business would likely feel a lasting negative impact.
               
What if I told you there are some organizations that work as if there is no internet? Yes, it’s true: for security reasons they prefer not to use the public internet. Banks, space organizations, and many security agencies fall under this category.


Now a question arises: how do they manage regular updates and the installation of different packages on their various systems? The answer is quite simple: “the use of a satellite server”.


Recently I got a task in relation to this context, in which:

1. A prerequisite here is that you don’t have internet connectivity in your system but one of the systems with which you can connect has internet connectivity.
2. Setup individual satellite server in your local network.
3. Install packages and regular updates.


To do so, here I prefer to use an FTP satellite server.

How to implement an FTP satellite server

Pre-requisites

An Ubuntu server, and a non-root user with sudo privileges.
The system should be configured with vsftpd.
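A minimal sketch of getting vsftpd in place on the machine that does have internet access (the configuration values below are just one possible anonymous, read-only setup):

# Install the FTP server
sudo apt-get update
sudo apt-get install -y vsftpd
# In /etc/vsftpd.conf, enable anonymous read-only access to the mirror directory, e.g.:
#   anonymous_enable=YES
#   anon_root=/var/www/html
sudo systemctl restart vsftpd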

Suppose we are doing the installation of Jenkins

Make a directory named pkg.jenkins.io in /var/www/html/.
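For example (a sketch; mirroring with wget and the upstream repository URL are assumptions, adjust them to whatever package set you need offline):

# Create the local repository directory
sudo mkdir -p /var/www/html/pkg.jenkins.io
cd /var/www/html/pkg.jenkins.io
# On the machine with internet access, mirror the upstream Jenkins Debian repository into it
sudo wget --mirror --no-parent https://pkg.jenkins.io/debian-stable/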



 Contents of pkg.jenkins.io

Paste the host link of the Debian repository into a file under /etc/apt/sources.list.d/.
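For example (a sketch; the server IP is a placeholder, [trusted=yes] is only needed because the local mirror is unsigned, and the “binary/” component follows the upstream Jenkins repository layout — use ftp:// instead of http:// if the directory is served purely over FTP):

# Point apt on the offline machine at the local mirror
echo "deb [trusted=yes] http://192.168.1.10/pkg.jenkins.io binary/" | sudo tee /etc/apt/sources.list.d/jenkins-local.list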

 

Run the command sudo apt-get update.

Now run the command to install the package.
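Concretely, for the Jenkins example above this would look like:

sudo apt-get update
sudo apt-get install jenkins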

 


“ The internet made fame wack and anonymity cool ”

So far, from the above context, we have learned about setting up FTP for users with a local account. If you need to use an external authentication source, you might want to look into vsftpd’s support for virtual users. This offers a rich set of options through the use of PAM, the Pluggable Authentication Modules, and is a good choice if you manage users in another system such as LDAP or Kerberos.

I hope I explained everything clearly enough to understand. If you have some better way of implementing a satellite server please help me to improve this blog.

Thanks for reading my writing. I’d really appreciate any kind of feedback in the comments.

Cheers till next time!!!!

Can you integrate a GitHub Webhook with Privately Hosted Jenkins? No? Think again

Introduction

One of the most basic requirements of a CI implementation using Jenkins is to automatically trigger a Jenkins job after every commit. As you are already aware, there are two ways in which a Jenkins job can be triggered in an automated fashion:

  • Pull | PollSCM
  • Push | Webhook

It is a no-brainer that a push-based trigger is the most efficient way of triggering a Jenkins job; otherwise you would be unnecessarily hogging your resources. One of the hurdles in implementing a push-based trigger is that your VCS and Jenkins server should be in the same network or, in simple terms, they should be able to talk to each other.

In a typical CI setup, there is a SaaS VCS, i.e. GitHub/GitLab, and a privately hosted Jenkins server, which makes push-based triggering of Jenkins jobs impossible. Until a few days back I was under the same impression, until I found an awesome blog that talks about how you can integrate a webhook with your private Jenkins server.

In this blog, I’ll try to explain how I implemented the Webhook Relay. Most importantly, the reference blog covered the integration of Webhook Relay with GitHub; with GitLab there were still some unexplored areas, and I faced some challenges while doing the integration. This motivated me to write a blog so that people have a ready reference on how to integrate GitLab with Webhook Relay.

Overall Workflow

Step 1: Download WebHook Relay Agent on the local system

Copy and execute the command

curl -sSL https://storage.googleapis.com/webhookrelay/downloads/relay-linux-amd64 > relay && chmod +wx relay && sudo mv relay /usr/local/bin

Note: Webhook Relay and the Webhook Relay agent are different. Webhook Relay runs on a public IP and is triggered by GitLab, while the Webhook Relay agent is a local service that gets triggered by Webhook Relay.

Step 2: Create a Webhook Relay Account

After successfully signing up we will land on the Webhook Relay home page.

Step 3: Setting up the Webhook Relay Agent.

We have to create an access token.
After navigating to Access Tokens, click on the Create Token button. We are then provided with a key and secret pair.
Copy and execute:

relay login -k token-key -s token-secret
 
 
If it prints a success message, our Webhook Relay agent is successfully set up.

Step 4: Create GitLab Repository

We will keep our repository a public one to keep things simple and understandable. Let’s say our Gitlab repository’s name is  WebhookProject.

Step 5: Install GitLab and GitLab Hook Plugin.

Go to Manage Jenkins →  Manage Plugins → Available
 

Step 6: Create Jenkins Job

 
Configure job: Add Gitlab repository link
 
Now we’ll choose the build trigger option:
 
 
 
Save the job.

Step 7: Connecting GitLab Repository, Webhook Relay, and Webhook Relay Agent

The final and most important step is to Connect the Overall flow.

Start forwarding Webhooks to Jenkins

Open terminal and type command:

relay forward --bucket gitlab-jenkins http://localhost:8080/project/webhook-gitlab-test
Note: Bucket name can be anything
 
 
Note: Do not stop this process (don’t press Ctrl+C). Open a new terminal or a new tab to commit to GitLab.

The most critical part of the workflow is the link generated by the Webhook Relay agent. Copy this link and go to the GitLab repository (WebhookProject) → Settings → Integrations.

Paste the link.
For the sake of simplicity, uncheck Enable SSL verification and click the Add webhook button.
By now all the major configuration has been done. Now clone the GitLab repository and push a commit to the remote repository, as shown below.
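For reference, a minimal sequence to trigger the webhook (the repository URL is a placeholder for your own project):

git clone https://gitlab.com/<your-user>/WebhookProject.git
cd WebhookProject
git commit --allow-empty -m "Test webhook trigger"
git push origin master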
Then go to the Jenkins job and see that the build has been triggered by the GitLab webhook.
To see the GitLab webhook logs, go to:
GitLab repository → Settings → Integrations → webhook → Edit
 
 
To see the logs of the Webhook Relay agent triggering Jenkins, go to:
Webhook Relay UI page → Relay Logs.

So now you know how to do WebHook integration between your VCS & Jenkins even when they are not directly reachable to each other.
Can you integrate a GitHub Webhook with Privately hosted Jenkins? Yes
Cheers Till Next Time!!!!

The Reset Button !!!!


Anyone who has recently used Google Compute Engine to create VM instances will be aware of the Reset button available there.

Since I wasn’t very sure what it did, I just clicked it without much know-how. This reset all the servers to their original state, as if they were freshly built, which is certainly a very bad thing for us.

But we had Puppet, which we had used to create the whole infrastructure as it was. All the modules we had used and the changes we had made were committed to a GitHub repo, and this certainly was a boon to us; otherwise we would have had to sit a whole day long making those changes on the servers again.

In just a couple of minutes the new instances were created using the Compute Engine create-instance-group feature. We installed Foreman and Git on one of the servers and set up the Puppet agents accordingly. This took around 15 more crucial minutes, and then we cloned our GitHub repo, which contains all the necessary modules and configurations required for the rest of the infrastructure.

These are the conditions where configuration management tools like Puppet come into the picture and help us get back on track in the shortest possible time.
It was a hectic day but it definitely taught us several important lessons. Using Puppet to maintain the infrastructure is really important nowadays. It is reliable, efficient, and fast for deploying configurations on the servers and making them ready for the production workload.

Setup Jenkins using Ansible

In this document I’ll walk you through how you can setup Jenkins using Ansible.

Prerequisites

  • OS – Ubuntu (at least two machines required in production)
  • First machine for the Ansible installation
  • Second machine where we will install the Jenkins server
  • You should have a basic understanding of the Ansible workflow.
Note: You should have passwordless login enabled on the second machine; use this link:
http://www.linuxproblem.org/art_9.html
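A quick sketch of enabling passwordless SSH from the Ansible machine to the Jenkins machine (the user and IP below are placeholders):

# On the Ansible machine: generate a key pair (if you don't already have one) and copy it across
ssh-keygen -t rsa
ssh-copy-id ubuntu@192.168.33.15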

Ansible Installation
Before starting with installing Jenkins using Ansible, you need to have Ansible installed in your system.

$ curl https://raw.githubusercontent.com/OpsTree/AnsiblePOC/alok/scripts/Setup/setup_ansible.sh | sudo bash

Setup Jenkins using Ansible
Install the Jenkins Ansible role
Once we have Ansible installed on our system, we can start installing Jenkins using Ansible. To do so we will use an already available Ansible role to set up Jenkins:

$ ansible-galaxy install geerlingguy.jenkins
To know more about the Jenkins role, see this link: https://galaxy.ansible.com/detail#/role/440

The default directory path for Ansible roles is /etc/ansible/roles.
Make the Ansible playbook file

Now the next step is to use the installed Jenkins role to install Jenkins. For this purpose we will create a playbook and a hosts file with the content below.

$ cd ~/MyPlaybook/jenkins
Create a file named hosts and add the content below:
[jenkins_hosts]
192.168.33.15
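Before running the playbook you can verify that Ansible can reach the host (assuming the passwordless SSH setup mentioned above):

ansible -i hosts jenkins_hosts -m ping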

Next create a file named site.yml and add the content below:
- hosts: jenkins_hosts
  roles:
    - { role: geerlingguy.jenkins }

So the configuration file is done; the next step is to run the ansible-playbook command:

$ ansible-playbook -i hosts site.yml

Now that Jenkins is running, go to http://192.168.33.15:8080. You’ll be welcomed by the default Jenkins screen.

Opstree SHOA Part 1: Build & Release

At Opstree we have started a new initiative called SHOA, Saturday Hands-On Activity. Under this program we pick up a concept, tool, or technology and do a hands-on activity. Whatever we do during the day is then followed by a blog or series of blogs covering what we have understood.
 
Since this is the first Hands-On Activity, we are starting with Build & Release.

What we intend to do 

Setup Build & Release for project under git repository https://github.com/OpsTree/ContinuousIntegration.

What all we will be doing to achieve it

  • Finalize an SCM tool that we are going to use: Puppet/Chef/Ansible.
  • Automated setup of Jenkins using SCM tool.
  • Automated setup of Nexus/Artifactory/Archiva using SCM tool.
  • Automated setup of Sonar using SCM tool.
  • Dev environment setup using the SCM tool: since this is a web app project, our Dev environment will have Nginx & Tomcat.
  • QA environment setup using the SCM tool: since this is a web app project, our QA environment will have Nginx & Tomcat.
  • Creation of various build jobs
    • Code Stability Job.
    • Code Quality Job.
    • Code Coverage Job.
    • Functional Test Job on dev environment.
  • Creation of release Job.
  • Creation of deployment job to do deployment on Dev & QA environment.
This activity is open to the public as well, so if you have any suggestions or you want to attend, you are most welcome.