Lab: Externalizing Application Configuration
Background: Application Configuration with ConfigMaps
Most applications require configuration via environment variables, configuration files, or command-line arguments. These configuration artifacts should be externalized from the application and the Docker image content in order to keep the image portable across environments.
In previous labs, the nationalparks application was configured with database credentials using environment variables. While environment variables are a useful way to configure applications, it is difficult to manage hundreds of environment variables scattered across the various containers in a project. Fortunately, OpenShift provides a convenient and platform-independent mechanism for configuring applications, called the ConfigMap.
The ConfigMap object in OpenShift provides configuration data to the application container while keeping application images both portable across environments and independent of OpenShift Container Platform. A ConfigMap can be used to store key-value properties, configuration files, JSON blobs, and the like.
In this lab, you will replace the environment variables provided in the previous labs and use a ConfigMap instead to configure the nationalparks application.
Exercise: Create a ConfigMap
You can create a ConfigMap by pointing at a file containing the application configuration. Download this properties file, which contains the database credentials, to your local machine:
http://gitlab-ce-workshop-infra.cloudapps.sds.openshift3roadshow.com/userXX/nationalparks/raw/1.2.0/ose3/application-dev.properties
Verify that the contents of the file are correct. If you downloaded the file with Internet Explorer, it might contain incorrect characters or a different character set. Use Google Chrome, Firefox, or curl instead.
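A quick way to sanity-check the file from a terminal: every non-comment line should be a plain key=value pair. The heredoc below merely recreates the expected content locally for illustration; in the lab you would run the grep against the file you actually downloaded:

```shell
# Recreate the expected properties file locally (for illustration only).
cat > application-dev.properties <<'EOF'
# NationalParks MongoDB
mongodb.server.host=mongodb
mongodb.user=mongodb
mongodb.password=mongodb
mongodb.database=mongodb
EOF

# Flag any line that is neither a comment nor a key=value pair;
# a corrupted download (wrong charset, HTML error page) will show up here.
grep -vE '^(#|[A-Za-z0-9_.]+=)' application-dev.properties || echo "file looks OK"
```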
Create a ConfigMap using the following command in the explore-XX project:
$ oc create configmap nationalparks --from-file=application.properties=./application-dev.properties
The --from-file option specifies a key-value pair, with the key used as the name of the file provided to the application and the value as the content of the file. In the above command, the content of the application-dev.properties file will be provided to the application container as a properties file called application.properties.
List and verify that the ConfigMap was created successfully and contains the database credentials:
$ oc describe configmap nationalparks
Name:           nationalparks
Namespace:      explore-XX
Labels:         <none>
Annotations:    <none>

Data
====
application.properties: 123 bytes
You can review the content of the ConfigMap using the oc get command:
$ oc get configmap nationalparks -o yaml
apiVersion: v1
data:
  application.properties: |
    # NationalParks MongoDB
    mongodb.server.host=mongodb
    mongodb.user=mongodb
    mongodb.password=mongodb
    mongodb.database=mongodb
kind: ConfigMap
metadata:
  creationTimestamp: 2016-11-16T09:17:02Z
  name: nationalparks
  namespace: explore-XX
  resourceVersion: "8421"
  selfLink: /api/v1/namespaces/explore-XX/configmaps/nationalparks
  uid: 6f4536cf-abdd-11e6-9282-525400c3c0db
Exercise: Wire the ConfigMap inside nationalparks Container
Configuration data can be consumed in pods in a variety of ways. A ConfigMap can be used to:
- Populate the values of environment variables
- Set command-line arguments in a container
- Populate configuration files in a volume
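As an illustration of the first option, a container can pull a single key out of a ConfigMap into an environment variable. This is only a sketch: the MONGODB_SERVER_HOST variable and the server.host key are hypothetical, since the lab's ConfigMap actually stores a whole properties file under the single key application.properties:

```yaml
# Hypothetical pod spec fragment: populating an environment variable
# from an individual ConfigMap key (not used by this lab).
spec:
  containers:
  - name: nationalparks
    env:
    - name: MONGODB_SERVER_HOST
      valueFrom:
        configMapKeyRef:
          name: nationalparks
          key: server.host
```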
The nationalparks Spring Boot application can be configured through a properties file called application.properties, which must reside in a specific location in the container filesystem. Use the following command to mount the ConfigMap inside the nationalparks pod:
$ oc set volumes dc/nationalparks --add -m /deployments/config --configmap-name=nationalparks
The above command makes the content of the nationalparks ConfigMap, which you created from a file, available as application.properties in the /deployments/config directory. The nationalparks DeploymentConfig detects the configuration change and automatically deploys the pod with the new configuration.
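Under the hood, the oc set volumes command adds a volume and a volume mount to the DeploymentConfig. The resulting stanza should look roughly like the sketch below; the volume name is auto-generated, so "volume-1" is a placeholder:

```yaml
# Approximate DeploymentConfig fragment added by `oc set volumes`;
# the volume name "volume-1" is a placeholder for the generated name.
spec:
  template:
    spec:
      containers:
      - name: nationalparks
        volumeMounts:
        - mountPath: /deployments/config
          name: volume-1
      volumes:
      - name: volume-1
        configMap:
          name: nationalparks
```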
Also, now that we have configured nationalparks through the ConfigMap, you can remove the database environment variables set in the previous labs:
$ oc env dc/nationalparks MONGODB_USER- MONGODB_PASSWORD- MONGODB_DATABASE- MONGODB_SERVER_HOST-
You have now externalized the nationalparks configuration. Visit the nationalparks web service to verify that the database connection is working correctly:
http://nationalparks-explore-XX.cloudapps.sds.openshift3roadshow.com/ws/data/all/
If you check the new Pod’s logs once it comes up, you should see no errors.
Lab: Application Health
Background: Readiness and Liveness Probes
As we have seen from warnings in the web console, OpenShift has a concept of application health checks. These come in two flavors:
- Readiness probe
- Liveness probe
From the Application Health section of the documentation, we see the definitions:

- Liveness Probe: A liveness probe checks if the container in which it is configured is still running. If the liveness probe fails, the kubelet kills the container, which is then subjected to its restart policy. Set a liveness check by configuring the template.spec.containers.livenessProbe stanza of a pod configuration.
- Readiness Probe: A readiness probe determines if a container is ready to service requests. If the readiness probe fails, the endpoints controller ensures the container has its IP address removed from the endpoints of all services. A readiness probe can thus signal to the endpoints controller that, even though a container is running, it should not receive traffic from a proxy. Set a readiness check by configuring the template.spec.containers.readinessProbe stanza of a pod configuration.
It sounds complicated, but it really isn’t. We will use the web console to add these probes to our nationalparks application.
Exercise: Add Health Checks
As we are going to be implementing a realistic CI/CD pipeline, we will be doing some testing of the "development" version of the application. However, in order to test the app, it must be ready. This is where OpenShift’s application health features come in very handy.
We are going to add both a readiness and a liveness probe to the existing nationalparks deployment. This will ensure that OpenShift does not add any instances to the service until they pass the readiness checks, and that unhealthy instances are restarted if they fail the liveness checks.
Click Applications → Deployments in the left sidebar, then click nationalparks. You will see the warning about health checks, with a link to click in order to add them. Click Add health checks now.
You will want to click both Add Readiness Probe and Add Liveness Probe and then fill them out as follows:
Readiness Probe
- Path: /ws/healthz/
- Initial Delay: 20
- Timeout: 1

Liveness Probe
- Path: /ws/healthz/
- Initial Delay: 120
- Timeout: 1
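For reference, the same probes can also be expressed directly in the DeploymentConfig. A rough sketch of the stanza the console form above produces, assuming the application listens on port 8080 as in earlier labs:

```yaml
# Approximate probe configuration matching the web console values above;
# the container port (8080) is an assumption based on earlier labs.
readinessProbe:
  httpGet:
    path: /ws/healthz/
    port: 8080
  initialDelaySeconds: 20
  timeoutSeconds: 1
livenessProbe:
  httpGet:
    path: /ws/healthz/
    port: 8080
  initialDelaySeconds: 120
  timeoutSeconds: 1
```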
Click Save and then click the Overview button in the left navigation. You will notice that these changes caused a new deployment — they counted as a configuration change.
You will also notice that the circle around the new deployment stays light blue for a while. This is a sign that the pod(s) have not yet passed their readiness checks — it’s working!
Lab: Automating Deployment with CI/CD Pipeline
Background: CI/CD Pipeline
Continuous Integration and Continuous Deployment refer to a set of practices intended to automate various aspects of delivering software. One of these practices is the pipeline: an automated process that defines the steps a change in code or configuration has to go through in order to reach upper environments such as staging and production. OpenShift supports CI/CD pipelines by integrating the popular Jenkins pipeline into the platform, enabling you to define truly complex workflows directly within OpenShift.
In a previous lab, you deployed the nationalparks application using the Source-to-Image (S2I) mechanism. S2I already provides build automation by automatically running builds when source code changes, or when an underlying image changes. Deployments are also automated by S2I and can be triggered when the image they are based on changes. In this lab, you will create a more complex workflow: a pipeline that extends the S2I functionality by adding more steps to the build and deploy process. The following diagram shows the pipeline you will create in this lab.
There are two environments for the nationalparks application in this pipeline. The Dev container is for development and test purposes: all code and configuration changes are deployed there so that you can run automated tests against it. Furthermore, test teams can run their manual tests on this container and report any bugs discovered through their test cases. If the tests are all successful and the Deployment Manager on the team approves the change, it is then deployed to the Live container, which is the production environment with a defined SLA and is critical to function properly at all times.
The pipeline execution starts with a developer making a change in the application code or configuration. For every change, the following steps are executed with the goal of determining if the change is appropriate for deployment in the Live environment:
- Clone the code from the Git repo
- Build the code and run unit tests
- Build a Docker image from the code (S2I)
- Deploy the Docker image into Dev
- Run automated tests against the Dev deployment
- Run manual tests against the Dev deployment
- Wait for the Deployment Manager to either approve or reject the deployment (e.g. manual tests have revealed an unacceptable number of bugs)
- If approved, deploy to Live
Let’s move on to deploying Jenkins and creating this pipeline on OpenShift.
Exercise: Deploy Jenkins
OpenShift provides a supported Jenkins image which includes a rich set of plugins that enable the full pipeline flow. Click on the Add to project button. Then, scroll down to the Technologies section and click on Continuous Integration & Deployment:
Find the jenkins-ephemeral template, and click on it:
You can customize Jenkins properties such as the service name, admin password, memory allocation, etc. through the parameters in the web console. We can leave all of the default values, so just click Create to deploy Jenkins.
OpenShift deploys a Jenkins pod and also creates a service and route for the deployed container.
Click on the Jenkins route in order to open the Jenkins Console. You will again need to accept the certificate. The Jenkins image that is provided by Red Hat uses an OAuth integration with OpenShift. Your OpenShift user credentials also become the admin credentials for Jenkins:
Click Login with OpenShift and you will be taken to an OpenShift-branded login screen. Use your username (userXX) and password (openshift3) to access Jenkins. You will then be prompted to grant access:
Click Allow selected permissions.
The OpenShift Jenkins plugin uses the OpenShift REST API in order to integrate with various OpenShift operations. Since we want Jenkins to be able to do more than just look at our project, we need to grant additional permissions. A Jenkins service account was created automatically when deploying Jenkins via the template. Run the following CLI command to allow the Jenkins service account to retrieve information and invoke actions in OpenShift:
$ oc policy add-role-to-user edit -z jenkins
Exercise: Remove Dev from parksmap
Since we are going to be replacing the current nationalparks application with a Live version, we should remove the Dev version from the parksmap by taking away the route label:
$ oc label route nationalparks type-
Exercise: Create Live Environment
Before creating the pipeline, you need to create a Live deployment that runs the live version of the nationalparks application. The parksmap front-end will talk to the Live nationalparks. This allows developers to make frequent changes in the Dev deployment without interfering with the live application.
Live MongoDB
First you need to create a new MongoDB deployment for the Live environment. In the web console in your explore-XX project, click the Add to Project button, then find the mongodb-ephemeral template and click it. Use the following values in their respective fields:
- Database Service Name: mongodb-live
- MongoDB Connection Username: mongodb
- MongoDB Connection Password: mongodb
- MongoDB Database Name: mongodb
- MongoDB Admin Password: mongodb
You can leave the rest of the values as their defaults, and then click Create. Then click Continue to overview. The MongoDB instance should quickly be deployed. If you’re interested, take a look at Mongo’s logs to see what it does when it starts up.
Live ConfigMap
The database configuration for the Dev nationalparks web service was changed to use a ConfigMap in a previous lab. Similarly, we will use a ConfigMap for nationalparks-live. Download the live properties file to your local machine and create a distinct ConfigMap. The file is located here:
http://gitlab-ce-workshop-infra.cloudapps.sds.openshift3roadshow.com/userXX/nationalparks/raw/1.2.0/ose3/application-live.properties
Then, run the following command to create the live ConfigMap:
$ oc create configmap nationalparks-live --from-file=application.properties=./application-live.properties
Live Deployment
Now you can create the Live deployment based on the same nationalparks Docker image created in previous labs. Click Builds → Images and then nationalparks to inspect the ImageStream.
By default, every S2I build creates a new Docker image that is pushed into the internal registry and identified with the latest tag. Since we do not want to immediately run or deploy the Live version of nationalparks when the image changes, we want the Dev and Live deployments to be able to run different versions of the nationalparks image simultaneously. This will allow developers to continue changing and deploying Dev without affecting the Live environment. To achieve that, you will create a new Docker image tag using the CLI; the Live deployment will watch this tag for changes:
$ oc tag nationalparks:latest nationalparks:live
You should see a change on the ImageStream page in the UI.
This command says "please take the existing image that the tag nationalparks:latest points to and also point nationalparks:live at it." Or, in other words, "create a new tag (live) that points to whatever latest points to."
While new builds will update the latest tag, only a manual command (or an automated workflow, like the one we will implement with Jenkins) will update the live tag. The live tag keeps referring to the previous Docker image and therefore leaves the Live environment intact.
After creating the tag, you are ready to deploy the Live nationalparks based on the nationalparks:live image tag. In the web console in your explore-XX project, click the Add to Project button, and then the Deploy Image tab. Choose the Image Stream Tag radio button and use the following values in each respective field:
- Namespace: explore-XX
- ImageStream: nationalparks
- Tag: live
Once you make your three dropdown selections in the Image Stream Tag area, you will see the rest of the standard deployment options "open up".
There are only a few things to change:
- Name: nationalparks-live
You can leave the rest of the values as their defaults, and then click Create. Then click Continue to overview.
Attach ConfigMap
Deploying the nationalparks-live image through the UI did not utilize the ConfigMap, so we have one more step: telling OpenShift where to put the properties file. Since you have already created the ConfigMap, all you have to do is use the oc set volumes command to put it in the right place:
$ oc set volumes dc/nationalparks-live --add -m /deployments/config --configmap-name=nationalparks-live
Group Services
Group the Live services by clicking Group Service on the right side of the NATIONALPARKS LIVE container and choosing mongodb-live from the drop-down list.
Add Route
If you look at the web console, you will notice that, when you create the application this way, OpenShift doesn’t create a Route for you. Click on Create Route on the top right corner of NATIONALPARKS LIVE and then Create to create a route with the default values.
Similar to the previous labs, populate the database by pointing your browser to the nationalparks-live route URL:
http://nationalparks-live-explore-XX.cloudapps.sds.openshift3roadshow.com/ws/data/load/
If the application has not been deployed yet, you might get a 502 Bad Gateway error page. This means that the application backing the route is not yet ready. Wait until the pod is up.
Label Service
As discussed in previous labs, the parksmap web app queries the OpenShift API, looks for routes that have the label type=parksmap-backend, and interrogates the discovered endpoints to visualize their map data. After creating the pipeline, parksmap should use the Live container instead of the Dev container, so that deployments to the Dev container do not disrupt the parksmap application. You already removed the type label from the Dev route in a previous step; now add it to the Live route:
$ oc label route nationalparks-live type=parksmap-backend
Exercise: Disable Automatic Deployment of nationalparks (dev)
When we created the nationalparks build earlier in the workshop, OpenShift configured the deployment of the image to occur automatically whenever the :latest tag was updated.
In our pipeline example, Jenkins is going to tell OpenShift to deploy the dev version of nationalparks if it builds successfully. In order to prevent two deployments, we need to disable automatic deployments with a simple CLI statement:
$ oc set triggers dc/nationalparks --from-image=nationalparks:latest --remove
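For context, the trigger being removed looks roughly like this in the DeploymentConfig (the exact field values here are an approximation):

```yaml
# Approximate ImageChange trigger removed by `oc set triggers --remove`.
# While this trigger is present, any update to nationalparks:latest
# automatically redeploys the Dev application.
triggers:
- type: ImageChange
  imageChangeParams:
    automatic: true
    containerNames:
    - nationalparks
    from:
      kind: ImageStreamTag
      name: nationalparks:latest
```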
Exercise: Create OpenShift Pipeline
The Pipeline is in fact a type of build that allows developers to define a Jenkins pipeline for execution by the Jenkins pipeline plugin. The build can be started, monitored, and managed by OpenShift Container Platform in the same way as any other build type. Pipeline workflows are defined in a Jenkinsfile, either embedded directly in the build configuration, or supplied in a Git repository and referenced by the build configuration.
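A minimal sketch of such a pipeline build configuration with an embedded Jenkinsfile is shown below. The stage names and shell steps are assumptions for illustration; the dev-live-pipeline template used in this lab defines its own, more complete version:

```yaml
# Hypothetical pipeline BuildConfig sketch -- not the exact template content.
apiVersion: v1
kind: BuildConfig
metadata:
  name: nationalparks-pipeline
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node {
          stage('Build') {
            // trigger the S2I build and wait for it to finish
            sh 'oc start-build nationalparks --follow'
          }
          stage('Deploy Dev') {
            sh 'oc rollout latest dc/nationalparks'
          }
          stage('Approve') {
            // pause until the Deployment Manager approves
            input 'Deploy to Live?'
          }
          stage('Promote to Live') {
            sh 'oc tag nationalparks:latest nationalparks:live'
          }
        }
```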
In order to create the pipeline, click on the Add to project button, find the dev-live-pipeline template, and click on it. Specify the name of the project (e.g. explore-XX) where the nationalparks Dev and Live containers are deployed, and click on Create.
In order to start the pipeline that you created in a previous step, go to Builds → Pipelines in the left sidebar. Click nationalparks-pipeline and click Start Build to start the execution. You can click View Log to watch the build logs as they progress through the pipeline, or Build #N to see the details of this specific pipeline execution as well as the pipeline definition using the Jenkins DSL.
Because of the way the pipeline was defined, if you return to the overview page you will also see the pipeline status there, associated with the relevant deployments:
Pipeline execution will pause after running automated tests against the Dev container. Visit the nationalparks Dev web service to query for data and verify that the service works as expected:
http://nationalparks-explore-XX.cloudapps.sds.openshift3roadshow.com/ws/data/all/
If the application has not been deployed yet, you might get a 502 Bad Gateway error page. This means that the application backing the route is not yet ready. Wait until the pod is up.
After the test stage, the pipeline waits for manual approval in order to deploy to the Live container.
Click on the Input Required link, which takes you to the Jenkins console for approving the deployment. This step would typically be integrated into your workflow process (e.g. JIRA Service Desk or ServiceNow) and performed as part of the overall deployment process without interacting directly with Jenkins. For simplicity in this lab, click the Proceed button to approve the build.
Pipeline execution continues, promoting and deploying the nationalparks image. This is achieved by tagging the image that was just built and tested as live, which causes the imagechange trigger on the Live deployment to act. This likely already happened before you finished reading this paragraph.
In Builds → Pipelines, click on View History to go to the pipeline overview, which shows the pipeline execution history as well as build-time metrics, so that you can iteratively improve the build process and detect build-time anomalies, which usually signal a bad change in the code or configuration.
Build metrics are generated and displayed after a few executions of the pipeline, in order to determine trends.
Congratulations! You now have a CI/CD pipeline for the nationalparks application. If you visit the parks map again, you should see the map points!