Environment Overview
You will be interacting with an OpenShift environment that is running on Amazon Web Services. The environment consists of the following systems:
- 1 master node
- 1 infrastructure node
- 13 "application" nodes
- An NFS server
The infrastructure node is providing several services:
- Aggregated logging
- Cluster metrics
- GitLab
- This lab manual
- The OpenShift Docker registry
- The OpenShift router
- Etherpad
Lab: Installing the OpenShift CLI
Command Line Interface
OpenShift ships with a feature-rich web console as well as command line tools to provide users with a nice interface for working with applications deployed to the platform. The OpenShift tools are a single executable written in the Go programming language and are available for the following operating systems:
- Microsoft Windows
- Apple OS X
- Linux
Downloading the tools
During this lab, we are going to download the client tool and add it to our operating system's PATH environment variable so the executable is accessible from any directory on the command line.
The first thing we want to do is download the correct executable for your operating system as linked below:
Once the file has been downloaded, you will need to extract the contents as it is a compressed archive. I would suggest saving this file to the following directories:
Windows:
C:\OpenShift
OS X:
~/OpenShift
Linux:
~/OpenShift
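On OS X or Linux, you can create this directory from a terminal before downloading (a small convenience step; mkdir -p simply does nothing if the directory already exists):
$ mkdir -p ~/OpenShift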
Extracting the tools
Once you have the tools downloaded, you will need to extract the contents:
Windows:
In order to extract a zip archive on Windows, you will need a zip utility installed on your system. With newer versions of Windows (anything newer than XP), this is provided by the operating system. Just right-click on the downloaded file in File Explorer and select the option to extract the contents.
OS X:
Open up a terminal window and change to the directory where you downloaded the file. Once you are in the directory, enter in the following command:
$ tar zxvf oc-macosx.tar.gz
Replace the tar.gz file name with the full name of the file you downloaded in the previous step.
Linux:
Open up a terminal window and change to the directory where you downloaded the file. Once you are in the directory, enter in the following command:
$ tar zxvf oc-linux.tar.gz
Replace the tar.gz file name with the full name of the file you downloaded in the previous step.
Adding oc to your PATH
Windows:
Because changing your PATH on Windows varies by version of the operating system, we will not list each version here. However, the general workflow is: right-click on your computer name inside File Explorer and select Advanced system settings. (I guess changing your PATH is considered an advanced task? :) ) Click on the Advanced tab, and then finally click on Environment Variables. Once the new dialog opens, select the Path variable and add ";C:\OpenShift" at the end. For an easy way out, you could always just copy the executable to C:\Windows or a directory you know is already on your path. For more detailed instructions:
Windows 10 - Follow the directions above.
OS X:
$ export PATH=$PATH:~/OpenShift
Linux:
$ export PATH=$PATH:~/OpenShift
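Note that export only affects the current shell session. To make the change permanent, append the same line to your shell's startup file. A sketch assuming the default bash shell (use the equivalent file for your shell):
$ echo 'export PATH=$PATH:~/OpenShift' >> ~/.bashrc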
Verify
At this point, we should have the oc tool available for use. Let’s test this out by printing the version of the oc command:
$ oc version
You should see the following (or something similar):
oc v1.5.1+7b451fc
kubernetes v1.5.2+43a9be4
If you get an error message, you have not updated your path correctly. If you need help, raise your hand and the instructor will assist.
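On OS X and Linux, a quick way to check whether the shell can find the executable at all is the standard which utility:
$ which oc
If this prints nothing, the directory containing oc has not been added to your PATH.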
Lab: Deploy a Docker Image
Application description
In this lab, we're going to deploy the web component of the ParksMap application, which is also called parksmap. It uses OpenShift's service discovery mechanism to discover the deployed backend services and show their data on the map.

Exercise: Deploying your first Image
Let’s start by doing the simplest thing possible - get a plain old Docker-formatted image to run on OpenShift. This is incredibly simple to do. With OpenShift Container Platform 3.5 it can be done directly from the web console.
Return to the web console:
Find your explore-XX project and click it. Next, click "Add to project" at the top of the screen.
There are several options, but we are only concerned with "Deploy Image". Click it. We will learn more about image streams and image stream tags later. For now, select the "Image Name" option, and copy/paste the following into the box:
docker.io/openshiftroadshow/parksmap:1.2.0
Your screen will end up looking something like this:

Either press enter or click on the magnifying glass. OpenShift will then go out to the Docker registry specified and interrogate the image. You are then presented with some options to add things like environment variables and labels, which we will learn about later.
Hit the blue "Create" button at the bottom of the screen and then click the "Continue to overview" link. Take a moment to look at the various messages that you now see on the overview page.
WINNING! These few steps are the only ones you need to run to get a "vanilla" Docker-formatted image deployed on OpenShift. This should work with any Docker-formatted image that follows best practices, such as defining an EXPOSE port, not needing to run as a specific user (such as root), and having a single non-exiting CMD to execute on start.
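If you prefer the command line, roughly the same deployment can be done with oc new-app. This is an alternative sketch, not part of this lab's click-through steps; the console path above remains the canonical one:
$ oc new-app docker.io/openshiftroadshow/parksmap:1.2.0
This creates the deployment-related objects for you (and, because the image exposes a port, a Service), much like the console does.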
Background: Containers and Pods
Before we start digging in we need to understand how containers and Pods are related. Given the morning sessions where we discussed the OpenShift platform and how it uses containers and Pods, we will not be covering the background on these technologies in this lab. Instead, we will dive right in and start using them.
In OpenShift, the smallest deployable unit is a Pod. A Pod is a group of one or more Docker containers deployed together and guaranteed to be on the same host. From the doc:
Each pod has its own IP address, therefore owning its entire port space, and
containers within pods can share storage. Pods can be "tagged" with one or
more labels, which are then used to select and manage groups of pods in a
single operation.
Pods can contain multiple Docker instances. The general idea is for a Pod to contain a "server" and any auxiliary services you want to run along with that server. Examples of containers you might put in a Pod are an Apache HTTPD server, a log analyzer, and a file service to help manage uploaded files.
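As an illustration only (nothing you need to create in this lab), a two-container Pod definition might look like the following sketch; the names and images here are hypothetical:
apiVersion: v1
kind: Pod
metadata:
  name: server-with-logger        # hypothetical Pod name
spec:
  containers:
  - name: httpd                   # the "server" container
    image: registry.example.com/my-httpd
    ports:
    - containerPort: 8080
  - name: log-analyzer            # an auxiliary container in the same Pod
    image: registry.example.com/my-log-analyzer
Both containers share the Pod's IP address and port space, and can share mounted storage.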
Exercise: Examining the Pod
In the web console’s overview page you will see that there is a single Pod that was created by your actions. This Pod contains a single container, which happens to be the parks map application - a simple Spring Boot/Java application.
You can also examine Pods from the command line:
$ oc get pod
You should see output that looks similar to:
NAME READY STATUS RESTARTS AGE
parksmap-1-hx0kv 1/1 Running 0 2m
The above output lists all of the Pods in the current Project, including the Pod name, state, restarts, and uptime. Once you have a Pod's name, you can get more information about the Pod using the oc get command. To make the output readable, I suggest changing the output type to YAML using the following syntax:
Make sure you use the correct Pod name from your output.
$ oc get pod parksmap-1-hx0kv -o yaml
You should see something like the following output (which has been truncated due to space considerations of this workshop manual):
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/created-by: |
      {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"explore-00","name":"parksmap-1","uid":"f1b37b1b-e3e2-11e6-81a2-0696d1181070","apiVersion":"v1","resourceVersion":"36222"}}
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu, memory request for container
      parksmap; cpu, memory limit for container parksmap'
    openshift.io/deployment-config.latest-version: "1"
    openshift.io/deployment-config.name: parksmap
    openshift.io/deployment.name: parksmap-1
    openshift.io/generated-by: OpenShiftWebConsole
    openshift.io/scc: restricted
  creationTimestamp: 2017-01-26T16:17:36Z
  generateName: parksmap-1-
  labels:
    app: parksmap
    deployment: parksmap-1
    deploymentconfig: parksmap
  name: parksmap-1-bvaz6
...
The web interface also shows a lot of the same information on the Pod details page. If you click on the Pod circle, and then click the Pod name, you will find the details page. You can also get there by clicking "Applications", then "Pods", at the left, and then clicking the Pod name.
Getting the parks map image running may take a little while to complete. Each OpenShift node that is asked to run the image has to pull (download) it if the node does not already have it cached locally. You can check on the status of the image download and deployment in the Pod details page, or from the command line with the oc get pods command that you used before.
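If you would like to watch the status change in real time instead of re-running the command, oc supports a watch flag:
$ oc get pods -w
Press Ctrl+C to stop watching.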
Background: A Little About the Docker Daemon
Whenever OpenShift asks the node’s Docker daemon to run an image, the Docker daemon will check to make sure it has the right "version" of the image to run. If it doesn’t, it will pull it from the specified registry.
There are a number of ways to customize this behavior. They are documented in specifying an image as well as image pull policy.
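For example, pull behavior is controlled per container by the imagePullPolicy field. A minimal sketch of the relevant snippet from a Pod (or deployment) definition:
spec:
  containers:
  - name: parksmap
    image: docker.io/openshiftroadshow/parksmap:1.2.0
    imagePullPolicy: IfNotPresent   # pull only if the image is not already cached locally; Always and Never are the other options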
Background: Services
Services provide a convenient abstraction layer inside OpenShift to find a group of like Pods. They also act as an internal proxy/load balancer between those Pods and anything else that needs to access them from inside the OpenShift environment. For example, if you needed more parks map servers to handle the load, you could spin up more Pods. OpenShift automatically maps them as endpoints to the Service, and the incoming requests would not notice anything different except that the Service was now doing a better job handling the requests.
When you asked OpenShift to run the image, it automatically created a Service for you. Remember that services are an internal construct. They are not available to the "outside world", or anything that is outside the OpenShift environment. That’s OK, as you will learn later.
The way that a Service maps to a set of Pods is via a system of Labels and Selectors. Services are assigned a fixed IP address and many ports and protocols can be mapped.
There is a lot more information about Services, including the YAML format to make one by hand, in the official documentation.
Now that we understand the basics of what a Service is, let’s take a look at the Service that was created for the image that we just deployed. In order to view the Services defined in your Project, enter in the following command:
$ oc get services
You should see output similar to the following:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
parksmap 172.30.169.213 <none> 8080/TCP 3h
In the above output, we can see that we have a Service named parksmap with an IP/Port combination of 172.30.169.213:8080 (TCP). Your IP address may be different, as each Service receives a unique IP address upon creation. Service IPs are fixed and never change for the life of the Service.
In the web console, service information is available by clicking "Applications" and then clicking "Services" in the "Networking" submenu.
You can also get more detailed information about a Service by using the following command to display the data in YAML:
$ oc get service parksmap -o yaml
You should see output similar to the following:
apiVersion: v1
kind: Service
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftWebConsole
  creationTimestamp: 2016-10-03T15:33:17Z
  labels:
    app: parksmap
  name: parksmap
  namespace: explore-XX
  resourceVersion: "6893"
  selfLink: /api/v1/namespaces/explore-XX/services/parksmap
  uid: b51260a9-897e-11e6-bdaa-2cc2602f8794
spec:
  clusterIP: 172.30.169.213
  ports:
  - name: 8080-tcp
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    deploymentconfig: parksmap
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Take note of the selector stanza. Remember it.
It is also of interest to view the YAML of the Pod to understand how OpenShift wires components together. For example, run the following command to get the name of your parksmap Pod:
$ oc get pods
You should see output similar to the following:
NAME READY STATUS RESTARTS AGE
parksmap-1-hx0kv 1/1 Running 0 3h
Now you can view the detailed data for your Pod with the following command:
$ oc get pod parksmap-1-hx0kv -o yaml
Under the metadata section you should see the following:
labels:
  app: parksmap
  deployment: parksmap-1
  deploymentconfig: parksmap
- The Service has a selector stanza that refers to deploymentconfig=parksmap.
- The Pod has multiple Labels:
  - deploymentconfig=parksmap
  - app=parksmap
  - deployment=parksmap-1
Labels are just key/value pairs. Any Pod in this Project that has a Label that matches the Selector will be associated with the Service. To see this in action, issue the following command:
$ oc describe service parksmap
You should see something like the following output:
Name: parksmap
Namespace: explore-XX
Labels: app=parksmap
Selector: deploymentconfig=parksmap
Type: ClusterIP
IP: 172.30.169.213
Port: 8080-tcp 8080/TCP
Endpoints: 10.1.2.5:8080
Session Affinity: None
No events.
You may be wondering why only one end point is listed. That is because there is only one Pod currently running. In the next lab, we will learn how to scale an application, at which point you will be able to see multiple endpoints associated with the Service.
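You can also use the Service's Selector as a filter on other commands. For example, to list only the Pods that carry the matching Label:
$ oc get pods -l deploymentconfig=parksmap
The Pods returned by this command are exactly the ones the Service uses as its endpoints.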
Lab: Scaling and Self Healing
Background: Deployment Configurations and Replication Controllers
While Services provide routing and load balancing for Pods, which may go in and out of existence, ReplicationControllers (RC) are used to specify and then ensure the desired number of Pods (replicas) are in existence. For example, if you always want your application server to be scaled to 3 Pods (instances), a ReplicationController is needed. Without an RC, any Pods that are killed or somehow die/exit are not automatically restarted. ReplicationControllers are how OpenShift "self heals".
A DeploymentConfiguration (DC) defines how something in OpenShift should be deployed. From the deployments documentation:
Building on replication controllers, OpenShift adds expanded support for the
software development and deployment lifecycle with the concept of deployments.
In the simplest case, a deployment just creates a new replication controller and
lets it start up pods. However, OpenShift deployments also provide the ability
to transition from an existing deployment of an image to a new one and also
define hooks to be run before or after creating the replication controller.
In almost all cases, you will end up using the Pod, Service, ReplicationController and DeploymentConfiguration resources together. And, in almost all of those cases, OpenShift will create all of them for you.
There are some edge cases where you might want some Pods and an RC without a DC or a Service (among other combinations), so feel free to ask us about them after the labs.
Exercise: Exploring Deployment-related Objects
Now that we know the background of what a ReplicationController and DeploymentConfig are, we can explore how they work and are related. Take a look at the DeploymentConfig (DC) that was created for you when you told OpenShift to stand up the parksmap image:
$ oc get dc
NAME REVISION DESIRED CURRENT TRIGGERED BY
parksmap 1 1 1 config,image(parksmap:1.2.0)
To get more details, we can look at the ReplicationController (RC) that was created for you when you told OpenShift to stand up the parksmap image:
$ oc get rc
NAME DESIRED CURRENT READY AGE
parksmap-1 1 1 0 4h
This lets us know that, right now, we expect one Pod to be deployed (Desired), and we have one Pod actually deployed (Current). By changing the desired number, we can tell OpenShift that we want more or fewer Pods.
OpenShift’s HorizontalPodAutoscaler effectively monitors the CPU usage of a set of instances and then manipulates the RCs accordingly.
You can learn more about the CPU-based Horizontal Pod Autoscaler in the official documentation.
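For the curious, an autoscaler can also be created from the command line. A sketch, not part of this lab's steps, with arbitrary example thresholds:
$ oc autoscale dc/parksmap --min=1 --max=4 --cpu-percent=80
This tells OpenShift to keep between 1 and 4 replicas, scaling up when average CPU usage crosses 80%.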
Exercise: Scaling the Application
Let's scale our parksmap "application" up to 2 instances. We can do this with the scale command. You could also do this by clicking the "up" arrow next to the Pod in the OpenShift web console on the overview page. It's your choice.
$ oc scale --replicas=2 dc/parksmap
To verify that we changed the number of replicas, issue the following command:
$ oc get rc
NAME DESIRED CURRENT READY AGE
parksmap-1 2 2 0 4h
You can see that we now have 2 replicas. Let's verify the number of pods with the oc get pods command:
$ oc get pods
NAME READY STATUS RESTARTS AGE
parksmap-1-8g6lb 1/1 Running 0 1m
parksmap-1-hx0kv 1/1 Running 0 4h
And lastly, let’s verify that the Service that we learned about in the previous lab accurately reflects two endpoints:
$ oc describe svc parksmap
You will see something like the following output:
Name: parksmap
Namespace: explore-XX
Labels: app=parksmap
Selector: deploymentconfig=parksmap
Type: ClusterIP
IP: 172.30.169.213
Port: 8080-tcp 8080/TCP
Endpoints: 10.1.0.5:8080,10.1.1.5:8080
Session Affinity: None
No events.
Another way to look at a Service's endpoints is with the following:
$ oc get endpoints parksmap
And you will see something like the following:
NAME ENDPOINTS AGE
parksmap 10.1.0.5:8080,10.1.1.5:8080 4h
Your IP addresses will likely be different, as each pod receives a unique IP within the OpenShift environment. The endpoint list is a quick way to see how many pods are behind a service.
You can also see that both Pods are running using the web console:

Overall, that’s how simple it is to scale an application (Pods in a Service). Application scaling can happen extremely quickly because OpenShift is just launching new instances of an existing image, especially if that image is already cached on the node.
Application "Self Healing"
Because OpenShift’s RCs are constantly monitoring to see that the desired number of Pods actually is running, you might also expect that OpenShift will "fix" the situation if it is ever not right. You would be correct!
Since we have two Pods running right now, let's see what happens if we "accidentally" kill one. Run the oc get pods command again, and choose a Pod name. Then, do the following:
$ oc delete pod parksmap-1-h45hj && oc get pods
pod "parksmap-1-h45hj" deleted
NAME READY STATUS RESTARTS AGE
parksmap-1-h45hj 1/1 Terminating 0 4m
parksmap-1-q4b4r 0/1 ContainerCreating 0 1s
parksmap-1-vdkd9 1/1 Running 0 32s
Did you notice anything? There is a container being terminated (the one we deleted), and there’s a new container already being created.
Also, the names of the Pods are slightly changed. That’s because OpenShift almost immediately detected that the current state (1 Pod) didn’t match the desired state (2 Pods), and it fixed it by scheduling another Pod.
Additionally, OpenShift provides rudimentary capabilities around checking the liveness and/or readiness of application instances. If the basic checks are insufficient, OpenShift also allows you to run a command inside the container in order to perform the check. That command could be a complicated script that uses any installed language.
Based on these health checks, if OpenShift decided that our parksmap application instance wasn't alive, it would kill the instance and then restart it, always ensuring that the desired number of replicas was in place.
More information on probing applications is available in the Application Health section of the documentation.
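For reference, probes can also be added from the command line with oc set probe. A sketch; the URL path here is a hypothetical example, so substitute whatever health endpoint your application actually exposes:
$ oc set probe dc/parksmap --readiness --get-url=http://:8080/healthz
This adds an HTTP readiness check on port 8080 to the parksmap DeploymentConfig.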
Exercise: Scale Down
Before we continue, go ahead and scale your application down to a single instance. Feel free to do this using whatever method you like.
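If you choose the command line, this mirrors the scale-up you performed earlier:
$ oc scale --replicas=1 dc/parksmap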