Lab: Creating Routes by Exposing Services
Application Description
In this lab, we’re going to make our application visible to the end users, so they can access it.
Background: Routes
While Services provide internal abstraction and load balancing within an OpenShift environment, sometimes clients (users, systems, devices, etc.) outside of OpenShift need to access an application. The way that external clients are able to access applications running in OpenShift is through the OpenShift routing layer. And the data object behind that is a Route.
The default OpenShift router (HAProxy) uses the HTTP header of the incoming request to determine where to proxy the connection. You can optionally define security, such as TLS, for the Route. If you want your Services, and, by extension, your Pods, to be accessible to the outside world, you need to create a Route.
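As a sketch only, a minimal Route object for the parksmap Service used in this lab might look like the following (the host value is an assumption; your router's wildcard domain determines the real hostname, and OpenShift assigns one automatically if the field is omitted):

```yaml
apiVersion: v1            # OpenShift 3.x Route API version
kind: Route
metadata:
  name: parksmap
spec:
  host: parksmap-explore-XX.cloudapps.sds.openshift3roadshow.com  # generated if omitted
  to:
    kind: Service
    name: parksmap        # the Service this Route exposes
  port:
    targetPort: 8080-tcp  # which Service port to route traffic to
```

Running "oc expose service parksmap", as we do below, generates an object like this for you.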
Exercise: Creating a Route
Fortunately, creating a Route is a pretty straightforward process. You simply expose the Service via the command line. Or, via the web console, just click the "Create Route" button associated with the Service.
First we want to verify that we don’t already have any existing routes:
$ oc get routes
No resources found.
Now we need to get the Service name to expose:
$ oc get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
parksmap 172.30.169.213 <none> 8080/TCP 5h
Once we know the Service name, creating a Route is a simple one-command task:
$ oc expose service parksmap
route "parksmap" exposed
Verify the Route was created with the following command:
$ oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION
parksmap parksmap-explore-XX.cloudapps.sds.openshift3roadshow.com parksmap 8080-tcp
You can also verify the Route by looking at the project in the OpenShift web console:
Pretty nifty, huh? This application is now available at the URL shown in the web console. Click the link and you will see:
Lab: Exploring OpenShift’s Logging Capabilities
OpenShift provides some convenient mechanisms for viewing application logs. First and foremost is the ability to examine a Pod's logs directly from the web console or via the command line.
Background: Container Logs
OpenShift is constructed in such a way that it expects containers to log all information to STDOUT. In this way, both regular and error information is captured via standardized Docker mechanisms. When exploring the Pod's logs directly, you are essentially going through the Docker daemon to access the container’s logs, through OpenShift’s API. Neat!
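The convention above can be illustrated with a short sketch (Python is used here purely for illustration; the parksmap application itself is Java): a containerized application should write its logs to standard output rather than to a file inside the container, so the runtime can capture everything for oc logs.

```python
import logging
import sys

def make_logger(name: str) -> logging.Logger:
    """Create a logger that writes to STDOUT, not to a file inside the
    container: the container runtime captures STDOUT/STDERR, which is
    exactly what `oc logs` and the web console read."""
    logger = logging.getLogger(name)
    handler = logging.StreamHandler(sys.stdout)   # STDOUT, not a file path
    handler.setFormatter(logging.Formatter(
        "%(asctime)s [%(threadName)s] %(levelname)s %(name)s - %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)
    return logger

log = make_logger("demo")
log.debug("Trying to configure client from service account...")
```

Writing to a file under, say, /var/log inside the container would hide these lines from the platform's standard log tooling.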
In some cases, applications may not have been designed to send all of their information to STDOUT.
Exercise: Examining Logs
Since we already deployed our application, we can take some time to examine its logs. In the web console, find your way back to the Pod details page (Applications → Pods → specific pod) and then click on the "Logs" tab. You should see a nice view of the Pod's logs:
It appears there are some errors in the log, and that’s OK. We’ll remedy those in a little bit.
You also have the option of viewing logs from the command line. Get the name of your Pod:
$ oc get pods
NAME READY STATUS RESTARTS AGE
parksmap-1-hx0kv 1/1 Running 0 5h
And then use the logs command to view this Pod's logs:
$ oc logs parksmap-1-hx0kv
You will see all of the application logs scroll on your screen:
15:34:25.844 [main] DEBUG io.fabric8.kubernetes.client.Config - Trying to configure client from Kubernetes config...
15:34:25.937 [main] DEBUG io.fabric8.kubernetes.client.Config - Did not find Kubernetes config at: [/.kube/config]. Ignoring.
15:34:25.937 [main] DEBUG io.fabric8.kubernetes.client.Config - Trying to configure client from service account...
15:34:25.938 [main] DEBUG io.fabric8.kubernetes.client.Config - Found service account ca cert at: [/var/run/secrets/kubernetes.io/serviceaccount/ca.crt].
15:34:25.960 [main] DEBUG io.fabric8.kubernetes.client.Config - Found service account token at: [/var/run/secrets/kubernetes.io/serviceaccount/token].
15:34:25.961 [main] DEBUG io.fabric8.kubernetes.client.Config - Trying to configure client namespace from Kubernetes service account namespace path...
15:34:25.962 [main] DEBUG io.fabric8.kubernetes.client.Config - Found service account namespace at: [/var/run/secrets/kubernetes.io/serviceaccount/namespace].
....
Exercise: Aggregated Pod Logs
When your application consists of only one Pod and it never fails, restarts, or has other issues, these ways to view logs may not be so bad. However, in a scaled-out application where Pods may have restarted or been scaled up or down, or if you just want historical information, these mechanisms may be insufficient.
Fortunately, OpenShift provides an optional system for log aggregation that uses Fluentd, Elasticsearch, and Kibana (EFK).
In the OpenShift web console on the Pod's logs page, at the right you will see a "View Archive" link. Go ahead and click it. You will need to accept the SSL certificate.
Clicking this link takes you to the Kibana web interface. This interface is secured with OpenShift’s role-based access controls, so you can only see logs for projects that you have access to.
The "View Archive" link that you clicked takes you to a default view with a specific search term pre-populated. Kibana will only show you logs where the pod name is parksmap-1-hx0kv and the Project (namespace) is explore-XX:
kubernetes.pod_name:"parksmap-1-hx0kv" AND kubernetes.namespace_name:"explore-XX"
If you want to see all the historical logs for this Project, simply remove the pod name reference and click the magnifying glass.
kubernetes.namespace_name:"explore-XX"
If you click the "x" in the column for the container name, and, in the left bar, click "add" for kubernetes.pod_name, you’ll then see your old Pods' logs, too. Remember, we scaled them down before coming here, so you can see how the log system is keeping a historical record.
Try the following search string:
kubernetes.namespace_name:"explore-XX" AND message:"Failure executing"
Service account? What’s that?
Lab: OpenShift Role-Based Access Control
Background
Almost every interaction with an OpenShift environment that you can think of requires going through the master’s API. All API interactions are both authenticated (AuthN - who are you?) and authorized (AuthZ - are you allowed to do what you are asking?).
In the log aggregation lab we saw that there was an error in reference to a Service Account. Just like a user has permissions (AuthZ), sometimes we may wish for non-users to be able to perform actions against the API. These "non-users" are referred to as service accounts.
OpenShift automatically creates a few special service accounts in every project. The default service account has its credentials automatically injected into every pod that is launched. By changing the permissions for that service account, we can do interesting things.
Exercise: Grant Service Account View Permissions
The parksmap application wants to talk to the OpenShift API to learn about other Pods, Services, and resources within the Project. You’ll learn why soon! First, make sure you are in your project:
$ oc project explore-XX
Then:
$ oc policy add-role-to-user view -z default
The oc policy command above is giving a defined role (view) to a user. But we are using a special flag, -z. What does this flag do? From the -h output:
-z, --serviceaccount=[]: service account in the current namespace to use as a user
The -z syntax is a special one that saves us from having to type out the entire string, which, in this case, is system:serviceaccount:explore-XX:default. It’s a nifty shortcut.
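The fully qualified user string follows a fixed pattern, which a tiny sketch makes explicit (the project and account names below are the ones from this lab):

```python
def service_account_user(namespace: str, name: str) -> str:
    """Build the fully qualified 'user' string for a service account.
    This is exactly the string the -z flag saves you from typing."""
    return f"system:serviceaccount:{namespace}:{name}"

# The default service account in the explore-XX project:
print(service_account_user("explore-XX", "default"))
# -> system:serviceaccount:explore-XX:default
```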
Now that the default Service Account has view access, it can query the API to see what resources are within the Project. This also has the added benefit of suppressing the error message! Although, in reality, we fixed the application.
Exercise: Grant User View Permissions
If you create a project, you are that project’s administrator. This means that
you can grant access to other users, too. If you like, give your neighbor view
access to your explore
project using the following command:
$ oc policy add-role-to-user view userXX
Have them go to the project view by clicking the Projects button and verify that they can see your project and its resources. This type of arrangement (view but not edit) might be ideal for a developer getting visibility into a production application’s project.
Exercise: Redeploy Application
One more step is required. We need to re-deploy the application because it’s given up trying to query the API.
This time we’ll use the web console. Find your explore-XX project, and then click "Applications" and then "Deployments". You’ll see your only application, parksmap, listed. Click that.
The deployment screen tells you a lot about how the application will be deployed. At the top right, there is a button labeled "Deploy". This button will cause a new deployment (which you know creates a new ReplicationController, right?).
Click it.
You’ll see that a new deployment is immediately started. Return to the overview page and watch it happen. You might not be fast enough! If you look at the logs for the application now, you should see no errors. That’s great.
Lab: Remote Operations
Background
Containers are treated as immutable infrastructure and therefore it is generally not recommended to modify the content of a container through SSH or running custom commands inside the container. Nevertheless, in some use-cases, such as debugging an application, it might be beneficial to get into a container and inspect the application.
Exercise: Remote Shell Session to a Container
OpenShift allows establishing remote shell sessions to a container without the need to run an SSH service inside each container. In order to establish an interactive session inside a container, you can use the oc rsh command. First, get the list of available pods:
$ oc get pods
You should see output similar to the following:
NAME READY STATUS RESTARTS AGE
parksmap-2-tegp4 1/1 Running 0 2m
Now you can establish a remote shell session into the pod by using the pod name:
$ oc rsh parksmap-2-tegp4
You would see the following output:
sh-4.2$
The default shell used by oc rsh is /bin/sh.
Run the following command to list the files in the top folder:
$ ls /
anaconda-post.log bin dev etc home lib lib64 lost+found media mnt opt parksmap.jar proc root run sbin srv sys tmp usr var
Exercise: Execute a Command in a Container
In addition to remote shell, it is also possible to run a command remotely in an already-running container using the oc exec command. This does not require that a shell is installed, but only that the desired command is present and in the executable path.
In order to show just the JAR file, run the following:
$ oc exec parksmap-2-tegp4 -- ls -l /parksmap.jar
You would see something like the following:
-rw-r--r--. 1 root root 21753918 Nov 23 15:54 /parksmap.jar
You can also specify the shell commands to run directly with the oc rsh command:
$ oc rsh parksmap-2-tegp4 whoami
You would see something like:
whoami: cannot find name for user ID 1000060000
error: error executing remote command: error executing command in container: Error executing in Docker Container: 1
It is important to understand that, for security reasons, OpenShift does not run Docker containers as the user specified in the Dockerfile by default. In fact, when OpenShift launches a container, its user is actually randomized. If you want or need to allow OpenShift users to deploy Docker images that do expect to run as root (or any specific user), a small configuration change is needed. You can learn more about the Docker guidelines for OpenShift, or you can look at the section on enabling images to run with a USER in the Dockerfile.
Exercise: Use the Web Console
You can also access a terminal into a container from the web console. On the Pod details page, one of the tabs says Terminal. Try it!
Lab: Deploying Java Code
Application Description
In this lab, we’re going to deploy a backend service, developed in the Java programming language, that will expose 2 main REST endpoints to the visualizer application (the parksmap web component that was deployed in the previous labs). The application will query for national parks information (including its coordinates) that is stored in a Mongo database. This application will also provide an external access point, so that the API provided can be directly used by the end user.
Background: Source-to-Image (S2I)
In a previous lab, we learned how to deploy a pre-existing Docker-formatted image. Now we will expand on that a bit by learning how OpenShift builds Docker images using source code from an existing repository.
Source-to-Image (S2I) is another open source project sponsored by Red Hat. Its goal:
Source-to-image (S2I) is a tool for building reproducible Docker images. S2I
produces ready-to-run images by injecting source code into a Docker image and
assembling a new Docker image which incorporates the builder image and built
source. The result is then ready to use with docker run. S2I supports
incremental builds which re-use previously downloaded dependencies, previously
built artifacts, etc.
OpenShift is S2I-enabled and can use S2I as one of its build mechanisms (in addition to building Docker images from Dockerfiles, and "custom" builds).
OpenShift runs the S2I process inside a special Pod, called a Build Pod, and thus builds are subject to quotas, limits, resource scheduling, and other aspects of OpenShift.
A full discussion of S2I is beyond the scope of this class, but you can find more information about it either in the OpenShift S2I documentation or on GitHub (following the link above). The only key concept you need to remember about S2I is that it’s magic.
Exercise: Creating a Java application
The backend service that we will be deploying as part of this exercise is called nationalparks. This is a Java Spring Boot application that performs 2D geo-spatial queries against a MongoDB database to locate and return map coordinates of all National Parks in the world. That was just a fancy way of saying that we are going to deploy a webservice that returns a JSON list of places.
Add to Project
Because the nationalparks application is a back-end to serve data that our existing front-end will consume, we are going to build it inside the existing explore-XX project. And, we will do it from the web console.
Using application code on embedded GitLab
OpenShift can work with any accessible Git repository. This could be GitHub, GitLab, or any other server that speaks Git. You can even register webhooks in your Git server to initiate OpenShift builds triggered by any update to the application code!
The repository that we are going to use is already cloned in the internal GitLab repository and located at the following URL:
http://gitlab-ce-workshop-infra.cloudapps.sds.openshift3roadshow.com/userXX/nationalparks/tree/1.2.0
Your GitLab credentials are: userXX/openshift3
Later in the lab, we want you to make a code change and then rebuild your application. This is a fairly simple Spring framework Java application.
Build the Code on OpenShift
Similar to how we used "Add to project" before with a Docker-formatted image, we can do the same for specifying a source code repository. Since for this lab you have your own GitLab repository, let’s use it with a simple Java S2I image.
In the OpenShift web console, find your explore-XX project, and then click the "Add to Project" button. You will see a list of categories of runtimes, and other types of components that you can deploy and run on OpenShift.
Type "jdk" into the search box, and then select the item titled redhat-openjdk18-openshift.
After you click redhat-openjdk18-openshift, on the next screen you will need to enter a name and a Git repository URL. For the name, enter nationalparks, and for the Git repository URL, enter:
http://gitlab-ce-workshop-infra.cloudapps.sds.openshift3roadshow.com/userXX/nationalparks.git
All of these runtimes shown are made available via Templates and ImageStreams, which will be discussed in a later lab.
These labs were written against specific points in time for these applications. With Git as our version control system (VCS), we are using the concept of Branches/Tags. Click on Show advanced routing, build, and deployment options. In the Git Reference field enter "1.2.0". This will cause the S2I process to grab that specific tag in the code repository.
You can then hit the button labeled "Create". Then click Continue to overview. You will see this in the web console:
Build nationalparks, #1 Running. A new deployment will be created automatically
once the build completes. a few seconds ago View Log
Go ahead and click "View Log". This is a Java-based application that uses Maven as the build and dependency system. For this reason, the initial build will take a few minutes as Maven downloads all of the dependencies needed for the application. You can see all of this happening in real time!
From the command line, you can also see the Builds:
$ oc get builds
You’ll see output like:
NAME TYPE FROM STATUS STARTED DURATION
nationalparks-1 Source Git@b052ae6 Running About a minute ago 1m2s
You can also view the build logs with the following command:
$ oc logs -f builds/nationalparks-1
After the build has completed and successfully:
- The S2I process will push the resulting Docker-formatted image to the internal OpenShift registry.
- The DeploymentConfiguration (DC) will detect that the image has changed, and this will cause a new deployment to happen.
- A ReplicationController (RC) will be spawned for this new deployment.
- The RC will detect no Pods are running and will cause one to be deployed, as our default replica count is just 1.
In the end, when issuing the oc get pods command, you will see that the build Pod has finished (exited) and that an application Pod is in a ready and running state:
NAME READY STATUS RESTARTS AGE
nationalparks-1-tkid3 1/1 Running 3 2m
nationalparks-1-build 0/1 Completed 0 3m
parksmap-1-4hbtk 1/1 Running 0 2h
If you look again at the web console, you will notice that, when you create the application this way, OpenShift also creates a Route for you. You can see the URL in the web console, or via the command line:
$ oc get routes
Where you should see something like the following:
NAME HOST/PORT PATH SERVICES PORT TERMINATION
nationalparks nationalparks-explore-XX.cloudapps.sds.openshift3roadshow.com nationalparks 8080-tcp
parksmap parksmap-explore-XX.cloudapps.sds.openshift3roadshow.com parksmap 8080-tcp
In the above example, the URL is:
http://nationalparks-explore-XX.cloudapps.sds.openshift3roadshow.com
Since this is a back-end application, it doesn’t actually have a web interface. However, it can still be used with a browser. All back ends that work with the parks map front end are required to implement a /ws/info/ endpoint. To test, the complete URL to enter in your browser is:
http://nationalparks-explore-XX.cloudapps.sds.openshift3roadshow.com/ws/info/
The trailing slash is required.
You will see a simple JSON string:
{"id":"nationalparks","displayName":"National Parks","center":{"latitude":"47.039304","longitude":"14.505178"},"zoom":4}
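The /ws/info/ payload is plain JSON, so any client can consume it. As a quick sketch, here is the exact response shown above being parsed (note that this particular payload encodes the coordinates as strings, not numbers):

```python
import json

# The literal /ws/info/ response from the nationalparks service:
raw = ('{"id":"nationalparks","displayName":"National Parks",'
       '"center":{"latitude":"47.039304","longitude":"14.505178"},"zoom":4}')

info = json.loads(raw)

# The coordinates arrive as strings and must be converted explicitly:
lat = float(info["center"]["latitude"])
lon = float(info["center"]["longitude"])

print(info["id"], info["displayName"], lat, lon, info["zoom"])
# -> nationalparks National Parks 47.039304 14.505178 4
```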
Earlier we said:
This is a Java Spring Boot application that performs 2D geo-spatial queries
against a MongoDB database
But we don’t have a database. Yet.
Lab: Adding a Database
Application Description
In this lab, we’re going to deploy a Mongo database that will be used to store the data for the nationalparks application. We will also connect the nationalparks service with the newly deployed Mongo database, so that the nationalparks service can load and query the database for the corresponding information.
Finally, we will mark the nationalparks application as a backend for the map visualization tool, so that it can be dynamically discovered by the parksmap component using the OpenShift discovery mechanism and the map will be displayed automatically.
Background: Storage
Most useful applications are "stateful" or "dynamic" in some way, and this is usually achieved with a database or other data storage. In this next lab we are going to add MongoDB to our nationalparks application and then rewire it to talk to the database using environment variables.
We are going to use the MongoDB image that is included with OpenShift.
By default, this will use EmptyDir for data storage, which means if the Pod disappears the data does as well. In a real application you would use OpenShift’s persistent storage mechanism to attach real-world storage (NFS, Gluster, EBS, etc) to the Pods to give them a persistent place to store their data.
Background: Environment Variables
As you’ve seen so far, the web console makes it very easy to deploy things onto OpenShift. When we deploy the database, we pass in some environment variables. These environment variables are used to set the username, password, and name of the database. You can change the values of these environment variables to anything you would like. The variables we are going to be setting are as follows:
- MONGODB_USER
- MONGODB_PASSWORD
- MONGODB_DATABASE
- MONGODB_ADMIN_PASSWORD
The database image is built in a way that it will automatically configure itself using the supplied information (assuming there is no data already present in the persistent storage!). The image will ensure that:
- A database exists with the specified name
- A user exists with the specified name
- The user can access the specified database with the specified password
In the web console in your explore-XX project, again click the "Add to Project" button. Click the Data Stores category.
Type mongodb in the search box, and then scroll down to find the MongoDB (Ephemeral) template, and click it. You will notice that there are several MongoDB templates available, some of which come with application servers pre-configured. We just need a database, though, so the ephemeral Mongo template is what you should choose.
When we performed the application build, there was no template. Rather, we selected the builder image directly and OpenShift presented only the standard build workflow. Now we are using a template - a preconfigured set of resources that includes parameters that can be customized. In our case, the parameters we are concerned with are the environment variables discussed — user, password, database, and admin password.
You can see that some of the fields say "generated if empty". This is a feature of Templates in OpenShift that will be covered in the next lab. For now, be sure to use the following values in their respective fields:
- MONGODB_USER: mongodb
- MONGODB_PASSWORD: mongodb
- MONGODB_DATABASE: mongodb
- MONGODB_ADMIN_PASSWORD: mongodb
You can leave the rest of the values as their defaults, and then click "Create". Then click Continue to overview. The MongoDB instance should quickly be deployed.
You can group services in the OpenShift console in order to display related services together in one panel. Click on the chain icon in the top-right corner of the nationalparks service, choose mongodb from the drop-down list in the Group Service to nationalparks dialog, and click OK. The nationalparks and mongodb services are now grouped and displayed together.
Exercise: Wiring the Application and the Database
When we initially created the nationalparks application, we provided no environment variables. The application is looking for a database, but can’t find one, and it fails gracefully (you don’t see an error).
We need to configure the nationalparks Pod(s) to have the right values in the right environment variables so that the application knows how and where to find MongoDB.
If you think way back to the beginning of the labs, you will recall that a DeploymentConfiguration tells OpenShift how to deploy something. This includes things like what environment variables to configure. So, to set up the right environment variables, we simply need to modify the DeploymentConfiguration (DC). This can easily be done from either the web interface or via the command line.
The command line takes a little less time, so let’s use that option. First, find the name of the DC:
$ oc get dc
Then, use the oc env command to set environment variables directly on the DC:
$ oc env dc nationalparks -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -e MONGODB_DATABASE=mongodb -e MONGODB_SERVER_HOST=mongodb
After you have modified the DeploymentConfig object, you can verify the environment variables have been added by viewing the YAML for it:
$ oc get dc nationalparks -o yaml
You should see the following section:
- env:
- name: MONGODB_USER
value: mongodb
- name: MONGODB_PASSWORD
value: mongodb
- name: MONGODB_DATABASE
value: mongodb
- name: MONGODB_SERVER_HOST
value: mongodb
You can also just ask OpenShift to tell you about the environment variables on the DC:
$ oc env dc/nationalparks --list
# deploymentconfigs nationalparks, container nationalparks
MONGODB_USER=mongodb
MONGODB_PASSWORD=mongodb
MONGODB_DATABASE=mongodb
MONGODB_SERVER_HOST=mongodb
Exercise: Exploring OpenShift Magic
As soon as we set the environment variables on the DeploymentConfiguration, some
magic happened. OpenShift decided that this was a significant enough change to
warrant updating the internal version number of the DeploymentConfiguration. You
can verify this by looking at the output of oc get dc
:
NAME REVISION DESIRED CURRENT TRIGGERED BY
mongodb 1 1 1 config,image(mongodb:3.2)
nationalparks 2 1 1 config,image(nationalparks:1.2.0)
parksmap 1 1 1 config,image(parksmap:1.2.0)
Something that increments the version of a DeploymentConfiguration, by default, causes a new deployment. You can verify this by looking at the output of oc get rc:
NAME DESIRED CURRENT READY AGE
mongodb-1 1 1 0 24m
nationalparks-1 0 0 0 3h
nationalparks-2 1 1 0 8m
parksmap-1 1 1 0 6h
We see that the desired and current number of instances for the "-1" deployment is 0. The desired and current number of instances for the "-2" deployment is 1. This means that OpenShift has gracefully torn down our "old" application and stood up a "new" instance.
Exercise: Data, Data, Everywhere
Now that we have a database deployed, we can again visit the nationalparks web service to query for data:
http://nationalparks-explore-XX.cloudapps.sds.openshift3roadshow.com/ws/data/all
And the result?
[]
Where’s the data? Think about the process you went through. You deployed the application and then deployed the database. Nothing actually loaded anything INTO the database, though.
The application provides an endpoint to do just that:
http://nationalparks-explore-XX.cloudapps.sds.openshift3roadshow.com/ws/data/load
And the result?
Items inserted in database: 2740
If you then go back to /ws/data/all you will see tons of JSON data now.
That’s great. Our parks map should finally work!
http://parksmap-explore-XX.cloudapps.sds.openshift3roadshow.com
Hmm… There’s just one thing. The main map STILL isn’t displaying the parks. That’s because the front end parks map only tries to talk to services that have the right Label.
You are probably wondering how the database connection magically started working. When deploying applications to OpenShift, it is always best to use environment variables to define connections to dependent systems. This allows for application portability across different environments. The source file that performs the connection, as well as creates the database schema, can be viewed here.
In short summary: by referring to environment variables to connect to services (like databases), it can be trivial to promote applications throughout different lifecycle environments on OpenShift without having to modify application code. You can learn more about environment variables in the environment variables section of the Developer Guide.
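As a sketch of this pattern (in Python for brevity; the real nationalparks code is Java), a service can assemble its database connection entirely from the environment variables we set on the DeploymentConfig. The URI format and the default MongoDB port 27017 are assumptions for illustration:

```python
import os

def mongo_uri() -> str:
    """Build a MongoDB connection URI from the same environment
    variables we set on the nationalparks DeploymentConfig.
    Assumes the default MongoDB port, 27017."""
    user = os.environ["MONGODB_USER"]
    password = os.environ["MONGODB_PASSWORD"]
    host = os.environ["MONGODB_SERVER_HOST"]
    database = os.environ["MONGODB_DATABASE"]
    return f"mongodb://{user}:{password}@{host}:27017/{database}"

# Simulate the environment that `oc env dc nationalparks ...` injected:
os.environ.update(MONGODB_USER="mongodb", MONGODB_PASSWORD="mongodb",
                  MONGODB_SERVER_HOST="mongodb", MONGODB_DATABASE="mongodb")
print(mongo_uri())
# -> mongodb://mongodb:mongodb@mongodb:27017/mongodb
```

Note that the host is simply the Service name, mongodb: OpenShift's internal DNS resolves it, so no IP address ever appears in the application's configuration.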
Exercise: Working With Labels
We explored how a Label is just a key=value pair earlier when looking at Services and Routes and Selectors. In general, a Label is simply an arbitrary key=value pair. It could be anything.
- pizza=pepperoni
- wicked=googly
- openshift=awesome
In the case of the parks map, the application is actually querying the OpenShift API and asking about the Routes and Services in the project. If any of them have a Label that is type=parksmap-backend, the application knows to interrogate the endpoints to look for map data.
You can see the code that does this here.
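The selection logic amounts to a simple label match over the objects returned by the API. A sketch, using hypothetical route data rather than a live API call:

```python
def backend_routes(routes):
    """Return only the routes carrying the label the parksmap
    front end looks for: type=parksmap-backend."""
    return [r for r in routes
            if r.get("labels", {}).get("type") == "parksmap-backend"]

# Hypothetical route objects, mimicking what the API would return
# after the `oc label` step below:
routes = [
    {"name": "parksmap",      "labels": {"app": "parksmap"}},
    {"name": "nationalparks", "labels": {"app": "nationalparks",
                                         "type": "parksmap-backend"}},
]

print([r["name"] for r in backend_routes(routes)])
# -> ['nationalparks']
```

Until we add that label to the nationalparks route, the filter matches nothing, which is why the map stays empty.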
Fortunately, the command line provides a convenient way for us to manipulate labels. Use oc describe on the nationalparks route:
$ oc describe route nationalparks
Name: nationalparks
Namespace: explore-XX
Created: 2 hours ago
Labels: app=nationalparks
Requested Host: nationalparks-explore-XX.cloudapps.sds.openshift3roadshow.com
exposed on router router 2 hours ago
Path: <none>
TLS Termination: <none>
Insecure Policy: <none>
Endpoint Port: 8080-tcp
Service: nationalparks
Weight: 100 (100%)
Endpoints: 10.1.9.8:8080
You see that it only has one label: app=nationalparks. Now, use oc label:
$ oc label route nationalparks type=parksmap-backend
You will see something like:
route "nationalparks" labeled
If you check your browser now:
http://parksmap-explore-XX.cloudapps.sds.openshift3roadshow.com/
You’ll notice that the parks suddenly are showing up. That’s really cool!