
Kubernetes on OpenStack
Technical Guide to Creating and Accessing a Kubernetes Cluster on OpenStack

Introduction to Kubernetes
The objective of Kubernetes is to abstract away the complexity of managing a fleet of
containers, which represent packaged applications that include everything needed to
run wherever they're provisioned. By interacting with the Kubernetes REST API, you
can describe the desired state of your application, and Kubernetes, also known as k8s,
will do whatever is necessary to make the infrastructure conform. It will deploy groups
of containers, replicate them, redeploy them if some fail, and so on.
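
For example, here is a minimal sketch of that declarative workflow using kubectl (the
deployment name and image are just placeholders):

$ kubectl run my-app --image=nginx --replicas=3    # declare the desired state: 3 replicas
$ kubectl delete pod -l run=my-app                 # delete the pods; k8s recreates them to restore 3 replicas
$ kubectl scale deployment my-app --replicas=5     # change the desired state; k8s converges to 5 replicas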

Because it's open source, Kubernetes can run almost anywhere, and the major public cloud
providers all offer easy ways to consume this technology. Private clouds based on
OpenStack or Mesos can also run k8s, and bare metal servers can be leveraged as
worker nodes for it. So if you describe your application with k8s building blocks, you'll
then be able to deploy it within VMs or bare metal servers, on public or private clouds.

Let's take a look at the basics of how Kubernetes works so that you have a solid
foundation to dive deeper.

The Kubernetes architecture


The Kubernetes architecture is relatively simple. You never interact directly with the
nodes that host your application, only with the control plane, which presents
an API and is in charge of scheduling and replicating groups of containers named Pods.
kubectl is the command line interface you can use to interact with the API to declare
the desired application state or gather detailed information on the infrastructure's
current state.

Let's look at the various pieces.

Nodes
Each node that will host part of your distributed application does so by leveraging
Docker or a similar container technology, such as rkt (formerly Rocket) from CoreOS. The
nodes also run two additional pieces of software: kube-proxy, which gives access to your
running app, and kubelet, which receives commands from the k8s control plane. Nodes can
also run flannel, an etcd-backed network fabric for containers.

Master
The control plane itself runs the API server (kube-apiserver), the scheduler
(kube-scheduler), the controller manager (kube-controller-manager), and etcd, a highly
available key-value store for shared configuration and service discovery that implements
the Raft consensus algorithm.

Now let's look at some of the terminology you might run into.

Terminology
Kubernetes has its own vocabulary which, once you get used to it, gives you some
sense of how things are organized. These terms include:

Pods: Pods are a group of one or more containers, their shared storage, and options
about how to run them. Each pod gets its own IP address.

Labels: Labels are key/value pairs that Kubernetes attaches to any objects, such as
pods, Replication Controllers, Endpoints, and so on.

Annotations: Annotations are key/value pairs used to store arbitrary, non-queryable
metadata.

Services: Services are an abstraction that defines a logical set of Pods and a policy
by which to access them over the network.

Replication Controller: Replication controllers ensure that a specific number of pod
replicas are running at any one time.

Secrets: Secrets hold sensitive information such as passwords, TLS certificates, OAuth
tokens, and ssh keys.

ConfigMap: ConfigMaps are mechanisms used to inject containers with configuration
data while keeping containers agnostic of Kubernetes.
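
To make these terms concrete, here are a few illustrative kubectl commands (the names
and values are hypothetical):

$ kubectl get pods -l run=my-nginx                                        # select objects by label
$ kubectl create secret generic db-pass --from-literal=password=s3cr3t   # store sensitive data
$ kubectl create configmap app-config --from-literal=LOG_LEVEL=debug     # store plain configuration data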

Why Kubernetes
In order to justify the added complexity that Kubernetes brings, there need to be some
benefits. At its core, a cluster manager such as k8s exists to serve developers so they
can serve themselves without having to involve the operations team.

Reliability is one of the major benefits of Kubernetes; Google has over 10 years
of experience when it comes to infrastructure operations with Borg, their internal
container orchestration solution, and they've built Kubernetes based on this experience.
Kubernetes can be used to prevent failures from impacting the availability or performance
of your application, and that's a great benefit.

Scalability is handled by Kubernetes on different levels. You can add cluster capacity
by adding more worker nodes, which can even be automated in many public clouds
with autoscaling functionality based on CPU and memory triggers. The Kubernetes
scheduler includes affinity features to spread your workloads evenly across the
infrastructure, maximizing availability. Finally, k8s can autoscale your application using
the Pod autoscaler, which can be driven by custom triggers.
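
As a sketch of pod-level autoscaling, the following uses the built-in CPU trigger (the
deployment name and thresholds here are only examples):

$ kubectl autoscale deployment my-nginx --min=2 --max=10 --cpu-percent=80
$ kubectl get hpa    # inspect the resulting Horizontal Pod Autoscaler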

Part 1: Create the cluster


In this ebook, we'll take you through the steps to run an Nginx container on Kubernetes
over OpenStack, including:

Deploying a Kubernetes cluster with Murano

Configuring OpenStack security to make a Kubernetes cluster usable from within OpenStack

Downloading and configuring the Kubernetes client

Creating a Kubernetes application

Running an application on Kubernetes

Let's get started.

Create the Kubernetes cluster with Murano


The first step is to get a cluster created. There are several ways to do that, but the
easiest is to use a Murano package. If you don't have Murano handy, the simplest way to
get it is to deploy Mirantis OpenStack with Murano enabled.

Import the Kubernetes cluster app


The first step is to get the actual Kubernetes cluster app, which is available in the
OpenStack Foundation's Community App Catalog. Follow these steps:

1. Log into Horizon and go to Applications->Manage->Packages.

2. Go to the Community App Catalog and choose Murano Apps -> Kubernetes Cluster
to get the Kubernetes Cluster App. You're looking for the URL for the package itself.
In this case, that's
http://storage.apps.openstack.org/apps/com.mirantis.docker.kubernetes.KubernetesCluster.zip.

3. Back in Horizon, click Import Package.

4. For the Package Source, choose URL, and add the URL from step 2, then click Next:

5. Murano will automatically start downloading the images it needs, then mark them
for use with Murano; you won't have to do anything there but click Import and
wait. To see the images downloading, choose Project->Images. If the images didn't
already exist, you'll see them Saving:

6. Once they're finished saving, you'll see that their status has changed to Active:

Next, we'll deploy an environment that includes the Kubernetes master and minions.

Create the Kubernetes Murano environment


1. In Horizon, choose Applications->Browse. You should see the new app under
Recent Activity.

2. To make things simple, click the Quick Deploy button.

3. Keep all the defaults, then scroll down and click Next.

4. Choose the Debian image and click Create.

5. Horizon will automatically take you to the new environment. At this point, it's been
created, but not deployed:

6. You can add other things if you want, but for now, click Deploy This Environment.
Deployment goes through a number of steps, creating VMs, networks, security groups, and
so on. You can see that on the main environment page, or by checking the logs:

7. When deployment is complete, you'll see the status change to Ready:

8. All that's great, but where do you access the cluster? Click Latest Deployment
Log to see the IP address assigned to the cluster:

Now, you'll notice that there are references to (in this case) 4 different nodes: gateway-1,
kube-1, kube-2, and kube-3. You can see these instances if you go to
Project->Compute->Instances. Notice that the Kubernetes API is running on kube-1.

In part 2, you'll actually access the Kubernetes cluster.

Part 2: Access the cluster


To access the Kubernetes cluster we created in part 1, we're going to create an Ubuntu
VM (if you have an Ubuntu machine handy you can skip this step), then configure it to
access the Kubernetes API we just deployed.

Create the client VM


1. Create a new VM by choosing Project->Compute->Instances->Launch Instance:

2. Fortunately you don't have to worry about obtaining an image, because you'll have
the Ubuntu Kubernetes image that was downloaded as part of the Murano app.
Click the plus sign (+) to choose it. (You can choose another distro if you like, but
these instructions assume you're using Ubuntu.)

3. You dont need a big server for this, but it needs to be big enough for the Ubuntu
image we selected, so choose the m1.small flavor:

4. Chances are it's already on the network with the cluster, but that doesn't matter;
we'll be using floating IPs anyway. Just make sure it's on a network, period.

5. Next make sure you have a key pair, because we need to log into this machine:

6. After it launches, you'll see the new instance in the Instances list:

7. Add a floating IP if necessary to access it by clicking the down arrow on the button
at the end of the line and choosing Associate Floating IP. If you don't have any
floating IP addresses allocated, click the plus sign (+) to allocate a new one:

8. Choose the appropriate network and click Allocate IP:

9. Now add it to your VM:

10. You'll see the new Floating IP listed with the Instance:

11. Before you can log in, however, you'll need to make sure that the security group
allows for SSH access. Choose Project->Compute->Access & Security and click
Manage Rules for the default security group:

12. Click +Add Rule:

13. Under Rule, choose SSH at the bottom and click Add.

14. You'll see the new rule on the Manage Rules page:

15. Now use your SSH client to go ahead and log in using the username ubuntu and
the private key you specified when you created the VM.
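
For example (assuming your key pair is saved at ~/.ssh/mykey.pem and the floating IP you
associated is 172.18.237.140, which is just a placeholder):

$ ssh -i ~/.ssh/mykey.pem ubuntu@172.18.237.140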

Now you're ready to actually deploy containers to the cluster.

Part 3: Run the application


In parts 1 and 2, you created the cluster and a client VM from which to reach it, so finally,
you're ready to actually interact with the Kubernetes API that you installed. The general
process goes like this:

Define the security credentials for accessing your applications.

Deploy a containerized app to the cluster.

Expose the app to the outside world so you can access it.

Let's see how that works.

Define security parameters for your Kubernetes app


The first thing that you need to understand is that while we have a cluster of machines
that are tied together with the Kubernetes API, it can support multiple environments,
or contexts, each with its own security credentials.

For example, if you were to create an application with a context that relies on a specific
certificate authority, I could then create a second one that relies on another certificate
authority. In this way, we each control our own destiny, but neither of us gets to see
the other's application.

The process goes like this:

1. First, we need to create a new certificate authority which will be used to sign the
rest of our certificates. Create it with these commands:
$ sudo openssl genrsa -out ca-key.pem 2048
$ sudo openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj /CN=kube-ca

2. At this point you should have two files: ca-key.pem and ca.pem. You'll use them
to create the cluster administrator keypair. To do that, you'll create a private key
(admin-key.pem), then create a certificate signing request (admin.csr), then sign
it to create the public key (admin.pem).
$ sudo openssl genrsa -out admin-key.pem 2048
$ sudo openssl req -new -key admin-key.pem -out admin.csr -subj /CN=kube-admin
$ sudo openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 365
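
If you want to sanity-check the resulting certificate before moving on, something like the
following should work (purely optional):

$ openssl x509 -in admin.pem -noout -subject -issuer -dates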

Now that you have these files, you can use them to configure the Kubernetes client.

Download and configure the Kubernetes client


1. Start by downloading the kubectl client on your machine. In this case, we're
using Linux; adjust appropriately for your OS.
$ curl -O https://storage.googleapis.com/kubernetes-release/release/v1.4.3/bin/linux/amd64/kubectl

2. Make kubectl executable:


$ chmod +x kubectl

3. Move it to your path:


$ sudo mv kubectl /usr/local/bin/kubectl
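
You can confirm the client is working; this should print the client version (v1.4.3 here):

$ kubectl version --client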

4. Now it's time to set the default cluster. To do that, you'll want to use the URL
that you got from the environment deployment log. Also, make sure you provide
the full location of the ca.pem file, as in:
$ kubectl config set-cluster default-cluster --server=[KUBERNETES_API_URL] --certificate-authority=[FULL-PATH-TO]/ca.pem

In my case, this works out to:


$ kubectl config set-cluster default-cluster --server=http://172.18.237.137:8080 --certificate-authority=/home/ubuntu/ca.pem

5. Next you need to tell kubectl where to find the credentials, as in:
$ kubectl config set-credentials default-admin --certificate-authority=[FULL-PATH-TO]/ca.pem --client-key=[FULL-PATH-TO]/admin-key.pem --client-certificate=[FULL-PATH-TO]/admin.pem

Again, in my case this works out to:


$ kubectl config set-credentials default-admin --certificate-authority=/home/ubuntu/ca.pem --client-key=/home/ubuntu/admin-key.pem --client-certificate=/home/ubuntu/admin.pem

6. Now you need to set the context so kubectl knows to use those credentials:
$ kubectl config set-context default-system --cluster=default-cluster --user=default-admin
$ kubectl config use-context default-system

7. Now you should be able to see the cluster:


$ kubectl cluster-info

Kubernetes master is running at http://172.18.237.137:8080


To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
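
You can also list the worker nodes to confirm your credentials are working; the output
should show the kube minions registered with the API server (node names will vary):

$ kubectl get nodes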

Terrific! Now we just need to go ahead and run something on it.

Running an app on Kubernetes


Running an app on Kubernetes is pretty simple; it essentially amounts to firing up a
container. We'll go into the details of what everything means later, but for now, just
follow along.

1. Start by creating a deployment that runs the nginx web server:


$ kubectl run my-nginx --image=nginx --replicas=2 --port=80

deployment my-nginx created
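
If you want to watch the pods come up, you can check the deployment and its pods (the two
replicas may take a few seconds to reach the Running state):

$ kubectl get deployments
$ kubectl get pods -l run=my-nginx -o wide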

2. By default, containers are only visible to other members of the cluster. To
expose your service to the public internet, run:
$ kubectl expose deployment my-nginx --target-port=80 --type=NodePort

service my-nginx exposed

3. OK, so now it's exposed, but where? We used the NodePort type, which means
that the external IP is just the IP of the node that it's running on, as you can see
if you get a list of services:
$ kubectl get services

NAME         CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   11.1.0.1      <none>        443/TCP   3d
my-nginx     11.1.116.61   <nodes>       80/TCP    18s

4. So we know that the nodes referenced here are kube-2 and kube-3
(remember, kube-1 is the API server), and we can get their IP addresses from the
Instances page...

5. ...but that doesn't tell us what the actual port number is. To get that, we can
describe the actual service itself:
$ kubectl describe services my-nginx

Name: my-nginx
Namespace: default
Labels: run=my-nginx
Selector: run=my-nginx
Type: NodePort
IP: 11.1.116.61
Port: <unset> 80/TCP
NodePort: <unset> 32386/TCP
Endpoints: 10.200.41.2:80,10.200.9.2:80
Session Affinity: None
No events.
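
If you prefer to script this step, the assigned NodePort can also be pulled out directly
with a jsonpath query, for example:

$ kubectl get service my-nginx -o jsonpath='{.spec.ports[0].nodePort}'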

6. So the service is available on port 32386 of whatever machine you hit. But if you
try to access it, something's still not right:
$ curl http://172.18.237.138:32386

curl: (7) Failed to connect to 172.18.237.138 port 32386: Connection timed out

7. The problem here is that by default, this port is closed, blocked by the default
security group. To solve this problem, create a new security group you can apply to
the Kubernetes nodes. Start by choosing
Project->Compute->Access & Security->+Create Security Group.

8. Specify a name for the group and click Create Security Group.

9. Click Manage Rules for the new group.

10. By default, there's no access in; we need to change that. Click +Add Rule.

11. In this case, we want a Custom TCP Rule that allows Ingress on port 32386 (or
whatever port Kubernetes assigned the NodePort). You can specify access only
from certain IP addresses, but we'll leave that open in this case. Click Add to finish
adding the rule.

12. Now that you have a functioning security group, you need to add it to the instances
Kubernetes is using as worker nodes; in this case, the kube-2 and kube-3 nodes.
Start by clicking the small triangle on the button at the end of the line for each
instance and choosing Edit Security Groups.

13. You should see the new security group in the left-hand panel; click the plus sign
(+) to add it to the instance:

14. Click Save to save the changes.

15. Add the security group to all worker nodes in the cluster. (If you prefer the command
line, equivalent OpenStack CLI commands are shown below.)
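
If you'd rather do this from the OpenStack CLI instead of Horizon, the following is a sketch
of the same changes; the group name is arbitrary, exact flags can vary with client version,
and 32386 should be replaced with whatever NodePort Kubernetes assigned in your deployment:

$ openstack security group create k8s-nodeport
$ openstack security group rule create --protocol tcp --dst-port 32386 k8s-nodeport
$ openstack server add security group kube-2 k8s-nodeport
$ openstack server add security group kube-3 k8s-nodeport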

16. Now you can try again:


$ curl http://172.18.237.138:32386
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

As you can see, you can now access the Nginx container you deployed on the
Kubernetes cluster.

Where to go from here


At this point you have all of the pieces that you need to create any Kubernetes-based
application on OpenStack. You can now use these techniques to spin up any application
in your Docker repository, or go ahead and create your own application images.
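
For example, pointing at an image from your own registry follows the same pattern as the
nginx example above; the image reference below is hypothetical:

$ kubectl run my-app --image=registry.example.com/my-team/my-app:1.0 --replicas=2 --port=8080
$ kubectl expose deployment my-app --target-port=8080 --type=NodePort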

