2 - Getting Started: Local Setup

Introduction

    • The goal is to turn a simple Hello World app into an application running on Kubernetes.

    • The steps walk you through taking code that you have developed on your machine, turning it into a Docker container image, and then running that image on Minikube.

    • Minikube provides a simple way of running Kubernetes on your local machine for free.

Running Kubernetes Locally via Minikube

    • Minikube is a tool that makes it easy to run Kubernetes locally.

    • Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.

Objectives

    • Run a hello world application.

    • Deploy the application to Minikube.

    • View application logs.

    • Update the application image.

Minikube Features

Minikube supports Kubernetes features such as the following (see the addon sketch after this list):

    • DNS

    • NodePorts

    • ConfigMaps and Secrets

    • Dashboards

    • Container runtimes: Docker, rkt, and CRI-O

    • Enabling CNI (Container Network Interface)

    • Ingress
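Several of these features (the dashboard, Ingress, and other extras) are exposed as Minikube addons. A minimal sketch of working with them, assuming a reasonably recent Minikube release (addon names can vary between versions):

$ minikube addons list             # show available addons and whether they are enabled
$ minikube addons enable ingress   # enable the ingress controller addon
$ minikube dashboard               # open the Kubernetes dashboard in a browser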

Prerequisites

A Linux, Windows, or Mac system can be used to bring up the local setup.

Here, an Ubuntu Linux box is used for the local setup (the commands below use apt and dpkg).

Install Minikube

Before you begin

VT-x or AMD-v virtualization must be enabled in your computer’s BIOS.
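On a Linux host you can quickly check whether the CPU supports these virtualization extensions; this is a sketch of one common check (the extensions must additionally be enabled in the BIOS):

$ egrep -c '(vmx|svm)' /proc/cpuinfo   # a value greater than 0 means VT-x (vmx) or AMD-V (svm) is supported by the CPU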

Install a Hypervisor

If you do not already have a hypervisor installed, install one now.

- Install VirtualBox on Ubuntu 16.04:

- sudo apt-get update

- sudo apt-get install virtualbox-5.2

- Alternatively, download the installer from the VirtualBox website and install it manually.
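Note: on a stock Ubuntu 16.04 system the virtualbox-5.2 package is not in the default repositories. One common approach at the time of writing (an assumption, not part of the original steps) was to add Oracle's APT repository first:

$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
$ echo "deb [arch=amd64] https://download.virtualbox.org/virtualbox/debian xenial contrib" | sudo tee /etc/apt/sources.list.d/virtualbox.list
$ sudo apt-get update
$ sudo apt-get install virtualbox-5.2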

Install kubectl

Install the kubectl command-line tool, which is used to interact with the Kubernetes cluster.
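One common way to do this on Ubuntu at the time of writing, shown here as a sketch (the v1.10.0 binary is chosen to match the cluster version used below; check the official documentation for current instructions):

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.10.0/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl
$ kubectl version --client   # verify the client binary works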

Install Minikube

Download Minikube Package

wget https://github.com/kubernetes/minikube/releases/download/v0.28.1/minikube_0.28-1.deb

Install

sudo dpkg -i minikube_0.28-1.deb

$ minikube version

minikube version: v0.28.1

Kubernetes Cluster

Kubernetes is a production-grade, open-source platform that orchestrates the placement (scheduling) and execution of application containers within and across computer clusters.

Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit. The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them specifically to individual machines. To make use of this new model of deployment, applications need to be packaged in a way that decouples them from individual hosts: they need to be containerized. Containerized applications are more flexible and available than in past deployment models, where applications were installed directly onto specific machines as packages deeply integrated into the host. Kubernetes automates the distribution and scheduling of application containers across a cluster in a more efficient way. Kubernetes is an open-source platform and is production-ready.

A Kubernetes cluster consists of two types of resources:

    • The Master coordinates the cluster

    • Nodes are the workers that run applications

Cluster Diagram

The Master is responsible for managing the cluster. The master coordinates all activities in your cluster, such as scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates.

A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes master. The node should also have tools for handling container operations, such as Docker or rkt. A Kubernetes cluster that handles production traffic should have a minimum of three nodes.

Masters manage the cluster and the nodes are used to host the running applications.

When you deploy applications on Kubernetes, you tell the master to start the application containers. The master schedules the containers to run on the cluster's nodes. The nodes communicate with the master using the Kubernetes API, which the master exposes. End users can also use the Kubernetes API directly to interact with the cluster.

A Kubernetes cluster can be deployed on either physical or virtual machines.

To get started with Kubernetes development, you can use Minikube.

Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node.

Create a cluster

Minikube Start

Run the minikube start command:

minikube start

Starting local Kubernetes v1.10.0 cluster...

Starting VM...

Getting VM IP address...

Moving files into cluster...

Setting up certs...

Connecting to cluster...

Setting up kubeconfig...

Starting cluster components...

Kubectl is now configured to use the cluster.

Loading cached images from config file.

Great! You now have a running Kubernetes cluster on your local machine.

Minikube started a virtual machine for you, and a Kubernetes cluster is now running in that VM.
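If you want to confirm at any point that the VM and cluster components are up, the minikube status command reports the state of the local VM, the cluster, and the kubectl configuration (the exact output format varies between Minikube versions):

$ minikube status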

Cluster Version

OK, kubectl is configured and we can see both the version of the client as well as the server.

The client version is the kubectl version; the server version is the Kubernetes version installed on the master.

You can also see details about the build.

$ kubectl version

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-04-10T12:46:31Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"linux/amd64"}

Cluster Details

Let’s view the cluster details. We’ll do that by running kubectl cluster-info:

We have a running master and a dashboard.

The Kubernetes dashboard allows you to view your applications in a UI.

$ kubectl cluster-info

Kubernetes master is running at https://172.17.0.18:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

To view the nodes in the cluster, run the kubectl get nodes command:

$ kubectl get nodes

NAME STATUS ROLES AGE VERSION

minikube Ready <none> 15m v1.10.0

This command shows all nodes that can be used to host our applications.

Now we have only one node, and we can see that its status is Ready (it is ready to accept applications for deployment).
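For a bit more node detail, such as the internal IP, OS image, and container runtime, the -o wide output flag can be added; a small sketch:

$ kubectl get nodes -o wide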

Kubernetes Deployment

A Deployment is responsible for creating and updating instances of your application.

Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it. To do so, you create a Kubernetes Deployment configuration. The Deployment instructs Kubernetes how to create and update instances of your application. Once you've created a Deployment, the Kubernetes master schedules the application instances onto individual Nodes in the cluster.

Once the application instances are created, a Kubernetes Deployment Controller continuously monitors those instances. If the Node hosting an instance goes down or is deleted, the Deployment controller replaces it. This provides a self-healing mechanism to address machine failure or maintenance.

In a pre-orchestration world, installation scripts would often be used to start applications, but they did not allow recovery from machine failure. By both creating your application instances and keeping them running across Nodes, Kubernetes Deployments provide a fundamentally different approach to application management.

Deploying your first app on Kubernetes

You can create and manage a Deployment by using the Kubernetes command line interface, Kubectl. Kubectl uses the Kubernetes API to interact with the cluster. Here are the most common Kubectl commands needed to create Deployments that run your applications on a Kubernetes cluster.

When you create a Deployment, you'll need to specify the container image for your application and the number of replicas that you want to run. You can change that information later by updating your Deployment.

Applications need to be packaged into one of the supported container formats in order to be deployed on Kubernetes.

For our first Deployment, we'll use a Node.js application packaged in a Docker container. The source code and the Dockerfile are available in the GitHub repository for the Kubernetes Basics.

Deploy App

Let’s run our first app on Kubernetes with the kubectl run command. The run command creates a new Deployment. We need to provide the deployment name and the app image location (include the full repository URL for images hosted outside Docker Hub). We want to run the app on a specific port, so we add the --port parameter:

$ kubectl run kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080

deployment.apps "kubernetes-bootcamp" created

Great! You just deployed your first application by creating a deployment.

This performed a few things for you:

    • searched for a suitable node where an instance of the application could be run (we have only 1 available node)

    • scheduled the application to run on that Node

    • configured the cluster to reschedule the instance on a new Node when needed
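kubectl run creates the Deployment imperatively. As an aside (a sketch, not one of the original steps), the same Deployment can be described declaratively in a manifest and created with kubectl apply; the file name is illustrative and this is an alternative to the kubectl run command above, not something to run in addition to it:

$ cat <<EOF > bootcamp-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-bootcamp
spec:
  replicas: 1                      # number of application instances to run
  selector:
    matchLabels:
      run: kubernetes-bootcamp
  template:
    metadata:
      labels:
        run: kubernetes-bootcamp
    spec:
      containers:
      - name: kubernetes-bootcamp
        image: gcr.io/google-samples/kubernetes-bootcamp:v1
        ports:
        - containerPort: 8080      # port the app listens on
EOF
$ kubectl apply -f bootcamp-deployment.yaml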

To list your deployments, use the get deployments command:

$ kubectl get deployments

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE

kubernetes-bootcamp 1 1 1 1 1m

We see that there is 1 deployment running a single instance of your app. The instance is running inside a Docker container on your node.

View App

Pods that are running inside Kubernetes are running on a private, isolated network. By default they are visible from other pods and services within the same kubernetes cluster, but not outside that network. When we use kubectl, we're interacting through an API endpoint to communicate with our application.

We will cover other options on how to expose your application outside the kubernetes cluster in Module 4.

The kubectl command can create a proxy that will forward communications into the cluster-wide, private network. The proxy can be terminated by pressing Control-C and won't show any output while it's running.

We will open a second terminal window to run the proxy.

$ kubectl proxy

Starting to serve on 127.0.0.1:8001

We now have a connection between our host and the Kubernetes cluster. The proxy enables direct access to the API from these terminals.

You can see all those APIs hosted through the proxy endpoint, now available through http://localhost:8001. For example, we can query the version directly through the API using the curl command:

$ curl http://localhost:8001/version

{

"major": "1",

"minor": "10",

"gitVersion": "v1.10.0",

"gitCommit": "fc32d2f3698e36b93322a3465f63a14e9f0eaead",

"gitTreeState": "clean",

"buildDate": "2018-04-10T12:46:31Z",

"goVersion": "go1.9.4",

"compiler": "gc",

"platform": "linux/amd64"

}

The API server will automatically create an endpoint for each pod, based on the pod name, that is also accessible through the proxy.

First we need to get the Pod name, and we'll store it in the environment variable POD_NAME:

$ export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')

$ echo Name of the Pod: $POD_NAME

Name of the Pod: kubernetes-bootcamp-5c69669756-86gvd

Now we can make an HTTP request to the application running in that pod:

$ curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/

Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-5c69669756-86gvd | v=1

The URL is the route to the Pod's API.

Note: The proxy is running in a second terminal window (Terminal 2), while the recent commands were executed in the original window (Terminal 1). Because the proxy is still running in the second window, the curl command to localhost:8001 works.

Kubernetes Pods

A Pod is a group of one or more application containers (such as Docker or rkt) and includes shared storage (volumes), a shared IP address, and information about how to run the containers.

When you created a Deployment, Kubernetes created a Pod to host your application instance. A Pod is a Kubernetes abstraction that represents a group of one or more application containers (such as Docker or rkt), and some shared resources for those containers. Those resources include:

    • Shared storage, as Volumes

    • Networking, as a unique cluster IP address

    • Information about how to run each container, such as the container image version or specific ports to use

A Pod models an application-specific "logical host" and can contain different application containers which are relatively tightly coupled. For example, a Pod might include both the container with your Node.js app as well as a different container that feeds the data to be published by the Node.js webserver.

The containers in a Pod share an IP Address and port space, are always co-located and co-scheduled, and run in a shared context on the same Node.

Pods are the atomic unit on the Kubernetes platform. When we create a Deployment on Kubernetes, that Deployment creates Pods with containers inside them (as opposed to creating containers directly). Each Pod is tied to the Node where it is scheduled, and remains there until termination (according to restart policy) or deletion. In case of a Node failure, identical Pods are scheduled on other available Nodes in the cluster.
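To make the shared-network point concrete, here is a sketch (not used in this exercise) of a Pod manifest with two containers; the helper container and its command are hypothetical, and it reaches the app over localhost because both containers share the Pod's IP address and port space:

$ cat <<EOF > pod-with-sidecar.yaml
apiVersion: v1
kind: Pod
metadata:
  name: bootcamp-with-sidecar
spec:
  containers:
  - name: app
    image: gcr.io/google-samples/kubernetes-bootcamp:v1
    ports:
    - containerPort: 8080
  - name: poller                      # hypothetical helper container
    image: busybox
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:8080/; sleep 60; done"]
EOF
$ kubectl create -f pod-with-sidecar.yaml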

Pods overview

Nodes

Containers should only be scheduled together in a single Pod if they are tightly coupled and need to share resources such as disk.

A Pod always runs on a Node. A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. Each Node is managed by the Master. A Node can have multiple pods, and the Kubernetes master automatically handles scheduling the pods across the Nodes in the cluster. The Master's automatic scheduling takes into account the available resources on each Node.

Every Kubernetes Node runs at least:

    • Kubelet, a process responsible for communication between the Kubernetes Master and the Node; it manages the Pods and the containers running on a machine.

    • A container runtime (like Docker, rkt) responsible for pulling the container image from a registry, unpacking the container, and running the application.

Node overview

Troubleshooting with kubectl

You'll continue to use the kubectl command-line interface to get information about deployed applications and their environments. The most common operations can be done with the following kubectl commands:

    • kubectl get - list resources

    • kubectl describe - show detailed information about a resource

    • kubectl logs - print the logs from a container in a pod

    • kubectl exec - execute a command on a container in a pod

You can use these commands to see when applications were deployed, what their current statuses are, where they are running and what their configurations are.

Check Application Configuration

Check that the deployed application is running:

$ kubectl get pods

NAME READY STATUS RESTARTS AGE

kubernetes-bootcamp-5c69669756-88xfl 1/1 Running 0 29s

Next, to view what containers are inside that Pod and what images are used to build those containers, we run the describe pods command:

We see here details about the Pod’s container: IP address, the ports used and a list of events related to the lifecycle of the Pod.

Note: the describe command can be used to get detailed information about most of the Kubernetes primitives: nodes, pods, and deployments. The describe output is designed to be human readable, not to be scripted against.

$ kubectl describe pods

Name: kubernetes-bootcamp-5c69669756-88xfl

Namespace: default

Node: minikube/172.17.0.22

Start Time: Sun, 29 Jul 2018 16:13:39 +0000

Labels: pod-template-hash=1725225312

run=kubernetes-bootcamp

Annotations: <none>

Status: Running

IP: 172.18.0.2

Controlled By: ReplicaSet/kubernetes-bootcamp-5c69669756

Containers:

kubernetes-bootcamp:

Container ID: docker://41feee4c91008f137f4e9cee35db5707f6c0c4cc670c3679fb8ff4859b495b59

Image: gcr.io/google-samples/kubernetes-bootcamp:v1

Image ID: docker-pullable://gcr.io/google-samples/kubernetes-bootcamp@sha256:0d6b8ee63bb57c5f5b6156f446b3bc3b3c143d233037f3a2f00e279c8fcc64af

Port: 8080/TCP

Host Port: 0/TCP

State: Running

Started: Sun, 29 Jul 2018 16:13:41 +0000

Ready: True

Restart Count: 0

Environment: <none>

Mounts:

/var/run/secrets/kubernetes.io/serviceaccount from default-token-lmz7z (ro)

Conditions:

Type Status

Initialized True

Ready True

PodScheduled True

Volumes:

default-token-lmz7z:

Type: Secret (a volume populated by a Secret)

SecretName: default-token-lmz7z

Optional: false

QoS Class: BestEffort

Node-Selectors: <none>

Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s

node.kubernetes.io/unreachable:NoExecute for 300s

Events:

Type Reason Age From Message

---- ------ ---- ---- -------

Warning FailedScheduling 3m (x2 over 3m) default-scheduler 0/1 nodes are available: 1 node(s) were not ready.

Normal Scheduled 3m default-scheduler Successfully assigned kubernetes-bootcamp-5c69669756-88xfl to minikube

Normal SuccessfulMountVolume 3m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-lmz7z"

Normal Pulled 3m kubelet, minikube Container image "gcr.io/google-samples/kubernetes-bootcamp:v1" already present on machine

Normal Created 3m kubelet, minikube Created container

Normal Started 3m kubelet, minikube Started container

Show the app in the terminal

Recall that Pods are running in an isolated, private network - so we need to proxy access to them so we can debug and interact with them. To do this, we'll use the kubectl proxy command to run a proxy in a second terminal window. Open a new terminal window and run the proxy:

$ kubectl proxy

Starting to serve on 127.0.0.1:8001

Now again, we'll get the Pod name and query that pod directly through the proxy. To get the Pod name and store it in the POD_NAME environment variable:

$ export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')

$ echo Name of the Pod: $POD_NAME

Name of the Pod: kubernetes-bootcamp-5c69669756-88xfl

To see the output of our application, run a curl request.

$ curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/

Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-5c69669756-88xfl | v=1

The URL is the route to the Pod's API.

View the container logs

Anything that the application would normally send to STDOUT becomes logs for the container within the Pod. We can retrieve these logs using the kubectl logs command:

$ kubectl logs $POD_NAME

Kubernetes Bootcamp App Started At: 2018-07-29T16:13:41.212Z | Running On: kubernetes-bootcamp-5c69669756-88xfl

Running On: kubernetes-bootcamp-5c69669756-88xfl | Total Requests: 1 | App Uptime: 452.378 seconds | Log Time: 2018-07-29T16:21:13.590Z

Note: We don’t need to specify the container name, because we only have one container inside the pod.
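Two related kubectl logs options are worth knowing here (a brief sketch; the container name comes from the describe output above): -f streams the log output, and -c selects a specific container when a Pod has more than one.

$ kubectl logs -f $POD_NAME                       # follow the log stream (Ctrl-C to stop)
$ kubectl logs $POD_NAME -c kubernetes-bootcamp   # explicitly name the container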

Executing commands on the container

We can execute commands directly on the container once the Pod is up and running. For this, we use the exec command and use the name of the Pod as a parameter. Let’s list the environment variables:

$ kubectl exec $POD_NAME env

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

HOSTNAME=kubernetes-bootcamp-5c69669756-88xfl

KUBERNETES_PORT=tcp://10.96.0.1:443

KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443

KUBERNETES_PORT_443_TCP_PROTO=tcp

KUBERNETES_PORT_443_TCP_PORT=443

KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1

KUBERNETES_SERVICE_HOST=10.96.0.1

KUBERNETES_SERVICE_PORT=443

KUBERNETES_SERVICE_PORT_HTTPS=443

NPM_CONFIG_LOGLEVEL=info

NODE_VERSION=6.3.1

HOME=/root

Again, it's worth mentioning that the name of the container itself can be omitted since we only have a single container in the Pod.

Next let’s start a bash session in the Pod’s container:

$ kubectl exec -ti $POD_NAME bash

root@kubernetes-bootcamp-5c69669756-88xfl:/#

We now have an open console on the container where our NodeJS application runs. The source code of the app is in the server.js file:

root@kubernetes-bootcamp-5c69669756-88xfl:/# cat server.js

var http = require('http');

var requests=0;

var podname= process.env.HOSTNAME;

var startTime;

var host;

var handleRequest = function(request, response) {

response.setHeader('Content-Type', 'text/plain');

response.writeHead(200);

response.write("Hello Kubernetes bootcamp! | Running on: ");

response.write(host);

response.end(" | v=1\n");

console.log("Running On:" ,host, "| Total Requests:", ++requests,"| App Uptime:", (new Date() - startTime)/1000 , "seconds", "| Log Time:",new Date());

}

var www = http.createServer(handleRequest);

www.listen(8080,function () {

startTime = new Date();;

host = process.env.HOSTNAME;

console.log ("Kubernetes Bootcamp App Started At:",startTime, "| Running On: " ,host, "\n" );

});

You can check that the application is up by running a curl command:

root@kubernetes-bootcamp-5c69669756-88xfl:/# curl localhost:8080

Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-5c69669756-88xfl | v=1

Note: here we used localhost because we executed the command inside the NodeJS container.

To close your container connection, type exit:

root@kubernetes-bootcamp-5c69669756-88xfl:/# exit

exit

$
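The Objectives list also mentions updating the application image. A brief sketch of how that can be done with kubectl set image, followed by optional cleanup of the exercise (the v2 image tag below is the one used by the upstream Kubernetes Basics tutorial and is an assumption here, not something defined in these notes):

$ kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2   # assumed v2 image tag
$ kubectl rollout status deployments/kubernetes-bootcamp   # wait for the rolling update to finish
$ kubectl delete deployment kubernetes-bootcamp            # remove the exercise deployment
$ minikube stop                                             # shut down the local VM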
