Kubernetes Instructions and commands


In addition to the CodeRecipes cheatsheet, we've created this markdown instructions doc.

Kubernetes

DaemonSet (a pod that runs on every worker node)

https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
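
As a sketch, a minimal DaemonSet manifest might look like this (the name, labels, and image are illustrative, not from these notes):

```yaml
# Hypothetical log-collecting agent that runs on every worker node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent            # illustrative name
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent       # must match the selector above
    spec:
      containers:
        - name: agent
          image: fluent/fluentd:v1.16   # example image; any per-node agent works
```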

Termination of pods

Via terminationGracePeriodSeconds: the pod gets a grace period (30 seconds by default) to shut down gracefully after receiving a SIGTERM signal. Once the grace period expires, the containers are sent SIGKILL.
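
The grace period can be tuned per pod in the manifest; a sketch with illustrative name, image, and value:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-pod          # illustrative name
spec:
  terminationGracePeriodSeconds: 60   # SIGTERM, then SIGKILL after 60s instead of the default 30s
  containers:
    - name: app
      image: nginx:1.25       # example image
```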

Underneath

Written in Go. The name comes from the Greek for "helmsman" (hence the ship's-wheel logo). Source: github.com/kubernetes

Powers

Data Center as a Computer.
Container + Manifest => Kubernetes takes care of the rest.

Architecture

K8S Cluster = [Master] + [Nodes]
Masters and Minions (Nodes)
Orchestrator for microservice containers
Each container runs inside a POD
Kubernetes connects all PODS across Nodes (networking)
Deployment => .yaml (manifest)
We give a .yaml to the k8s master; it reads the file and deploys the app in the cluster

Declarative Model

We give the API server a manifest file (YAML) with the desired state. No imperative commands are given.
Eg. "use image x and always keep 5 instances running".
k8s continuously tries to make the actual state match the desired state.
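
The "image x, 5 instances" example above could be expressed roughly as the following manifest (names are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: x-deployment        # illustrative name
spec:
  replicas: 5               # desired state: always 5 instances running
  selector:
    matchLabels:
      app: x
  template:
    metadata:
      labels:
        app: x
    spec:
      containers:
        - name: x
          image: x:latest   # "image x" from the example above
```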

Masters

                        [Master]

[Node Minion][Node Minion][Node Minion][Node Minion][Node Minion]

Control Plane
No Pods run on the master.
Single master.
The master is only for controlling and looking after the cluster.

kube-apiserver — THE MOST IMPORTANT COMPONENT FOR COMMUNICATING WITH K8S
Frontend app for the control plane (:443)
RESTful API, consumes JSON (via manifest files)
Sometimes called the "master"; commands are issued to the api server:
kubectl ----> api-server

kube-controller-manager (controller of controllers)
Persistent storage via etcd (key-value store): cluster state and config
! the etcd store needs to be kept safe (back it up)

kube-scheduler (watches apiserver for new pods and assigns work to nodes)

Nodes (Minions)

A node is a worker [Kubelet][Container Engine][kube-proxy]

Node1 10.0.0.91 Node2 10.0.0.15 Node3 10.0.0.19

Kubelet

The main agent on the node. Registers the node in the cluster. Watches the apiserver for new work. Reports state back to the master.

Instantiates PODS

Exposes port 10255 with endpoints: /spec, /healthz, /pods

Container Engine

Docker.
Pulls images.
Starts and stops containers.

Kube-Proxy

k8s networking: POD IP addresses.
All containers in a pod share a single IP.
Load balances across all pods in a service.

PODs

k8s POD = VM = Docker Container
Containers run inside PODS.
A pod is like cattle: nobody names them and they will die.
A pod is a sandbox around a container.
All containers in a pod share the same environment (ie localhost).
Scaling = adding and removing PODS (aka pod replicas).
A pod has only 1 state: it's either UP or DOWN.
When a POD dies, a new POD comes up with a different IP (before 10.0.0.33, now 10.0.0.44).
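
A sketch of two containers sharing one pod, and therefore localhost (names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar    # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25     # example image, listens on :80
    - name: sidecar
      image: busybox:1.36   # example image; can reach the web container at localhost:80
      command: ["sh", "-c", "sleep 3600"]
```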

Services (load balancer)

Stable networking: a way of giving stable, static IPs to unstable PODS.
A service has a YAML manifest and provides an IP and DNS name for a group of PODS.
Load balancing happens via labels:
[service] LABELX
[pod] LABELX
[pod] LABELX
[pod] LABELY <-- load balancing won't happen on this one!!

Deployments

Create a file, send it to the apiserver on master. Then pods and replica sets will be magically created.

YAML file:
kind: Deployment
metadata:
  name: xyz
spec:
  replicas: 4

Automation with Replica Sets Rolling updates and rollbacks (B/G or Canary Releases)

Installation

Minikube (k8s cluster in a development environment)
Google Container Engine (GKE)
AWS Provider
Manual Install

MiniKube

The same as Docker for Mac

VM: minikube (Node + Master in one), running on localhost

Installation (1.8)

brew update
brew install kubectl
kubectl version --client
brew cask install minikube
brew install docker-machine-driver-xhyve
sudo chown root:wheel /usr/local/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
sudo chmod u+s /usr/local/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
minikube start --vm-driver=xhyve
// Kubectl is now configured to use the cluster.
// Kubectl can speak to the minikube local cluster or any other cluster in the cloud.

kubectl config current-context
kubectl get nodes

minikube stop
minikube delete

minikube start --vm-driver=xhyve --kubernetes-version="v1.6.0"
kubectl get nodes
minikube dashboard
// http://192.168.64.3:30000/#!/overview?namespace=default

AWS with Kops

Requirements:

  • kubectl
  • kops
  • AWS CLI

IAM Account in AWS with the following policies:

  • AmazonEC2FullAccess
  • AmazonRoute53FullAccess
  • AmazonS3FullAccess
  • IAMFullAccess
  • AmazonVPCFullAccess

AWS User: claudio-k8s (access key ID and secret access key redacted)

GoDaddy Subdomain -> Delegate DNS NS management to awsdns

Create a Route53 DNS Zone

https://console.aws.amazon.com/route53/
dig ns k8s.claudioteixeira.com

Install CLI Software

Kops 1.7

brew update && brew install kops
kops version

Kubectl

kubectl version --client

AWS CLI

brew install awscli
aws configure

// Create a bucket
aws s3 mb s3://cluster1.k8s.claudioteixeira.com
aws s3 ls

// Store the kops config location in a variable
export KOPS_STATE_STORE=s3://cluster1.k8s.claudioteixeira.com

kops create cluster \
  --cloud=aws \
  --zones=eu-west-1b \
  --dns-zone=k8s.claudioteixeira.com \
  --name=cluster1.k8s.claudioteixeira.com \
  --yes

kops validate cluster

PODS

Hypervisor virtualization -> VM
Docker -> Container
K8s -> Pods

A Pod has a container inside.

A Pod has a manifest:
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
    - name: hello-ctr
      image:
      ports:
        - containerPort: 8080

The manifest is fed into the master apiserver. The master's scheduler then deploys the pod on a node.

A Pod gets a single IP

POD1 (containerPort 23): 10.0.10.15:23 POD2 (containerPort 80): 10.0.10.17:80

Inter-pod Communication

POD NETWORK (10.0.10.X) POD1 POD2 POD3 POD4

Pods can talk with each other

A Pod is mortal: when it dies, it gets replaced.

Lifecycle: pod.yml ---> apiserver ---> (pending) ---> running ---> succeeded

A Pod is scheduled on a Node (minion)

Pods

Create a pod (not recommended):
kubectl create -f hello_pod.yml
kubectl get pods
kubectl describe pods
kubectl get pods/hello-pod
kubectl delete pods hello-pod

Replication Controller

Create a replication controller:
kubectl create -f rc.yml
kubectl get rc
Note: desired will match current.
kubectl describe rc
Update the rc.yml (eg. change the replicas number):
kubectl apply -f rc.yml
Check the status of the rc:
kubectl get rc -o wide
Update the replication controller:
kubectl rolling-update -f rc-v2.yml
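
The rc.yml used above isn't reproduced in these notes; a minimal ReplicationController sketch might look like this (names, labels, and image are assumptions):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello-rc
spec:
  replicas: 3               # "desired" — kubectl get rc shows desired vs current
  selector:
    app: hello
  template:
    metadata:
      labels:
        app: hello          # must match the selector
    spec:
      containers:
        - name: hello-ctr
          image: hello:1.0  # illustrative image; bump the tag in rc-v2.yml for a rolling-update
```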

Services (expose and consume from pods)

  1. Access the app from outside the cluster (eg. web service)
  2. Access the app from inside the cluster (eg. another app consuming)

A service is a REST object. A service is an abstraction.

A service provides a stable (immutable) IP, DNS name, and port.

Pods have labels. Service balances over Pods based on label selecting.
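
Label selecting can be sketched as a pair of manifests (names and labels are illustrative):

```yaml
# Service: balances over every pod whose labels match the selector.
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: labelx             # pods labeled app=labelx receive traffic
  ports:
    - port: 3000
      targetPort: 3000
---
# Pod: carries the matching label.
apiVersion: v1
kind: Pod
metadata:
  name: pod-x
  labels:
    app: labelx             # matches the service selector above
spec:
  containers:
    - name: app
      image: app:1.0        # illustrative image
```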

[Client] ------???----- [Replication Controller [POD] [POD] [POD] ]

WE DO NOT CONNECT TO POD IPS!!

[Client] ------ >> Service :) >> ----- [Replication Controller [POD] [POD] [POD] ]

Example: [SERVICE] IP: 10.0.10.98 DNS: myservice port: 3000

[NODE[POD]:3000] [NODE[POD]:3000] [NODE[POD]:3000]

Exposing the PODS Imperatively

Master public IP: 52.212.37.85
Create and expose the service:
kubectl expose rc hello-rc --name=hello-svc --target-port=8080 --type=NodePort
kubectl describe svc hello-svc

NODE1: http://34.240.220.118:32304/ NODE2: http://34.252.31.68:32304/

Exposing the PODS Declaratively

kubectl delete svc hello-svc
View all app names in pods:
kubectl describe pods | grep app
Cross-check that the app name matches the pod app name:
cat service.yml
Create the service:
kubectl create -f service.yml
curl -i http://34.240.220.118:30001
NODE1: http://34.240.220.118:30001/
NODE2: http://34.252.31.68:30001/
kubectl get svc
kubectl describe ep hello-svc
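
The service.yml referenced above isn't shown in these notes; given the node port 30001 and target port 8080 used here, it plausibly looked something like this (the selector label is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  type: NodePort
  selector:
    app: hello              # assumed label; must match the pods' app label
  ports:
    - port: 8080
      targetPort: 8080      # containerPort from the pod examples above
      nodePort: 30001       # the port curled on each node
```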

Stream logs:
kubectl logs -f hello-rc-5744r

Attach to the POD (similar to SSH'ing into it):
kubectl attach hello-rc-5744r -i

Run a command:
kubectl exec hello-rc-5744r -- ls /

Restart the POD (delete it; the rc replaces it):
kubectl delete pod hello-rc-5744r

Revert Versions

Add new pods with replication controllers.
Change labels on services.

Deployments (Updates and Rollbacks)

Will create Replica Sets which will create PODS. (Replica Sets are enhanced Replication Controllers)

deployment.yml ---> api server ---> deployed to the cluster
kubectl create -f deploy.yml
Get the replica sets:
kubectl get rs
kubectl describe deploy hello-deploy
kubectl describe rs
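
The deploy.yml itself isn't reproduced here; combining the fragment from the Deployments section above with the hello-deploy name used in the commands, it might look like this (label and image are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy        # name used by the kubectl commands here
spec:
  replicas: 4
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello-ctr
          image: hello:1.0  # illustrative image; update this tag to trigger a rollout
```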

Update

Just change the original deploy.yml and push it to the api server (eg. update the image tag). The master will create a new replica set and deprecate the old one.
kubectl apply -f deploy.yml --record
kubectl rollout status deployment hello-deploy
kubectl rollout history deployment hello-deploy
kubectl get rs
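
The pace of such a rolling update can be controlled with strategy fields in the Deployment spec; a sketch of the relevant fragment (values are illustrative):

```yaml
spec:
  minReadySeconds: 10         # wait 10s after a pod is Ready before continuing
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1       # at most 1 pod below the desired replica count during the update
      maxSurge: 1             # at most 1 pod above the desired replica count
```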

Rollback

kubectl rollout undo deployment hello-deploy --to-revision=1
kubectl rollout status deployment hello-deploy

Created on 2/25/2019