Docker & Kubernetes: Multi-node Local Kubernetes cluster - Kubeadm-kind (K8s in Docker)
This post is based on https://kind.sigs.k8s.io/.
kind (Kubernetes IN Docker) is a tool for running local Kubernetes clusters using Docker containers as "nodes". It is primarily designed for testing Kubernetes 1.11+, and we can use it to create multi-node or multi-control-plane Kubernetes clusters.
For more on Kubernetes solutions: https://kubernetes.io/docs/setup/pick-right-solution/#custom-solutions
If we have Go and Docker installed, we can install kind with:
$ go get -u sigs.k8s.io/kind
This will put kind in $(go env GOPATH)/bin. We may need to add that directory to our $PATH as described at https://golang.org/doc/code.html#GOPATH, or do the following:
$ export PATH=$PATH:$(go env GOPATH)/bin
$ export GOPATH=$(go env GOPATH)
The GOPATH environment variable specifies the location of our workspace. It defaults to a directory named go inside our home directory.
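We can sanity-check the installation; a minimal session:

$ which kind    # should resolve to $(go env GOPATH)/bin/kind
$ kind version  # prints the installed kind release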
We can create a Kubernetes cluster using kind create cluster (this may take 4-5 mins):
$ kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.13.4) 🖼
 ✓ Preparing nodes 📦
 ✓ Creating kubeadm config 📜
 ✓ Starting control-plane 🕹️
Cluster creation complete. You can now use the cluster with:

export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info
This will bootstrap a Kubernetes cluster using a pre-built node image - we can find it on Docker Hub as kindest/node.
After creating a cluster, we can use kubectl to interact with it by using the configuration file generated by kind:
$ export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
The kind get kubeconfig-path command returns the location of the generated configuration file. In my case, it is:
$ echo $KUBECONFIG
/Users/kihyuckhong/.kube/kind-config-kind
The config file looks like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t...
    server: https://localhost:64708
  name: kind
contexts:
- context:
    cluster: kind
    user: kubernetes-admin
  name: kubernetes-admin@kind
current-context: kubernetes-admin@kind
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0...
    client-key-data: LS0...
By default, the cluster will be given the name kind. We may want to use the --name flag to assign the cluster a different context name.
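For example, a hypothetical session with a second cluster named kind-2 (the context follows the same kubernetes-admin@<cluster-name> pattern):

$ kind create cluster --name kind-2
$ export KUBECONFIG="$(kind get kubeconfig-path --name="kind-2")"
$ kubectl config current-context
kubernetes-admin@kind-2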
Let's check what the cluster looks like:
$ kubectl config get-contexts
CURRENT   NAME                    CLUSTER   AUTHINFO           NAMESPACE
*         kubernetes-admin@kind   kind      kubernetes-admin

$ kubectl cluster-info
Kubernetes master is running at https://localhost:56897
KubeDNS is running at https://localhost:56897/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
To see all the clusters we have created, we can use the get clusters command:
$ kind get clusters
kind
The cluster will have a kubeconfig file to go along with it:
$ kind get kubeconfig-path
/Users/kihyuckhong/.kube/kind-config-kind
$ kubectl get nodes -o wide
NAME                 STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
kind-control-plane   Ready    master   23m   v1.13.4   172.17.0.2    <none>        Ubuntu 18.04.1 LTS   4.9.125-linuxkit   docker://18.6.3

$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   23m
We can see a container for the control plane is running:
$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED             STATUS             PORTS                                  NAMES
88879275d8c2   kindest/node:v1.13.4   "/usr/local/bin/entr…"   About an hour ago   Up About an hour   56897/tcp, 127.0.0.1:56897->6443/tcp   kind-control-plane
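Since each "node" is just a Docker container running systemd (the OS-IMAGE column above shows Ubuntu 18.04), we can open a shell inside it; a minimal sketch:

$ docker exec -it kind-control-plane bash             # root shell inside the node container
root@kind-control-plane:/# systemctl status kubelet   # kubelet runs as a systemd unit inside the node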
When creating our kind cluster via create cluster, we can use a configuration file to run specific commands before or after systemd or kubeadm run. To specify a configuration file when creating a cluster, use the --config flag.
For a sample kind configuration file see kind-example-config.
We're mostly interested in multi-node clusters. A simple configuration for this can be achieved with the following config file, saved here as ~/.kube/kind_worker:
# three node (two workers) cluster config
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
- role: worker
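kind can also create the multi-control-plane clusters mentioned at the top of this post. A sketch of such a config, assuming the same v1alpha3 schema accepts repeated control-plane roles:

$ cat > ~/.kube/kind_ha <<EOF
# three control-plane nodes, two workers
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
EOF
$ kind create cluster --config ~/.kube/kind_ha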
Let's recreate the cluster with two worker nodes. We need to delete the existing cluster and create a new one using the --config flag:
$ unset KUBECONFIG
$ kind delete cluster

$ export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"

$ kind create cluster --config kind_worker
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.13.4) 🖼
 ✓ Preparing nodes 📦📦📦
 ✓ Creating kubeadm config 📜
 ✓ Starting control-plane 🕹️
 ✓ Joining worker nodes 🚜
Cluster creation complete. You can now use the cluster with:

export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info

$ echo $KUBECONFIG
/Users/kihyuckhong/.kube/kind-config-kind

$ kubectl cluster-info
Kubernetes master is running at https://localhost:53350
KubeDNS is running at https://localhost:53350/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ kubectl config get-contexts
CURRENT   NAME                    CLUSTER   AUTHINFO           NAMESPACE
*         kubernetes-admin@kind   kind      kubernetes-admin

$ kubectl get nodes
NAME                 STATUS   ROLES    AGE     VERSION
kind-control-plane   Ready    master   4m51s   v1.13.4
kind-worker          Ready    <none>   4m35s   v1.13.4
kind-worker2         Ready    <none>   4m34s   v1.13.4
Now we have two worker nodes and one control-plane node!
Which namespaces do we have? We can check with kubens:
$ kubens
default
kube-public
kube-system
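If kubens is not installed, plain kubectl gives the same list:

$ kubectl get namespaces   # default, kube-public, kube-system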
Next, let's deploy a Flask app to the cluster. Here are the files we need:
Dockerfile:
FROM alpine
RUN apk add --no-cache python3 && \
    python3 -m ensurepip && \
    rm -r /usr/lib/python*/ensurepip && \
    pip3 install --upgrade pip setuptools && \
    rm -r /root/.cache
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
ENTRYPOINT [ "python3" ]
CMD [ "app.py" ]
app.py:
# app.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def blog():
    return "Flask in kind Kubernetes cluster"

if __name__ == '__main__':
    app.run(threaded=True, host='0.0.0.0', port=8787)
requirements.txt:
Flask==0.10.1
Run the container locally first, not in the cluster yet:
$ docker build -t k8s-flask:latest .

$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
k8s-flask    latest   51227513e1ef   2 minutes ago   60.6MB

$ docker run -d -p 5000:8787 k8s-flask:latest
5ca7b4861798fdc86886f5f86e43180028b44e9ec07186fadae21f94dc785a52

$ docker ps
CONTAINER ID   IMAGE              COMMAND            CREATED          STATUS          PORTS                    NAMES
5ca7b4861798   k8s-flask:latest   "python3 app.py"   12 seconds ago   Up 12 seconds   0.0.0.0:5000->8787/tcp   recursing_clarke
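Assuming the container came up cleanly, the app should answer on the mapped host port:

$ curl http://localhost:5000
Flask in kind Kubernetes cluster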
First, we need to check if the cluster is up:
$ kind get clusters
kind

$ kubectl config get-contexts
CURRENT   NAME                    CLUSTER   AUTHINFO           NAMESPACE
*         kubernetes-admin@kind   kind      kubernetes-admin
Let's use "v" but not "latest" for our image, build and push it to DockerHub:
$ docker build -t dockerbogo/k8s-flask:v1 .
$ docker push dockerbogo/k8s-flask:v1
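As an aside, newer kind releases can sideload a locally built image directly into the cluster nodes, skipping the registry push (the subcommand may not exist in the old kind release used here):

$ kind load docker-image dockerbogo/k8s-flask:v1   # copies the image into every kind node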
Here is the manifest file (flask_deployment.yaml):
apiVersion: v1
kind: Service
metadata:
  name: flask-app-service
spec:
  ports:
  - targetPort: 8787
    nodePort: 30087
    port: 80
  selector:
    app: flask-app
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-deployment
  labels:
    app: flask
spec:
  replicas: 3
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
    spec:
      containers:
      - name: flask-app-container
        image: dockerbogo/k8s-flask:v1
        ports:
        - containerPort: 8787
$ kubectl apply -f flask_deployment.yaml
service/flask-app-service created
deployment.apps/flask-deployment created

$ kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP          NODE           NOMINATED NODE   READINESS GATES
flask-deployment-5b44f997c-fbvbq   1/1     Running   0          18m   10.38.0.1   kind-worker    <none>           <none>
flask-deployment-5b44f997c-ghvv7   1/1     Running   0          18m   10.38.0.2   kind-worker    <none>           <none>
flask-deployment-5b44f997c-ngp4h   1/1     Running   0          18m   10.32.0.4   kind-worker2   <none>           <none>

$ kubectl get nodes -o wide
NAME                 STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
kind-control-plane   Ready    master   13m   v1.13.4   172.17.0.4    <none>        Ubuntu 18.04.1 LTS   4.9.125-linuxkit   docker://18.6.3
kind-worker          Ready    <none>   13m   v1.13.4   172.17.0.2    <none>        Ubuntu 18.04.1 LTS   4.9.125-linuxkit   docker://18.6.3
kind-worker2         Ready    <none>   13m   v1.13.4   172.17.0.3    <none>        Ubuntu 18.04.1 LTS   4.9.125-linuxkit   docker://18.6.3

$ kubectl get svc
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
flask-app-service   NodePort    10.96.244.120   <none>        80:30087/TCP   112s
kubernetes          ClusterIP   10.96.0.1       <none>        443/TCP        14m

$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS                                  NAMES
d9a206e84648   kindest/node:v1.13.4   "/usr/local/bin/entr…"   19 minutes ago   Up 19 minutes   53350/tcp, 127.0.0.1:53350->6443/tcp   kind-control-plane
9e9204297170   kindest/node:v1.13.4   "/usr/local/bin/entr…"   19 minutes ago   Up 19 minutes                                          kind-worker
3c9bd9e28ce3   kindest/node:v1.13.4   "/usr/local/bin/entr…"   19 minutes ago   Up 19 minutes                                          kind-worker2
Currently, we're stuck at this point. To reach the app through the NodePort service we need Node-IP:nodePort, but how do we get a node IP that's reachable from the host? The nodes' internal IPs (172.17.0.x) are on the Docker bridge network, which (at least on Docker for Mac) is not routable from the host.
We can reach the pods from inside the cluster, but that's it:
$ kubectl run busybox -it --image=busybox --restart=Never --rm
If you don't see a command prompt, try pressing enter.
/ # ping 10-32-0-4.default.pod.cluster.local
PING 10-32-0-4.default.pod.cluster.local (10.32.0.4): 56 data bytes
64 bytes from 10.32.0.4: seq=0 ttl=64 time=3.982 ms
64 bytes from 10.32.0.4: seq=1 ttl=64 time=0.349 ms
64 bytes from 10.32.0.4: seq=2 ttl=64 time=0.319 ms
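From the same busybox pod we can also reach the app through the service's cluster DNS name (the service listens on port 80 per the manifest); a sketch:

/ # wget -qO- http://flask-app-service.default.svc.cluster.local
Flask in kind Kubernetes cluster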
We can list pods that are running the app:
$ kubectl get pods --selector="app=flask-app"
NAME                               READY   STATUS    RESTARTS   AGE
flask-deployment-5b44f997c-c76cb   1/1     Running   0          8m25s
flask-deployment-5b44f997c-rrh76   1/1     Running   0          8m25s
flask-deployment-5b44f997c-rvkf8   1/1     Running   0          8m25s
At this point, the only way to reach the app from the host is "port-forwarding":
$ kubectl port-forward flask-deployment-5b44f997c-c76cb 8787:8787
Forwarding from 127.0.0.1:8787 -> 8787
Forwarding from [::1]:8787 -> 8787
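With the forward in place, the app should answer locally (assuming a healthy pod):

$ curl http://127.0.0.1:8787
Flask in kind Kubernetes cluster

As an aside, newer kind releases make a NodePort service reachable without port-forwarding: the cluster config can map a node port to a host port. A sketch using the v1alpha4 schema of modern kind (it may not be available in the old v1alpha3 schema used above):

$ cat > kind_portmap <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30087   # the nodePort from flask-app-service
    hostPort: 30087
- role: worker
- role: worker
EOF
$ kind create cluster --config kind_portmap
# after deploying the app, the service answers at http://localhost:30087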
To delete a cluster with kind:
$ unset KUBECONFIG
$ kind delete cluster
If the --name flag is not specified, kind will use the default cluster context name kind and delete that cluster.
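If the cluster was created with a custom name, pass the same name to delete it (kind-2 is just the illustrative name from earlier):

$ kind delete cluster --name kind-2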