Kubernetes Q and A - Part I
- Describe the steps from packing container images to running containers.
To run an application in Kubernetes, we first need to package it up into one or more container images, push those images to an image registry, and then post a description of our app to the Kubernetes API server.
The description includes information such as the container image or images that contain our application components, how those components are related to each other, and which ones need to be run co-located (together on the same node) and which don’t.
For each component, we can also specify how many replicas we want to run. The description also includes which of those components provide a service to either internal or external clients and should be exposed through a single IP address and made discoverable to the other components.
When the API server processes our app's description, the Scheduler schedules the specified groups of containers onto the available worker nodes based on computational resources required by each group and the unallocated resources on each node at that moment.
The Kubelet on those nodes then instructs the Container Runtime (Docker, rkt) to pull the required container images and run the containers.
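As a rough sketch of such a description (using a Deployment, a standard Kubernetes resource for running replicated pods, and the dockerbogo/bogo image used throughout this post; the exact manifest is illustrative), we might post something like the following to the API server with kubectl apply -f:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: bogo
spec:
  replicas: 3                  # how many replicas of this component we want
  selector:
    matchLabels:
      app: bogo
  template:
    metadata:
      labels:
        app: bogo
    spec:
      containers:
      - name: bogo
        image: dockerbogo/bogo   # the container image packaged and pushed earlier
        ports:
        - containerPort: 8080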
- Why do we even need pods? Why can't we use containers directly?
Containers are designed to run only a single process per container (unless the process itself spawns child processes). If we run multiple unrelated processes in a single container, it is our responsibility to keep all those processes running, manage their logs, and so on.
For example, we'd have to include a mechanism for automatically restarting individual processes if they crash. Also, all those processes would log to the same standard output, so we'd have a hard time figuring out what process logged what.
Therefore, we need to run each process in its own container. That's how Docker and Kubernetes are meant to be used.
All containers of a pod run under the same Network namespace (so they share network interfaces and hence the same IP address and port space) and the same UTS (UNIX Time-Sharing) namespace (so they share the same hostname).
Because containers in a pod run in the same Network namespace, processes running in containers of the same pod need to take care not to bind to the same port numbers, or they'll run into port conflicts.
All the containers in a pod also share the same loopback network interface, so a container can communicate with other containers in the same pod through localhost.
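A minimal sketch of a two-container pod that relies on this shared network namespace (the sidecar image and its command are only for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: bogo-with-sidecar
spec:
  containers:
  - name: bogo
    image: dockerbogo/bogo
    ports:
    - containerPort: 8080
  - name: sidecar
    image: busybox
    # reaches the bogo container over the pod's shared loopback interface
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:8080; sleep 10; done"]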
- Create a simple YAML descriptor for a pod and then create a pod.
Here is a pod descriptor file bogo-manual.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: bogo-manual
spec:
  containers:
  - image: dockerbogo/bogo
    name: bogo
    ports:
    - containerPort: 8080
      protocol: TCP
It conforms to the v1 version of the Kubernetes API. The type of resource we're describing is a pod, with the name bogo-manual. The pod consists of a single container based on the dockerbogo/bogo image. The container is given a name and it's listening on port 8080.
We can use the kubectl explain pods command to get descriptions of the pod resource and its fields:

$ kubectl explain pods
KIND:     Pod
VERSION:  v1
DESCRIPTION:
    Pod is a collection of containers that can run on a host. This resource is
    created by clients and scheduled onto hosts.
FIELDS:
  apiVersion <string>
    APIVersion defines the versioned schema of this representation of an object.
    Servers should convert recognized schemas to the latest internal value, and
    may reject unrecognized values. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
  kind <string>
    Kind is a string value representing the REST resource this object represents.
    Servers may infer this from the endpoint the client submits requests to.
    Cannot be updated. In CamelCase. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
  metadata <Object>
    Standard object's metadata. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
  spec <Object>
    Specification of the desired behavior of the pod. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
  status <Object>
    Most recently observed status of the pod. This data may not be up to date.
    Populated by the system. Read-only. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
We can then drill deeper to find out more about each attribute, for example, the pod.spec attribute, with the kubectl explain pod.spec command:
$ kubectl explain pod.spec
KIND:     Pod
VERSION:  v1
RESOURCE: spec <Object>
DESCRIPTION:
    Specification of the desired behavior of the pod. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
    PodSpec is a description of a pod.
...
To create the pod from our YAML file, we use the kubectl create command:

$ kubectl create -f bogo-manual.yaml
pod/bogo-manual created

$ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
bogo-manual   1/1     Running   0          1m
After creating the pod, we can ask Kubernetes for the full YAML of the pod. We'll see it's similar to the YAML we saw earlier, but additional fields appear in the returned definition. Go ahead and use the following command to see the full descriptor of the pod:
$ kubectl get pod bogo-manual -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-03-20T21:10:38Z"
  managedFields:
  ...
To get JSON instead of YAML, we can use kubectl get po bogo-manual -o json.
- How can we talk to a specific pod without going through a service?
We can use kubectl port-forward, which runs a proxy on a local port (localhost:8888 in the example below).
The pod is now running:
$ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
bogo-manual   1/1     Running   0          16m
But how can we see it in action?
We could use the kubectl expose command to create a service and gain access to the pod externally, but there are other ways of connecting to a pod for testing and debugging purposes. One of them is the kubectl port-forward command. The following command forwards our machine's local port 8888 to port 8080 of our bogo-manual pod:
$ kubectl port-forward bogo-manual 8888:8080
Forwarding from 127.0.0.1:8888 -> 8080
Forwarding from [::1]:8888 -> 8080
The port forwarder is running and we can now connect to our pod through the local port.
In a different terminal, we can now use curl to send an HTTP request to our pod through the kubectl port-forward proxy running on localhost:8888:

$ curl localhost:8888
You've hit bogo-manual
Using port forwarding like this is an effective way to test an individual pod.
- Create/delete a pod with labels.
We want to create a new pod with two labels using bogo-manual-with-labels.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: bogo-manual-v2
  labels:
    creation_method: manual
    env: prod
spec:
  containers:
  - image: dockerbogo/bogo
    name: bogo
    ports:
    - containerPort: 8080
      protocol: TCP
We've included the labels creation_method=manual and env=prod in the metadata.labels section. Let's create it:
$ kubectl create -f bogo-manual-with-labels.yaml
pod/bogo-manual-v2 created

$ kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
bogo-manual      1/1     Running   0          148m
bogo-manual-v2   1/1     Running   0          11m
Instead of listing all labels, if we're only interested in certain labels, we can specify them with the -L switch and have each displayed in its own column. List pods again and show the columns for the two labels we attached to our bogo-manual-v2 pod:
$ kubectl get pods -L creation_method,env
NAME             READY   STATUS    RESTARTS   AGE    CREATION_METHOD   ENV
bogo-manual      1/1     Running   0          153m
bogo-manual-v2   1/1     Running   0          16m    manual            prod
To list pods using a label selector:
$ kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
bogo-manual      1/1     Running   0          3h21m
bogo-manual-v2   1/1     Running   0          64m

$ kubectl get pod -l creation_method=manual
NAME             READY   STATUS    RESTARTS   AGE
bogo-manual-v2   1/1     Running   0          64m
To list all pods that include the env label, whatever its value is:
$ kubectl get pod -l env
NAME             READY   STATUS    RESTARTS   AGE
bogo-manual-v2   1/1     Running   0          67m
To list pods that don't have the env label:
$ kubectl get pod -l '!env'
NAME          READY   STATUS    RESTARTS   AGE
bogo-manual   1/1     Running   0          3h28m

$ kubectl get pod --show-labels
NAME             READY   STATUS    RESTARTS   AGE     LABELS
bogo-manual      1/1     Running   0          6h8m    <none>
bogo-manual-v2   1/1     Running   0          3h51m   creation_method=manual,env=prod

$ kubectl delete pod -l env=prod
pod "bogo-manual-v2" deleted

$ kubectl get pod --show-labels
NAME          READY   STATUS    RESTARTS   AGE    LABELS
bogo-manual   1/1     Running   0          6h8m   <none>
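Labels can also be added to or changed on running pods, and label selectors support set-based expressions; a few illustrative commands (the env=debug value is made up for the example):

$ kubectl label pod bogo-manual creation_method=manual    # add a label to a running pod
$ kubectl label pod bogo-manual env=debug                 # add another label
$ kubectl label pod bogo-manual env=prod --overwrite      # changing an existing label requires --overwrite
$ kubectl get pod -l 'creation_method in (manual)'        # set-based selector
$ kubectl get pod -l 'env notin (prod,debug)'             # pods whose env label is neither prod nor debug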
- What are namespaces?
Using multiple namespaces allows us to split complex systems with numerous components into smaller distinct groups.
They can also be used for separating resources in a multi-tenant environment, splitting up resources into prod, dev, and QA environments.
To list all namespaces in the cluster:
$ kubectl get ns
NAME              STATUS   AGE
default           Active   20h
kube-node-lease   Active   20h
kube-public       Active   20h
kube-system       Active   20h
Up until now, we've operated only in the default namespace. When listing resources with the kubectl get command, we did not specify the namespace explicitly, so kubectl always defaulted to the default namespace, showing us only the objects in that namespace.
But as we can see from the list, the kube-public and the kube-system namespaces also exist.
To look at the pods that belong to the kube-system namespace, we need to tell kubectl to list pods in that namespace only:
$ kubectl get pods --namespace kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-9xcsw            1/1     Running   2          20h
etcd-minikube                      1/1     Running   1          4h13m
kube-apiserver-minikube            1/1     Running   1          4h13m
kube-controller-manager-minikube   1/1     Running   2          20h
kube-proxy-nmsrh                   1/1     Running   2          20h
kube-scheduler-minikube            1/1     Running   2          20h
storage-provisioner                1/1     Running   5          20h
Note that we can also use -n instead of --namespace.
Namespaces enable us to separate resources that don't belong together into non-overlapping groups. If several users or groups of users are using the same Kubernetes cluster, and they each manage their own distinct set of resources, they should each use their own namespace.
A namespace is a Kubernetes resource like any other, so we can create it by posting a YAML file to the Kubernetes API server.
Let's create a bogo-namespace.yaml file with the following:
apiVersion: v1
kind: Namespace
metadata:
  name: bogo-namespace
Now, let's use kubectl to post the file to the Kubernetes API server:
$ kubectl create -f bogo-namespace.yaml
namespace/bogo-namespace created

$ kubectl get ns
NAME              STATUS   AGE
bogo-namespace    Active   8s
default           Active   20h
kube-node-lease   Active   20h
kube-public       Active   20h
kube-system       Active   20h
We could also have created the namespace with the kubectl create namespace command, without a YAML file:

$ kubectl create namespace bogo-namespace
To create resources in the namespace we've created, either add a namespace: bogo-namespace entry to the metadata section, or specify the namespace when creating the resource with the kubectl create command:
$ kubectl create -f bogo-manual.yaml -n bogo-namespace
pod/bogo-manual created
We now have two pods with the same name (bogo-manual). One is in the default namespace, and the other is in our bogo-namespace:
$ kubectl get pods --all-namespaces
NAMESPACE        NAME                               READY   STATUS    RESTARTS   AGE
bogo-namespace   bogo-manual                        1/1     Running   0          95m
default          bogo-manual                        1/1     Running   0          5h51m
default          bogo-manual-v2                     1/1     Running   0          3h34m
kube-system      coredns-f9fd979d6-9xcsw            1/1     Running   2          22h
kube-system      etcd-minikube                      1/1     Running   1          6h16m
kube-system      kube-apiserver-minikube            1/1     Running   1          6h16m
kube-system      kube-controller-manager-minikube   1/1     Running   2          22h
kube-system      kube-proxy-nmsrh                   1/1     Running   2          22h
kube-system      kube-scheduler-minikube            1/1     Running   2          22h
kube-system      storage-provisioner                1/1     Running   5          22h
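To avoid passing -n/--namespace on every command, we can also switch the default namespace of the current kubectl context; a quick sketch (the namespace must already exist):

$ kubectl config set-context --current --namespace bogo-namespace   # make bogo-namespace the default
$ kubectl config view --minify | grep namespace                     # verify the active namespace
$ kubectl config set-context --current --namespace default          # switch back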
We no longer need either the pods in that namespace, or the namespace itself. We can delete the whole namespace (the pods will be deleted along with the namespace automatically):
$ kubectl delete ns bogo-namespace
namespace "bogo-namespace" deleted
Note: we should also know what namespaces don't provide, at least not out of the box.
Although namespaces allow us to isolate objects into distinct groups, they don't provide any kind of isolation of running objects.
For example, we may think that when different users deploy pods across different namespaces, those pods are isolated from each other and can't communicate, but that's not necessarily the case.
Whether namespaces provide network isolation depends on which networking solution is deployed with Kubernetes. When the solution doesn't provide inter-namespace network isolation, if a pod in namespace "foo" knows the IP address of a pod in namespace "bar", there is nothing preventing it from sending traffic, such as HTTP requests, to the other pod.
- What is ReplicationController?
One of the main benefits of using Kubernetes is that it keeps our containers running in the cluster.
But what if one of those containers dies? What if all containers of a pod die?
As soon as a pod is scheduled to a node, the Kubelet on that node will run its containers and keep them running as long as the pod exists. If the container's main process crashes, the Kubelet will restart the container.
A ReplicationController is a Kubernetes resource that ensures its pods are always kept running. If the pod disappears for any reason, the ReplicationController notices the missing pod and creates a replacement pod.
- ReplicaSet
Initially, ReplicationControllers were the only Kubernetes component for replicating pods and rescheduling them when nodes failed. Later, a similar resource called a ReplicaSet was introduced and it’s a new generation of ReplicationController.
We're going to create a ReplicaSet with the following yaml, bogo-replicaset.yaml:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: bogo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: bogo
  template:
    metadata:
      labels:
        app: bogo
    spec:
      containers:
      - name: bogo
        image: dockerbogo/bogo
The first thing to note is that ReplicaSets are not part of the v1 API, so we need to make sure we specify the proper apiVersion (apps/v1) when creating the resource.
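For reference (not used in this post's example), a ReplicaSet selector can also be written with the more expressive matchExpressions form instead of matchLabels; a minimal sketch of just the selector portion:

  selector:
    matchExpressions:
    - key: app          # label key to match
      operator: In      # other operators include NotIn, Exists, DoesNotExist
      values:
      - bogo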
We're creating a resource of type ReplicaSet:
$ kubectl create -f bogo-replicaset.yaml
replicaset.apps/bogo created

$ kubectl get rs
NAME   DESIRED   CURRENT   READY   AGE
bogo   3         3         3       85s

$ kubectl describe rs
Name:         bogo
Namespace:    default
Selector:     app=bogo
Labels:       <none>
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=bogo
  Containers:
   bogo:
    Image:        dockerbogo/bogo
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  51m   replicaset-controller  Created pod: bogo-g6x78
  Normal  SuccessfulCreate  51m   replicaset-controller  Created pod: bogo-hbdhw
  Normal  SuccessfulCreate  51m   replicaset-controller  Created pod: bogo-89h4f
It shows the ReplicaSet has three replicas matching the selector. If we list the pods, we'll see the three pods it created:
$ kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
bogo-89h4f   1/1     Running   0          54m
bogo-g6x78   1/1     Running   0          54m
bogo-hbdhw   1/1     Running   0          54m
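Before cleaning up, it's worth noting that a ReplicaSet can be scaled at any time, either by editing spec.replicas in the manifest or imperatively with kubectl scale; for example:

$ kubectl scale rs bogo --replicas=5   # the ReplicaSet creates two more pods
$ kubectl scale rs bogo --replicas=3   # scaling back down deletes the surplus pods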
We can delete the ReplicaSet to clean up our cluster:

$ kubectl delete rs bogo
replicaset.apps "bogo" deleted
Deleting the ReplicaSet should delete all the pods. List the pods to confirm that's the case:
$ kubectl get pods
NAME   READY   STATUS   RESTARTS   AGE
- DaemonSets
ReplicaSets are used for running a specific number of pods deployed anywhere in the Kubernetes cluster.
But certain cases exist when we want a pod to run on each and every node in the cluster and each node needs to run exactly one instance of the pod.
DaemonSets run only a single pod replica on each node while ReplicaSets scatter them around the whole cluster randomly.
The use cases of the DaemonSets include infrastructure-related pods that perform system-level operations (such as a log collector and a resource monitor on every node).
Another good example is Kubernetes' own kube-proxy process, which needs to run on all nodes to make services work.
To run a pod on all cluster nodes, we create a DaemonSet object, which is much like a ReplicaSet, except that pods created by a DaemonSet already have a target node specified and skip the Kubernetes Scheduler. They aren't scattered around the cluster randomly.
Whereas a ReplicaSet (or ReplicationController) makes sure that a desired number of pod replicas exist in the cluster, a DaemonSet doesn't have any notion of a desired replica count. It doesn't need it because its job is to ensure that a pod matching its pod selector is running on each node.
If a node goes down, the DaemonSet doesn't cause the pod to be created elsewhere. But when a new node is added to the cluster, the DaemonSet immediately deploys a new pod instance to it.
It also does the same if someone inadvertently deletes one of the pods, leaving the node without the DaemonSet's pod. Like a ReplicaSet, a DaemonSet creates the pod from the pod template configured in it.
A DaemonSet deploys pods to all nodes in the cluster, unless we specify that the pods should only run on a subset of the nodes. This is done by specifying the nodeSelector property in the pod template, which is part of the DaemonSet definition, similar to the pod template in a ReplicaSet.
Let's create the DaemonSet using ssd-monitor-daemonset.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:
        disk: ssd
      containers:
      - name: main
        image: dockerbogo/ssd-monitor
$ kubectl create -f ssd-monitor-daemonset.yaml
daemonset.apps/ssd-monitor created

$ kubectl get ds
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ssd-monitor   0         0         0       0            0           disk=ssd        3m28s
Those zeroes indicate there's something wrong. Let's list pods:
$ kubectl get pods
No resources found in default namespace.
Where are the pods?
Yes, we forgot to label our nodes with the disk=ssd label. Let's label it.
The DaemonSet should detect that the nodes' labels have changed and deploy the pod to all nodes with a matching label.
We need to know the node's name when labeling it:
$ kubectl get node
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   27h   v1.19.0
Now, we need to add the disk=ssd label to our nodes like this:
$ kubectl label node minikube disk=ssd
node/minikube labeled
The DaemonSet should have created one pod now:
$ kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
ssd-monitor-jgfdn   1/1     Running   0          13s
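If we later remove the label from the node (a trailing dash removes a label), the DaemonSet notices that the node no longer matches its nodeSelector and terminates its pod there:

$ kubectl label node minikube disk-   # remove the disk label from the node
$ kubectl get pods                    # the ssd-monitor pod should now be terminating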
- Service resources - expose a group of pods to external clients
A Kubernetes Service is a resource for a single entry point to a group of pods. Each service has an IP address and port that never change while the service exists.
Clients can open connections to that IP and port, and those connections are then routed to one of the pods behind that service. This way, clients of a service don't need to know the location of pods providing the service, allowing those pods to be moved around the cluster at any time.
The service address doesn't change even if the pod's IP address changes. Additionally, by creating the service, we also enable the pods to easily find the service by its name through either environment variables or DNS.
A service can be backed by more than one pod and the connections to the service are load-balanced across all the backing pods.
But how exactly do we define which pods are part of the service and which aren't?
Though the easiest way to create a service is through kubectl expose, we'll create a service manually by posting a YAML to the Kubernetes API server.
Here is our bogo-svc.yaml file:

apiVersion: v1
kind: Service
metadata:
  name: bogo
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: bogo
where port is the port this service will be available on, and targetPort is the container port the service will forward connections to.
All pods with the app=bogo label will be part of this service.
Here we're defining a bogo service which will accept connections on port 80 and route each connection to port 8080 of one of the pods matching the app=bogo label selector.
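For comparison, roughly the same service could be created with kubectl expose instead of a YAML file (assuming the pods are managed by a ReplicaSet named bogo, as in the earlier ReplicaSet example):

$ kubectl expose rs bogo --name=bogo --port=80 --target-port=8080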
Let's create the service by posting the file using kubectl create:

$ kubectl create -f bogo-svc.yaml
service/bogo created

$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
bogo         ClusterIP   10.108.115.229   <none>        80/TCP    7m20s
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   38h
The list shows that the IP address assigned to the service is 10.108.115.229. Because this is the cluster IP, it's only accessible from inside the cluster.
The primary purpose of services is exposing groups of pods to other pods in the cluster though we'll usually also want to expose services externally.
Let's use the service from inside the cluster and see what it does.
We can execute the curl command inside one of our existing pods through the kubectl exec command, which allows us to remotely run arbitrary commands inside an existing container of a pod.

$ kubectl create deployment bogo --image=dockerbogo/bogo
deployment.apps/bogo created

$ kubectl get pod --show-labels
NAME                    READY   STATUS    RESTARTS   AGE    LABELS
bogo-764645c96c-v5rzf   1/1     Running   0          168m   app=bogo,pod-template-hash=764645c96c

$ kubectl exec bogo-764645c96c-v5rzf -- curl -s http://10.108.115.229

$ kubectl logs bogo-764645c96c-v5rzf
bogo server starting and listening on 8080...
Note: the curl from within the pod did not get a response from the service; this needs further investigation.
The double dash (--) in the command signals the end of command options for kubectl.
Everything after the double dash is the command that should be executed inside the pod.
Using the double dash isn't necessary if the command has no arguments that start with a dash. But in our case, if we didn't use the double dash, the -s option would be interpreted as an option for kubectl exec.
- How do pods discover a service's IP and port? (Discovering services)
bogo-svc.yaml:
apiVersion: v1
kind: Service
metadata:
  name: bogo
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: bogo
$ kubectl create -f bogo-svc.yaml
service/bogo created
bogo-replicaset.yaml:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: bogo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: bogo
  template:
    metadata:
      labels:
        app: bogo
    spec:
      containers:
      - name: bogo
        image: dockerbogo/bogo
$ kubectl create -f bogo-replicaset.yaml
replicaset.apps/bogo created
$ kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
bogo-8cktl   1/1     Running   0          47s
bogo-bgx9z   1/1     Running   0          47s
bogo-t9df8   1/1     Running   0          47s

$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
bogo         ClusterIP   10.108.125.207   <none>        80/TCP    2m56s
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   2d12h

$ kubectl get rs
NAME   DESIRED   CURRENT   READY   AGE
bogo   3         3         3       65s
By creating a service, we now have a single and stable IP address and port that we can hit to access our pods. This address will remain unchanged throughout the whole lifetime of the service. Pods behind this service may come and go, their IPs may change, their number can go up or down, but they'll always be accessible through the service's single and constant IP address.
But how do the client pods know the IP and port of a service?
Each service gets a DNS entry in the internal DNS server running in a kube-dns pod, and client pods that know the name of the service can access it through its fully qualified domain name (FQDN).
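Besides DNS, Kubernetes also injects environment variables for every service that existed when a pod was created; for a service named bogo we'd expect variables such as BOGO_SERVICE_HOST and BOGO_SERVICE_PORT. A quick way to check (command only, output omitted here):

$ kubectl exec bogo-8cktl -- env | grep BOGO_SERVICE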
We'll try to access the bogo service through its FQDN instead of its IP and we'll do that inside an existing pod.
$ kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
bogo-8cktl   1/1     Running   0          27m
bogo-bgx9z   1/1     Running   0          27m
bogo-t9df8   1/1     Running   0          27m

$ kubectl exec -it bogo-8cktl -- bash
root@bogo-8cktl:/#
We're now inside the container. We can use the curl command to access the bogo service in any of the following ways:
root@bogo-8cktl:/# curl http://bogo.default.svc.cluster.local
You've hit bogo-bgx9z
root@bogo-8cktl:/# curl http://bogo.default.svc
You've hit bogo-bgx9z
root@bogo-8cktl:/# curl http://bogo.default
You've hit bogo-t9df8
root@bogo-8cktl:/# curl http://bogo
You've hit bogo-t9df8
root@bogo-8cktl:/# for i in {1..5}; do curl http://bogo; done
You've hit bogo-bgx9z
You've hit bogo-t9df8
You've hit bogo-bgx9z
You've hit bogo-t9df8
You've hit bogo-bgx9z
We can hit our service by using the service's name as the hostname in the requested URL. We can omit the namespace and the svc.cluster.local suffix because of how the DNS resolver inside each pod's container is configured.
Look at the /etc/resolv.conf file in the container:
root@bogo-8cktl:/# cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
- Can't ping a service. Why?
What if, for whatever reason, we can't access our service?
We'll most likely try to figure out what's wrong by entering an existing pod and trying to access the service.
However, if we still can't access the service with a curl command, maybe then we'll try to ping the service IP to see if it's up.

$ kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
bogo-8cktl   1/1     Running   0          161m
bogo-bgx9z   1/1     Running   0          161m
bogo-t9df8   1/1     Running   0          161m

$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
bogo         ClusterIP   10.108.125.207   <none>        80/TCP    174m
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   2d15h

$ kubectl exec -it bogo-8cktl -- bash
root@bogo-8cktl:/# curl bogo
You've hit bogo-bgx9z
root@bogo-8cktl:/# ping bogo
PING bogo.default.svc.cluster.local (10.108.125.207): 56 data bytes
^C--- bogo.default.svc.cluster.local ping statistics ---
31 packets transmitted, 0 packets received, 100% packet loss
So curl-ing the service works, but pinging it doesn't.
That's because the service's cluster IP is a virtual IP. So, the IP only has meaning when combined with the service port.
- What is a Service endpoints object?
Services don't link to pods directly. Instead, a resource sits in between: the Endpoints resource.
$ kubectl describe svc bogo
Name:              bogo
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=bogo
Type:              ClusterIP
IP:                10.108.125.207
Port:              <unset>  80/TCP
TargetPort:        8080/TCP
Endpoints:         172.18.0.2:8080,172.18.0.3:8080,172.18.0.4:8080
Session Affinity:  None
Events:            <none>
where the service's pod selector is used to create the list of endpoints, and an Endpoints resource is a list of IP addresses and ports exposing a service:
$ kubectl get endpoints bogo
NAME   ENDPOINTS                                         AGE
bogo   172.18.0.2:8080,172.18.0.3:8080,172.18.0.4:8080   96m
apiVersion: v1
kind: Service
metadata:
  name: bogo
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: bogo
Although the pod selector is defined in the service spec, it's not used directly when redirecting incoming connections.
Instead, the selector is used to build a list of IPs and ports, which is then stored in the Endpoints resource.
When a client connects to a service, the service proxy selects one of those IP and port pairs and redirects the incoming connection to the server listening at that location.
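Because the service and its Endpoints are separate resources, we can even create a service without a pod selector and supply the endpoints by hand, for example to point a service at servers running outside the cluster. A sketch (the external-service name and the IP addresses are purely illustrative):

apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  ports:
  - port: 80              # no selector, so Kubernetes won't create Endpoints automatically
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-service  # must match the service name
subsets:
- addresses:
  - ip: 11.11.11.11
  - ip: 22.22.22.22
  ports:
  - port: 80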
- Why are Ingresses needed?
One important reason is that each LoadBalancer service requires its own load balancer with its own public IP address, whereas an Ingress only requires one, even when providing access to dozens of services.
The downside of the LoadBalancer service is that each service we expose with a LoadBalancer gets its own IP address, and we have to pay for a load balancer per exposed service, which can get expensive.
When a client sends an HTTP request to the Ingress, the host and path in the request determine which service the request is forwarded to.
An Ingress is most useful when we want to expose multiple services under the same IP address, and we only pay for one load balancer.
Ingresses operate at the application layer of the network stack (HTTP) and can provide features such as cookie-based session affinity and the like, which services can't.
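As a minimal sketch (the host name is illustrative, and an Ingress controller such as the nginx ingress controller must be running in the cluster for it to take effect), an Ingress that forwards requests for bogo.example.com to our bogo service on port 80 could look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bogo
spec:
  rules:
  - host: bogo.example.com        # requests with this Host header are routed below
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: bogo            # the ClusterIP service created earlier
            port:
              number: 80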