Docker : Dockerfile - NodeJS with GCP Kubernetes Engine
In this post, we'll do the following:
- Create a hello.js server.
- Create a Docker container image.
- Create a container cluster.
- Create a Kubernetes pod.
- Scale up our services.
Google Cloud Shell is loaded with development tools, offers a persistent 5GB home directory, and runs on Google Cloud. It provides command-line access to our GCP resources. To activate the shell, in the GCP console, on the top-right toolbar, click the "Open Cloud Shell" button:
In the dialog box that opens, click "START CLOUD SHELL".
gcloud is the command-line tool for Google Cloud Platform. It comes pre-installed on Cloud Shell and supports tab-completion.
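For example, we can quickly verify which account and project Cloud Shell is using (optional sanity checks; the project ID hello-node-231518 used throughout this post will differ in your environment):

# Show the active (credentialed) account
$ gcloud auth list

# Show the project the gcloud tool currently operates on
$ gcloud config list project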
hello-node.js:
var http = require('http');

var handleRequest = function(request, response) {
  response.writeHead(200);
  response.end("Hello node!");
};

var www = http.createServer(handleRequest);
www.listen(8080);
Because Cloud Shell has the node executable installed, run the following command to start the node server (the command produces no output):
$ node hello-node.js
Use the built-in Web preview feature of Cloud Shell to open a new browser tab and proxy a request to the instance we just started on port 8080:
Type Ctrl+c to stop the running node server. In the next section, we will package this application in a Docker container.
Here is our Dockerfile:
FROM node:8
EXPOSE 8080
COPY hello-node.js .
CMD node hello-node.js
This Dockerfile does the following:
- Start from the official node image found on Docker Hub.
- Expose port 8080.
- Copy our hello-node.js file to the image.
- Start the node server as we previously did manually.
Build the image with the following command:
$ docker build -t gcr.io/hello-node-231518/hello-node:v1 .
Note that the image path includes our project ID (hello-node-231518), as required by Google Container Registry (gcr.io):
Test the image locally with the following command which will run a Docker container as a daemon on port 8080 from our newly-created container image:
$ docker run -d -p 8080:8080 gcr.io/hello-node-231518/hello-node:v1
To see our results we can use the web preview feature of Cloud Shell:
Or we can use "curl":
$ curl http://localhost:8080
Hello node!
Let's stop the container:
$ docker ps
CONTAINER ID   IMAGE                                    COMMAND                  CREATED         STATUS         PORTS                    NAMES
4f89b0028a8b   gcr.io/hello-node-231518/hello-node:v1   "/bin/sh -c 'node he…"   6 minutes ago   Up 6 minutes   0.0.0.0:8080->8080/tcp   eager_sinoussi

$ docker stop 4f89b0028a8b
4f89b0028a8b
Now that the image is working as intended, push it to the Google Container Registry. We may need to enable Container Registry before the push:
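If the Container Registry API is not yet enabled, it can be turned on from the console UI or, assuming a reasonably recent Cloud SDK, from the command line:

# Enable the Container Registry API for the current project (one-time step)
$ gcloud services enable containerregistry.googleapis.com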
Push the image:
$ gcloud docker -- push gcr.io/hello-node-231518/hello-node:v1
We can check it from the UI:
Now it's time to create our Kubernetes Engine cluster. A cluster consists of a Kubernetes master API server hosted by Google and a set of worker nodes. The worker nodes are Compute Engine virtual machines.
We need to make sure we have set our project using gcloud:
$ gcloud config set project hello-node-231518
Updated property [core/project].
Create a cluster with one "n1-standard-1" node:
$ gcloud container clusters create hello-node \
    --num-nodes 1 \
    --machine-type n1-standard-1 \
    --zone us-central1-b
...
NAME        LOCATION       MASTER_VERSION  MASTER_IP      MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
hello-node  us-central1-b  1.11.6-gke.2    35.193.73.168  n1-standard-1  1.11.6-gke.2  1          RUNNING
We can check it by selecting "Navigation menu" > "Kubernetes Engine":
In the next section, we'll deploy our own containerized application to the Kubernetes cluster using the kubectl command.
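If kubectl in Cloud Shell is not already pointed at the new cluster (the gcloud container clusters create command normally configures this for us), we can fetch the cluster credentials manually, assuming the cluster name and zone used above:

# Configure kubectl to use the credentials of the hello-node cluster
$ gcloud container clusters get-credentials hello-node --zone us-central1-b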
A Kubernetes pod can contain one or more containers. We'll use one container built from our node image stored in our private container registry. It will serve content on port 8080.
Create a pod with the kubectl run command:
$ kubectl run hello-node \
    --image=gcr.io/hello-node-231518/hello-node:v1 \
    --port=8080
deployment.apps "hello-node" created
As we can see, we've created a deployment object. Deployments are the recommended way to create and scale pods. Here, a new deployment manages a single pod replica running the hello-node:v1 image.
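As a side note, the same deployment could be written declaratively. A minimal manifest roughly equivalent to the kubectl run command above might look like the sketch below (the run: hello-node label mirrors what kubectl run generates; it could be applied with kubectl apply -f):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  replicas: 1
  selector:
    matchLabels:
      run: hello-node
  template:
    metadata:
      labels:
        run: hello-node
    spec:
      containers:
      - name: hello-node
        image: gcr.io/hello-node-231518/hello-node:v1
        ports:
        - containerPort: 8080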
To view the deployment:
$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   1         1         1            1           37s
To view the pod created by the deployment:
$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
hello-node-57fc58b759-69fxx   1/1       Running   0          1m
Here are some additional useful commands:
$ kubectl cluster-info
GLBCDefaultBackend is running at https://35.193.73.168/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
Heapster is running at https://35.193.73.168/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://35.193.73.168/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://35.193.73.168/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://35.193.73.168
  name: gke_hello-node-231518_us-central1-b_hello-node
contexts:
- context:
    cluster: gke_hello-node-231518_us-central1-b_hello-node
    user: gke_hello-node-231518_us-central1-b_hello-node
  name: gke_hello-node-231518_us-central1-b_hello-node
current-context: gke_hello-node-231518_us-central1-b_hello-node
kind: Config
preferences: {}
users:
- name: gke_hello-node-231518_us-central1-b_hello-node
  user:
    auth-provider:
      config:
        access-token: ya29.GqMBrgaWT1CjO98CGzZhbT_OPZ32cXMTBJVkr8r52Nl7eOoEKkLK7EPlJ5W7XJGGPowdsK0_WXOrv75bQNRkcsosZCoL56OesWnnbeY4BksHOhAjSeIUK36wBhzhF3uRr7-uwLqP4VS_252qf7itdIDhgNPTAEBUuyOt7-hNghKSTXvhcXTVBYH7dGMc1buCigeRsLUkluT7HC3Ep3yDAhfm5dexug
        cmd-args: config config-helper --format=json
        cmd-path: /google/google-cloud-sdk/bin/gcloud
        expiry: 2019-02-12T21:24:54Z
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp

$ kubectl get events

# kubectl logs <pod-name>
$ kubectl logs hello-node-57fc58b759-69fxx
The pod, by default, is only accessible by its internal IP within the cluster. In order to make the hello-node container accessible from outside the Kubernetes virtual network, we have to expose the pod as a Kubernetes service.
From Cloud Shell we can expose the pod to the public internet with the kubectl expose command combined with the --type="LoadBalancer" flag. This flag is required for the creation of an externally accessible IP:
$ kubectl expose deployment hello-node --type="LoadBalancer"
service "hello-node" exposed
The flag used in the command specifies that we'll be using the load balancer provided by Compute Engine. Note that we expose the deployment, not the pod, directly.
This will cause the resulting service to load balance traffic across all pods managed by the deployment (in this case only 1 pod, but we will add more replicas later).
The Kubernetes master creates the load balancer and related Compute Engine forwarding rules, target pools, and firewall rules to make the service fully accessible from outside of Google Cloud Platform.
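For reference, the Service object created by kubectl expose roughly corresponds to a manifest like the following sketch (port 8080 matches the container port used by the deployment):

apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  type: LoadBalancer
  selector:
    run: hello-node
  ports:
  - port: 8080
    targetPort: 8080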
To find the publicly-accessible IP address of the service, we can ask kubectl to list all the cluster services:
$ kubectl get services
NAME         TYPE           CLUSTER-IP      CLUSTER-IP       PORT(S)          AGE
hello-node   LoadBalancer   10.43.245.163   35.222.182.205   8080:32547/TCP   50s
kubernetes   ClusterIP      10.43.240.1     <none>           443/TCP          13m
There are 2 IP addresses listed for our hello-node service, both serving port 8080. The CLUSTER-IP is the internal IP that is only visible inside our cloud virtual network; the EXTERNAL-IP is the external load-balanced IP.
Note that the EXTERNAL-IP may take several minutes to become available and visible. If the EXTERNAL-IP is missing, wait a few minutes and try again.
We should now be able to reach the service by pointing our browser to this address: http://<EXTERNAL_IP>:8080:
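We can also reach the service from the command line with curl, substituting the EXTERNAL-IP reported by kubectl get services; the expected response is the same "Hello node!" we saw locally:

$ curl http://<EXTERNAL_IP>:8080
Hello node!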
To scale our service, we tell the deployment to manage a new number of replicas for our pod:
$ kubectl scale deployment hello-node --replicas=2
deployment.extensions "hello-node" scaled
We can request a description of the updated deployment:
$ kubectl get deployment
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   2         2         2            2           17m
To list all the pods:
$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
hello-node-57fc58b759-26fph   1/1       Running   0          4m
hello-node-57fc58b759-69fxx   1/1       Running   0          19m
Let's deploy a new revision of our service. First, here is the updated hello-node.js that will become hello-node:v2:
var http = require('http');

var handleRequest = function(request, response) {
  response.writeHead(200);
  response.end("Hello k8 node!");
};

var www = http.createServer(handleRequest);
www.listen(8080);
Now we can build and publish a new container image to the registry with an incremented tag ('v2').
Run the following:
$ docker build -t gcr.io/hello-node-231518/hello-node:v2 .
Successfully tagged gcr.io/hello-node-231518/hello-node:v2

$ gcloud docker -- push gcr.io/hello-node-231518/hello-node:v2
...
v2: digest: sha256:028a791fd893f91d1372a4cbbaa8c993f21c82128b636268140078ccfbd1531f size: 2215
We can check GCR from the UI. Indeed, we now have two versions of the image:
Kubernetes will update our deployment to the new version of the application. In order to change the image label for our running container, we will edit the existing hello-node deployment and change the image from gcr.io/hello-node-231518/hello-node:v1 to gcr.io/hello-node-231518/hello-node:v2 using the kubectl edit command. It opens a text editor displaying the full deployment yaml configuration. By updating the spec.template.spec.containers.image field in the config, we are telling the deployment to update the pods with the new image.
$ kubectl edit deployment hello-node
Updated by changing v1=>v2:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2016-03-24T17:55:28Z
  generation: 3
  labels:
    run: hello-node
  name: hello-node
  namespace: default
  resourceVersion: "151017"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/hello-node
  uid: 981fe302-f1e9-11e5-9a78-42010af00005
spec:
  replicas: 4
  selector:
    matchLabels:
      run: hello-node
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: hello-node
    spec:
      containers:
      - image: gcr.io/PROJECT_ID/hello-node:v2
        imagePullPolicy: IfNotPresent
        name: hello-node
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
After saving the updated deployment yaml, Kubernetes updates the deployment with the new image: new pods are created with the new image and the old pods are deleted. We can check the deployment:
$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   2         2         2            2           45m
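As an aside (not used in this post), the same image change could have been applied non-interactively, and the rollout progress watched, with commands along these lines:

# Update the container image without opening an editor
$ kubectl set image deployment/hello-node hello-node=gcr.io/hello-node-231518/hello-node:v2

# Watch the rolling update until all replicas run the new image
$ kubectl rollout status deployment/hello-node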
Our updated app: