Docker & Kubernetes : Continuous Delivery Pipelines with Spinnaker and Kubernetes Engine - 2020
In this post, we'll learn how to create a continuous delivery pipeline using Google Kubernetes Engine, Cloud Source Repositories, Cloud Build, and Spinnaker.
After creating a sample app, we configure these services to automatically build, test, and deploy it. When we modify the app code, the changes trigger the continuous delivery pipeline (via tag push) to automatically rebuild, retest, and redeploy the new version.
Here is the list of things we'll do in this post:
- Set up our environment by launching Cloud Shell, creating a GKE cluster, and configuring our identity and user management scheme.
- Download a sample app, create a Git repository, and upload it to a Cloud Source Repository.
- Deploy Spinnaker to GKE using Helm. Helm is a toolset to manage Kubernetes packages (also called Charts), which contain pre-configured Kubernetes resources.
- Build our Docker image.
- Create triggers that build Docker images when our app changes.
- Configure a Spinnaker pipeline to reliably and continuously deploy our app to GKE.
- Deploy a code change, triggering the pipeline, and watch it roll out to production.
To continuously deliver app updates to the users, we need an automated process that reliably builds, tests, and updates our software. Code changes should automatically flow through a pipeline that includes artifact creation, unit testing, functional testing, and production rollout.
Source: Continuous Delivery Pipelines with Spinnaker and Google Kubernetes Engine
In some cases, we want a code update to apply to only a subset (canary) of our users, so that it is exercised realistically before we push it to our entire user base. If one of these canary releases proves unsatisfactory, our automated procedure must be able to quickly roll back the software changes.
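In Kubernetes terms, a canary is often nothing more than a second, much smaller Deployment that shares a Service's label selector with the production Deployment. The manifests below are only a minimal sketch of that idea; they are not the sample app's actual k8s/ manifests, and the names, ports, and image tags are placeholders:

# Illustrative only: one Service fronting a production Deployment (4 replicas)
# and a canary Deployment (1 replica), so roughly 1 in 5 requests hits the canary.
apiVersion: v1
kind: Service
metadata:
  name: sample-backend
spec:
  selector:
    app: sample-backend          # matches the pods of both Deployments below
  ports:
  - port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-backend-production
spec:
  replicas: 4
  selector:
    matchLabels:
      app: sample-backend
      track: stable
  template:
    metadata:
      labels:
        app: sample-backend
        track: stable
    spec:
      containers:
      - name: backend
        image: gcr.io/PROJECT/sample-app:v1.0.0   # current version (placeholder)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-backend-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-backend
      track: canary
  template:
    metadata:
      labels:
        app: sample-backend
        track: canary
    spec:
      containers:
      - name: backend
        image: gcr.io/PROJECT/sample-app:v1.0.1   # candidate version (placeholder)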
Google Cloud Shell comes loaded with development tools, offers a persistent 5 GB home directory, and runs on Google Cloud. It provides command-line access to our GCP resources. To activate the shell: in the GCP console, on the top-right toolbar, click the Open Cloud Shell button:
In the dialog box that opens, click "START CLOUD SHELL".
gcloud is the command-line tool for Google Cloud Platform. It comes pre-installed on Cloud Shell and supports tab-completion.
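Before configuring anything, it is worth confirming which account and project the shell is pointed at; both of the following are standard gcloud commands:

$ gcloud auth list              # the active credentialed account
$ gcloud config list project    # the project Cloud Shell is currently using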
Set our zone:
$ gcloud config set compute/zone us-central1-f Updated property [compute/zone].
Run the following command to create a Kubernetes cluster:
$ gcloud container clusters create spinnaker-tutorial \
    --machine-type=n1-standard-2
...
kubeconfig entry generated for spinnaker-tutorial.
NAME                LOCATION       MASTER_VERSION  MASTER_IP      MACHINE_TYPE    NODE_VERSION  NUM_NODES  STATUS
spinnaker-tutorial  us-central1-f  1.11.7-gke.4    35.238.72.145  n1-standard-2   1.11.7-gke.4  3          RUNNING
We want to create a Cloud Identity and Access Management (Cloud IAM) service account to delegate permissions to Spinnaker, allowing it to store data in Cloud Storage. Spinnaker stores its pipeline data in Cloud Storage to ensure reliability and resiliency.
If our Spinnaker deployment unexpectedly fails, we can create an identical deployment in minutes with access to the same pipeline data as the original.
Create the service account:
$ gcloud iam service-accounts create spinnaker-account \ --display-name spinnaker-account Created service account [spinnaker-account].
Store the service account email address and our current project ID in environment variables for use in later commands:
$ export SA_EMAIL=$(gcloud iam service-accounts list \ --filter="displayName:spinnaker-account" \ --format='value(email)') $ export PROJECT=$(gcloud info --format='value(config.project)')
Bind the storage.admin role to our service account:
$ gcloud projects add-iam-policy-binding \ $PROJECT --role roles/storage.admin --member serviceAccount:$SA_EMAIL
Download the service account key. We need this key later when we install Spinnaker and upload the key to GKE:
$ gcloud iam service-accounts keys create spinnaker-sa.json --iam-account $SA_EMAIL created key [ea10c7cf83c6918a4d17f7ed10eb3d04eb476cd1] of type [json] as [spinnaker-sa.json] for [spinnaker-account@...
Create the Cloud Pub/Sub topic for notifications from Container Registry. This command may fail with the error "Resource already exists in the project", which means that the topic has already been created for us:
$ gcloud beta pubsub topics create projects/$PROJECT/topics/gcr Created topic ...
Create a subscription that Spinnaker can read from to receive notifications of images being pushed:
$ gcloud beta pubsub subscriptions create gcr-triggers \ --topic projects/${PROJECT}/topics/gcr Created subscription ...
Give Spinnaker's service account permissions to read from the gcr-triggers subscription:
$ export SA_EMAIL=$(gcloud iam service-accounts list \
    --filter="displayName:spinnaker-account" \
    --format='value(email)')
$ gcloud beta pubsub subscriptions add-iam-policy-binding gcr-triggers \
    --role roles/pubsub.subscriber --member serviceAccount:$SA_EMAIL
Updated IAM policy for subscription [gcr-triggers].
bindings:
- members:
  - serviceAccount:spinnaker-account@qwiklabs-gcp-aa3b3c729febc543.iam.gserviceaccount.com
  role: roles/pubsub.subscriber
...
We'll use Helm to deploy Spinnaker from the Charts repository.
Helm is a package manager we can use to configure and deploy Kubernetes apps.
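As a quick orientation, the typical Helm 2 workflow looks like this (illustration only; the concrete commands we actually run follow below):

$ helm init                        # install Tiller, Helm's server-side component, into the cluster
$ helm repo update                 # refresh the chart repositories (e.g. "stable")
$ helm search spinnaker            # find charts matching a keyword
$ helm install stable/spinnaker    # deploy a chart as a release
$ helm ls                          # list installed releases
$ helm delete --purge <release>    # remove a release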
Download the helm binary archive, unpack it, and copy the binary into our working directory:

$ wget https://storage.googleapis.com/kubernetes-helm/helm-v2.10.0-linux-amd64.tar.gz
$ tar zxfv helm-v2.10.0-linux-amd64.tar.gz
$ cp linux-amd64/helm .
Grant Tiller, the server side of Helm, the cluster-admin role in our cluster via RBAC (role-based access control), so that when we run helm install ... spinnaker, Tiller can deploy whatever is necessary (pods, services, PVCs, etc.) on our behalf:
$ kubectl create clusterrolebinding user-admin-binding \
    --clusterrole=cluster-admin --user=$(gcloud config get-value account)
...
clusterrolebinding.rbac.authorization.k8s.io "user-admin-binding" created

$ kubectl create serviceaccount tiller --namespace kube-system
serviceaccount "tiller" created

$ kubectl create clusterrolebinding tiller-admin-binding \
    --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io "tiller-admin-binding" created
Grant Spinnaker the cluster-admin role so it can deploy resources across all namespaces:
$ kubectl create clusterrolebinding --clusterrole=cluster-admin --serviceaccount=default:default spinnaker-admin clusterrolebinding.rbac.authorization.k8s.io "spinnaker-admin" created
Now we want to initialize Helm. It gets the cluster info from ~/.kube/config and deploys its server side component Tiller onto our cluster.
$ ./helm init --service-account=tiller
...
Tiller (the Helm server-side component) has been installed into our Kubernetes Cluster.
...

$ ./helm update
...
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Note that when we ran helm init, we not only installed Tiller but also got the stable chart repository (packages) configured. At this point, we can examine a chart in the stable repository using helm inspect:
$ ./helm inspect stable/spinnaker
apiVersion: v1
appVersion: 1.11.6
description: Open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence.
home: http://spinnaker.io/
icon: https://pbs.twimg.com/profile_images/669205226994319362/O7OjwPrh_400x400.png
maintainers:
- email: viglesias@google.com
  name: viglesiasce
- email: lwander@google.com
  name: lwander
- email: hello@dwardu.com
  name: dwardu89
- email: username.taken@gmail.com
  name: paulczar
name: spinnaker
sources:
- https://github.com/spinnaker
- https://github.com/viglesiasce/images
version: 1.7.2

---
halyard:
  spinnakerVersion: 1.11.6
  image:
    repository: gcr.io/spinnaker-marketplace/halyard
    tag: 1.13.1
  # Provide a config map with Hal commands that will be run the core config (storage)
  # The config map should contain a script in the config.sh key
  additionalScripts:
    enabled: false
    configMapName: my-halyard-config
    configMapKey: config.sh
...
Ensure that Helm is properly installed by running the following command. If Helm is correctly installed, v2.10.0 appears for both client and server:
$ ./helm version
Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Create a bucket for Spinnaker to store its pipeline configuration:
$ export PROJECT=$(gcloud info --format='value(config.project)')
$ export BUCKET=$PROJECT-spinnaker-config
$ gsutil mb -c regional -l us-central1 gs://$BUCKET
Creating gs://qwiklabs-gcp-aa3b3c729febc543-spinnaker-config/...
Create the file (spinnaker-config.yaml) describing the configuration for how Spinnaker should be installed:
$ export SA_JSON=$(cat spinnaker-sa.json)
$ export PROJECT=$(gcloud info --format='value(config.project)')
$ export BUCKET=$PROJECT-spinnaker-config
$ cat > spinnaker-config.yaml <<EOF
gcs:
  enabled: true
  bucket: $BUCKET
  project: $PROJECT
  jsonKey: '$SA_JSON'

dockerRegistries:
- name: gcr
  address: https://gcr.io
  username: _json_key
  password: '$SA_JSON'
  email: 1234@5678.com

# Disable minio as the default storage backend
minio:
  enabled: false

# Configure Spinnaker to enable GCP services
halyard:
  spinnakerVersion: 1.10.2
  image:
    tag: 1.12.0
  additionalScripts:
    create: true
    data:
      enable_gcs_artifacts.sh: |-
        \$HAL_COMMAND config artifact gcs account add gcs-$PROJECT --json-path /opt/gcs/key.json
        \$HAL_COMMAND config artifact gcs enable
      enable_pubsub_triggers.sh: |-
        \$HAL_COMMAND config pubsub google enable
        \$HAL_COMMAND config pubsub google subscription add gcr-triggers \
          --subscription-name gcr-triggers \
          --json-path /opt/gcs/key.json \
          --project $PROJECT \
          --message-format GCR
EOF
Spinnaker is actually a composite application made up of more than ten individual microservices; https://www.spinnaker.io/reference/architecture/ gives a sense of the complexity. To help manage Spinnaker's dependencies, we're using the Halyard-based Helm chart.
Use the Helm command-line interface to deploy the chart with our configuration set. This command typically takes five to ten minutes to complete:
$ ./helm install -n cd stable/spinnaker -f spinnaker-config.yaml --timeout 600 \
    --version 1.1.6 --wait
NAME:   cd
LAST DEPLOYED: Wed Mar  6 12:30:02 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                    READY  STATUS   RESTARTS  AGE
cd-redis-master-0       1/1    Running  0         4m
cd-spinnaker-halyard-0  1/1    Running  0         4m

==> v1/Secret
NAME                   TYPE    DATA  AGE
cd-redis               Opaque  1     4m
cd-spinnaker-gcs       Opaque  1     4m
cd-spinnaker-registry  Opaque  1     4m

==> v1/ConfigMap
NAME                             DATA  AGE
cd-spinnaker-additional-scripts  2     4m
cd-spinnaker-halyard-config      3     4m

==> v1/RoleBinding
NAME                  AGE
cd-spinnaker-halyard  4m

==> v1beta2/StatefulSet
NAME             DESIRED  CURRENT  AGE
cd-redis-master  1        1        4m

==> v1/ServiceAccount
NAME                  SECRETS  AGE
cd-spinnaker-halyard  1        4m

==> v1/ClusterRoleBinding
NAME                    AGE
cd-spinnaker-spinnaker  4m

==> v1/Service
NAME                  TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
cd-redis-master       ClusterIP  10.43.249.193  <none>       6379/TCP  4m
cd-spinnaker-halyard  ClusterIP  None           <none>       8064/TCP  4m

==> v1/StatefulSet
NAME                  DESIRED  CURRENT  AGE
cd-spinnaker-halyard  1        1        4m

NOTES:
1. You will need to create 2 port forwarding tunnels in order to access the Spinnaker UI:
  export DECK_POD=$(kubectl get pods --namespace default -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
  kubectl port-forward --namespace default $DECK_POD 9000

2. Visit the Spinnaker UI by opening your browser to: http://127.0.0.1:9000

To customize your Spinnaker installation. Create a shell in your Halyard pod:

  kubectl exec --namespace default -it cd-spinnaker-halyard-0 bash
...
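After the install completes, the composite nature of Spinnaker mentioned earlier is easy to see: each microservice (deck, gate, orca, clouddriver, front50, and so on) runs as its own pod. Assuming the default namespace used here (pod names can vary slightly between chart versions):

$ kubectl get pods --namespace default | grep spin-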
After the command completes, run the following command to set up port forwarding to the Spinnaker UI from Cloud Shell:
$ export DECK_POD=$(kubectl get pods --namespace default -l "cluster=spin-deck" \
    -o jsonpath="{.items[0].metadata.name}")
$ kubectl port-forward --namespace default $DECK_POD 8080:9000 >> /dev/null &
[1] 1026
To open the Spinnaker user interface, click Web Preview in Cloud Shell and click Preview on port 8080:
We should see the welcome screen, followed by the Spinnaker UI:
In this section, we'll configure Cloud Build to detect changes to our app source code.
In Cloud Shell, download the sample source code and unpack it:
$ wget https://gke-spinnaker.storage.googleapis.com/sample-app-v2.tgz $ tar xzfv sample-app-v2.tgz $ cd sample-app
Set the username and email address for our Git commits in this repository. We need to replace [EMAIL_ADDRESS] with our Git email address, and replace [USERNAME] with our Git username:
## git config --global user.email "[EMAIL_ADDRESS]"
$ git config --global user.email "k.hong@aol.com"

## git config --global user.name "[USERNAME]"
$ git config --global user.name "K"
Make the initial commit to our source code repository:
$ git init $ git add . $ git commit -m "Initial commit"
Create a repository to host our code:
$ gcloud source repos create sample-app Created [sample-app]. $ git config credential.helper gcloud.sh
Add our newly created repository as remote:
$ export PROJECT=$(gcloud info --format='value(config.project)') $ git remote add origin https://source.developers.google.com/p/$PROJECT/r/sample-app
Push our code to the new repository's master branch:
$ git push origin master
In this section, we'll configure Cloud Build to build and push our Docker images every time we push Git tags to our source repository.
Cloud Build automatically checks out our source code, builds the Docker image from the Dockerfile in our repository, and pushes that image to Container Registry.
- In the GCP Console, in the Cloud Build section, click Build Triggers.
- Select Cloud Source Repository and click Continue.
- Select our newly created sample-app repository from the list, and click Continue.
- Set the following trigger settings:
  - Name: sample-app-tags
  - Trigger type: Tag
  - Tag (regex): v.*
  - Build configuration: cloudbuild.yaml
  - cloudbuild.yaml location: cloudbuild.yaml
- Click Create trigger.
From now on, whenever we push a Git tag prefixed with the letter "v" to our source code repository, Cloud Build automatically builds and pushes our app as a Docker image to Container Registry.
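What happens on each tag push is defined by the cloudbuild.yaml referenced in the trigger. As a rough sketch only (this is not the exact file shipped with the sample app), such a configuration typically builds the image from the Dockerfile, tags it with the Git tag, pushes it, and copies the Kubernetes manifests to Cloud Storage for Spinnaker to consume later:

# Simplified, hypothetical cloudbuild.yaml; the sample app's real file may differ.
steps:
# Build the container image, tagging it with the Git tag that triggered the build.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/sample-app:$TAG_NAME', '.']
# Copy the Kubernetes manifests into the bucket Spinnaker reads from (created in the next section).
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', '-r', 'k8s', 'gs://$PROJECT_ID-kubernetes-manifests/$TAG_NAME/']
# Images listed here are pushed to Container Registry when the build succeeds.
images:
- 'gcr.io/$PROJECT_ID/sample-app:$TAG_NAME'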
Spinnaker needs access to our Kubernetes manifests in order to deploy them to our clusters. This section creates a Cloud Storage bucket that will be populated with our manifests during the CI process in Cloud Build. After the manifests are in Cloud Storage, Spinnaker can download and apply them during our pipeline's execution.
Create the bucket:
$ export PROJECT=$(gcloud info --format='value(config.project)') $ gsutil mb -l us-central1 gs://$PROJECT-kubernetes-manifests Creating gs://qwiklabs-gcp-aa3b3c729febc543-kubernetes-manifests/...
Enable versioning on the bucket so that we have a history of our manifests:
$ gsutil versioning set on gs://$PROJECT-kubernetes-manifests Enabling versioning for gs://qwiklabs-gcp-aa3b3c729febc543-kubernetes-manifests/...
Set the correct project ID in our kubernetes deployment manifests:
$ sed -i s/PROJECT/$PROJECT/g k8s/deployments/*
Commit the changes to the repository:
$ git commit -a -m "Set project ID" [master 973b8b6] Set project ID 4 files changed, 4 insertions(+), 4 deletions(-)
Push our first image using the following steps.
In Cloud Shell, go to our source code folder, then create a Git tag and push it:
$ git tag v1.0.0
$ git push --tags
...
To https://source.developers.google.com/p/qwiklabs-gcp-aa3b3c729febc543/r/sample-app
 * [new tag]         v1.0.0 -> v1.0.0
In Cloud Build, click Build History to check that the build has been triggered. If not, verify the trigger was configured properly in the previous section.
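Once the build succeeds, we can also spot-check that the manifests were published for Spinnaker. The exact object layout depends on cloudbuild.yaml, so treat this as an optional sanity check:

$ export PROJECT=$(gcloud info --format='value(config.project)')
$ gsutil ls gs://$PROJECT-kubernetes-manifests/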
Now that our images are building automatically, we need to deploy them to the Kubernetes cluster.
We deploy to a scaled-down environment for integration testing. After the integration tests pass, we must manually approve the changes to deploy the code to production services.
Install the spin CLI for managing Spinnaker
spin is a command-line utility for managing Spinnaker's applications and pipelines.
Download the latest version of spin:
$ curl -LO https://storage.googleapis.com/spinnaker-artifacts/spin/$(curl -s https://storage.googleapis.com/spinnaker-artifacts/spin/latest)/linux/amd64/spin
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 12.2M  100 12.2M    0     0  25.4M      0 --:--:-- --:--:-- --:--:-- 25.4M

$ chmod +x spin
Let's create an app in Spinnaker using spin:
$ ./spin application save --application-name sample \
    --owner-email example@example.com \
    --cloud-providers kubernetes \
    --gate-endpoint http://localhost:8080/gate
Application save succeeded
Next, we want to create the continuous delivery pipeline. In this tutorial, the pipeline is configured to detect when a Docker image with a tag prefixed with "v" has arrived in our Container Registry.
In a new tab of Cloud Shell, run the following command in the source code directory to upload an example pipeline to our Spinnaker instance:
$ export PROJECT=$(gcloud info --format='value(config.project)')
$ sed s/PROJECT/$PROJECT/g spinnaker/pipeline-deploy.json > pipeline.json
$ ./spin pipeline save --gate-endpoint http://localhost:8080/gate -f pipeline.json
Pipeline save succeeded
The configuration we just created uses notifications of newly tagged images being pushed to trigger a Spinnaker pipeline.
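If we want to double-check the plumbing, the subscription the trigger relies on can be inspected directly. Note that actually pulling and acknowledging messages here would consume notifications Spinnaker is waiting for, so stick to describe:

$ gcloud pubsub subscriptions describe gcr-triggers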
It may take a while to see the following apps displayed:
In a previous step, we pushed a tag to the Cloud Source Repositories which triggered Cloud Build to build and push our image to Container Registry. We can now check on the pipeline that was triggered.
Return to the Pipelines page by clicking Pipelines.
Click Details to see more information about the pipeline's progress. This section shows the status of the deployment pipeline and its steps. Steps in blue are currently running, green ones have completed successfully, and red ones have failed. Click a stage to see details about it.
After 3 to 5 minutes the integration test phase completes and the pipeline requires manual approval to continue the deployment.
Hover over the yellow "person" icon and click Continue.
Our rollout continues to the production frontend and backend deployments. It completes after a few minutes.
To view the app, select Infrastructure > Load Balancers in the top of the Spinnaker UI.
Scroll down the list of load balancers and click Default, under sample-frontend-production.
Scroll down the details pane on the right and copy our app's IP address by clicking the clipboard button on the Ingress IP. The ingress IP link from the Spinnaker UI uses HTTPS by default, but the application is configured to use HTTP.
Note that the Ingress, added in Kubernetes v1.1, exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
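For reference, a bare-bones Ingress manifest looks roughly like the following; this is purely illustrative (it is not one of the sample app's manifests, and the host and port are made up):

# Illustrative only. Clusters of the vintage used in this tutorial accept
# extensions/v1beta1; newer clusters (v1.19+) use networking.k8s.io/v1.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-frontend-ingress        # hypothetical name
spec:
  rules:
  - host: sample.example.com           # hypothetical host
    http:
      paths:
      - path: /
        backend:
          serviceName: sample-frontend-production
          servicePort: 80              # hypothetical port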
Paste the address into the browser to view the production version of the app.
We have now manually triggered the pipeline to build, test, and deploy our app.
In this section, we'll test the pipeline end to end by making a code change, pushing a Git tag, and watching the pipeline run in response.
By pushing a Git tag that starts with "v", we trigger Cloud Build to build a new Docker image and push it to Container Registry. Spinnaker detects that the new image tag begins with "v" and triggers a pipeline to deploy the image to canaries, run tests, and roll out the same image to all pods in the deployment.
Change the color of the app from orange to blue:
$ sed -i 's/orange/blue/g' cmd/gke-info/common-service.go
Tag the change and push it to the source code repository:
$ git commit -a -m "Change color to blue"
$ git tag v1.0.1
$ git push --tags
Counting objects: 5, done.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (5/5), 401 bytes | 0 bytes/s, done.
Total 5 (delta 3), reused 0 (delta 0)
remote: Resolving deltas: 100% (3/3)
To https://source.developers.google.com/p/qwiklabs-gcp-314099969eb1e982/r/sample-app
 * [new tag]         v1.0.1 -> v1.0.1
See the new build appear in the Cloud Build Build History.
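Optionally, we can also confirm that the new tag has landed in Container Registry before Spinnaker reacts to it:

$ export PROJECT=$(gcloud info --format='value(config.project)')
$ gcloud container images list-tags gcr.io/$PROJECT/sample-app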
Once the build is done, click Pipelines to watch the pipeline start to deploy the image.
Observe the canary deployments. When the deployment is paused, waiting to roll out to production, start refreshing the tab that contains our app. Four of our backends are running the previous version of our app, while only one backend is running the canary. We should see the new, blue version of our app appear about every tenth time we refresh.
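A rough command-line way to sample that same ratio, assuming the color name actually appears somewhere in the response body (a guess about the sample app's output, so treat this as optional) and with <INGRESS_IP> replaced by the address copied earlier:

$ for i in $(seq 1 20); do curl -s http://<INGRESS_IP>/ | grep -o -m1 'blue\|orange'; done | sort | uniq -c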
After testing completes, return to the Spinnaker tab and approve the deployment.
When the pipeline completes, our app looks like the following screenshot. Note that the color has changed to blue because of our code change, and that the Version field now reads v1.0.1.
We have now successfully rolled out our app to the entire production environment!
Optionally, we can roll back this change by reverting our previous commit. Rolling back adds a new tag (v1.0.2) and pushes the tag back through the same pipeline we used to deploy v1.0.1:

$ git revert v1.0.1
[master 7d1b5c0] Revert "Change color to blue"
 1 file changed, 1 insertion(+), 1 deletion(-)
$ git tag v1.0.2
$ git push --tags
Counting objects: 5, done.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (5/5), 425 bytes | 0 bytes/s, done.
Total 5 (delta 3), reused 0 (delta 0)
remote: Resolving deltas: 100% (3/3)
To https://source.developers.google.com/p/qwiklabs-gcp-314099969eb1e982/r/sample-app
 * [new tag]         v1.0.2 -> v1.0.2
- Delete the Spinnaker installation:
$ ../helm delete --purge cd release "cd" deleted
- Delete the sample app services:
$ kubectl delete -f k8s/services service "sample-backend-canary" deleted service "sample-backend-production" deleted service "sample-frontend-canary" deleted service "sample-frontend-production" deleted
- Remove the service account IAM bindings:
$ export SA_EMAIL=$(gcloud iam service-accounts list \
    --filter="displayName:spinnaker-account" --format='value(email)')
$ export PROJECT=$(gcloud info --format='value(config.project)')
$ gcloud projects remove-iam-policy-binding $PROJECT \
    --role roles/storage.admin --member serviceAccount:$SA_EMAIL
- Delete the service account:
$ export SA_EMAIL=$(gcloud iam service-accounts list \
    --filter="displayName:spinnaker-account" --format='value(email)')
$ gcloud iam service-accounts delete $SA_EMAIL
deleted service account [spinnaker-account@...
- Delete the GKE cluster:
$ gcloud container clusters delete spinnaker-tutorial --zone=us-central1-f
...
Deleted [https://container.googleapis.com/v1/projects/qwiklabs-gcp-4b4e1c92e7208e9e/zones/us-central1-f/clusters/spinnaker-tutorial].
[1]+  Done    kubectl port-forward --namespace default $DECK_POD 8080:9000 >> /dev/null  (wd: ~)
(wd now: ~/sample-app)
- Delete the repository:
$ gcloud source repos delete sample-app ... Deleted [sample-app].
- Delete the bucket:
$ export PROJECT=$(gcloud info --format='value(config.project)') $ export BUCKET=$PROJECT-spinnaker-config $ gsutil -m rm -r gs://$BUCKET ... / [13/13 objects] 100% Done Operation completed over 13 objects. ...
- Delete our container images:
$ export PROJECT=$(gcloud info --format='value(config.project)') $ gcloud container images delete gcr.io/$PROJECT/sample-app:v1.0.0 $ gcloud container images delete gcr.io/$PROJECT/sample-app:v1.0.1
- Delete that roll back container image:
$ gcloud container images delete gcr.io/$PROJECT/sample-app:v1.0.2
This post is based on Continuous Delivery Pipelines with Spinnaker and Google Kubernetes Engine.
There are other posts that appear to be based on the same material; this one may be worth looking up: Know Everything About Spinnaker & How to Deploy Using Kubernetes Engine.