Docker : Kubernetes Service Account, RBAC, IAM with EKS ALB, Part 1
Service account, Role, RoleBinding in EKS
AWS Fargate provides on-demand, right-sized compute capacity for containers. With AWS Fargate, we no longer have to provision, configure, or scale groups of virtual machines to run containers.
Fargate eliminates the need for us to create or manage EC2 instances for our Kubernetes applications. When our pods start, Fargate automatically allocates compute resources on-demand to run them.
The Application Load Balancer (ALB) is a popular AWS service that load balances incoming traffic at the application layer (layer 7) across multiple targets, such as pods running in a Kubernetes cluster, and is a great way to route traffic to such microservices.
In this post, we'll set up an AWS Application Load Balancer (ALB) with our EKS cluster for ingress-based load balancing to Fargate pods using the open source ALB Ingress Controller.
We'll follow the instructions in Using ALB Ingress Controller with Amazon EKS on Fargate.
We'll do the following:
- Create an Amazon EKS cluster
- Create a Fargate profile (which allows us to launch pods on Fargate)
- Implement IAM roles for service accounts on our cluster in order to give fine-grained IAM permissions to our ingress controller pods
- Deploy a simple nginx service, and expose it to the internet using an ALB.
We need eksctl, kubectl, jq, and the AWS CLI.
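Before creating anything, it may be worth a quick sanity check that the tools are installed and on the PATH (the versions will differ from the ones shown in the logs below):

$ eksctl version
$ kubectl version --client
$ jq --version
$ aws --version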
Create a cluster by running the following eksctl commands:
$ AWS_REGION=us-east-1
$ CLUSTER_NAME=eks-fargate-alb-bogo
$ eksctl create cluster --name $CLUSTER_NAME --region $AWS_REGION --fargate
2021-05-19 09:12:21 [ℹ] eksctl version 0.44.0
2021-05-19 09:12:21 [ℹ] using region us-east-1
2021-05-19 09:12:22 [ℹ] setting availability zones to [us-east-1c us-east-1f]
2021-05-19 09:12:22 [ℹ] subnets for us-east-1c - public:192.168.0.0/19 private:192.168.64.0/19
2021-05-19 09:12:22 [ℹ] subnets for us-east-1f - public:192.168.32.0/19 private:192.168.96.0/19
2021-05-19 09:12:22 [ℹ] using Kubernetes version 1.18
2021-05-19 09:12:22 [ℹ] creating EKS cluster "eks-fargate-alb-bogo" in "us-east-1" region with Fargate profile
2021-05-19 09:12:22 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --cluster=eks-fargate-alb-bogo'
2021-05-19 09:12:22 [ℹ] CloudWatch logging will not be enabled for cluster "eks-fargate-alb-bogo" in "us-east-1"
2021-05-19 09:12:22 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-1 --cluster=eks-fargate-alb-bogo'
2021-05-19 09:12:22 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "eks-fargate-alb-bogo" in "us-east-1"
2021-05-19 09:12:22 [ℹ] 2 sequential tasks: { create cluster control plane "eks-fargate-alb-bogo", 2 sequential sub-tasks: { 2 sequential sub-tasks: { wait for control plane to become ready, create fargate profiles }, create addons } }
2021-05-19 09:12:22 [ℹ] building cluster stack "eksctl-eks-fargate-alb-bogo-cluster"
2021-05-19 09:12:24 [ℹ] deploying stack "eksctl-eks-fargate-alb-bogo-cluster"
2021-05-19 09:12:54 [ℹ] waiting for CloudFormation stack "eksctl-eks-fargate-alb-bogo-cluster" ...
2021-05-19 09:50:08 [ℹ] creating Fargate profile "fp-default" on EKS cluster "eks-fargate-alb-bogo"
2021-05-19 09:54:27 [ℹ] created Fargate profile "fp-default" on EKS cluster "eks-fargate-alb-bogo"
2021-05-19 09:54:28 [ℹ] "coredns" is now schedulable onto Fargate
2021-05-19 09:56:37 [ℹ] "coredns" is now scheduled onto Fargate
2021-05-19 09:56:37 [ℹ] "coredns" pods are now scheduled onto Fargate
2021-05-19 09:56:38 [ℹ] waiting for the control plane availability...
2021-05-19 09:56:38 [✔] saved kubeconfig as "/Users/kihyuckhong/.kube/config"
2021-05-19 09:56:38 [ℹ] no tasks
2021-05-19 09:56:38 [✔] all EKS cluster resources for "eks-fargate-alb-bogo" have been created
2021-05-19 09:56:39 [ℹ] kubectl command should work with "/Users/kihyuckhong/.kube/config", try 'kubectl get nodes'
2021-05-19 09:56:39 [✔] EKS cluster "eks-fargate-alb-bogo" in "us-east-1" region is ready
When we create a cluster using eksctl with the --fargate flag, it creates not only a cluster but also a Fargate profile, which allows the cluster administrator to specify which pods run on Fargate. The default profile created by eksctl maps everything in the default and kube-system namespaces to Fargate.
We can separate the controller from the apps we run by creating new Fargate profiles. This gives us more fine-grained capabilities to manage how our pods are deployed on Fargate.
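As a sketch of what that looks like, we could inspect the profile eksctl created and, if we wanted app pods isolated from the controller, add a second profile. The profile and namespace names fp-apps and apps below are made up for illustration and are not used elsewhere in this post:

$ eksctl get fargateprofile --cluster $CLUSTER_NAME -o yaml
$ eksctl create fargateprofile \
    --cluster $CLUSTER_NAME \
    --name fp-apps \
    --namespace apps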
Once the cluster creation is completed, we can validate that everything went well by running a kubectl command:
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   36m
This response means that the cluster is running and that we are able to communicate with the Kubernetes API.
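The eksctl output above also suggests trying kubectl get nodes; on a Fargate-only cluster, each running pod (at this point just the two CoreDNS replicas) shows up as its own fargate-* node:

$ kubectl get nodes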
Now that our cluster is up and running, let's set up the OIDC identity provider (IdP) in AWS. This step is needed to give IAM permissions to a Fargate pod running in the cluster using the IAM Roles for Service Accounts (IRSA) feature. Let's set up the OIDC provider for our cluster:
$ eksctl utils associate-iam-oidc-provider --cluster $CLUSTER_NAME --approve
2021-05-19 09:57:49 [ℹ] eksctl version 0.44.0
2021-05-19 09:57:49 [ℹ] using region us-east-1
2021-05-19 09:57:50 [ℹ] will create IAM Open ID Connect provider for cluster "eks-fargate-alb-bogo" in "us-east-1"
2021-05-19 09:57:50 [✔] created IAM Open ID Connect provider for cluster "eks-fargate-alb-bogo" in "us-east-1"
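As an optional check, the cluster's OIDC issuer URL and the IAM OIDC provider that eksctl just created can be listed with the AWS CLI:

$ aws eks describe-cluster --name $CLUSTER_NAME \
    --query "cluster.identity.oidc.issuer" --output text
$ aws iam list-open-id-connect-providers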
Let's create an IAM policy that will be used by the ALB Ingress Controller deployment. This policy will later be associated with the Kubernetes service account and will allow the ALB Ingress Controller pods to create and manage the ALB's resources in our AWS account for us. Download the IAM policy example document and create it:
$ wget -O alb-ingress-iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/master/docs/examples/iam-policy.json
...
‘alb-ingress-iam-policy.json’ saved

$ aws iam create-policy --policy-name ALBIngressControllerIAMPolicy --policy-document file://alb-ingress-iam-policy.json
{
    "Policy": {
        "PolicyName": "ALBIngressControllerIAMPolicy",
        "PolicyId": "ANPAXVB5JUJ6IPDAI6FV3",
        "Arn": "arn:aws:iam::526262051452:policy/ALBIngressControllerIAMPolicy",
        "Path": "/",
        "DefaultVersionId": "v1",
        "AttachmentCount": 0,
        "PermissionsBoundaryUsageCount": 0,
        "IsAttachable": true,
        "CreateDate": "2021-05-19T17:00:13+00:00",
        "UpdateDate": "2021-05-19T17:00:13+00:00"
    }
}
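The Arn field in the response is what we'll attach to the service account shortly. Instead of hard-coding the account ID, we could capture the ARN with the same list-policies query the cleanup section uses later; ALB_POLICY_ARN is simply a variable name chosen for this sketch:

$ ALB_POLICY_ARN=$(aws iam list-policies \
    --query 'Policies[?PolicyName==`ALBIngressControllerIAMPolicy`].Arn' --output text)
$ echo $ALB_POLICY_ARN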
Define env variables:
$ STACK_NAME=eksctl-$CLUSTER_NAME-cluster
$ VPC_ID=$(aws cloudformation describe-stacks --stack-name "$STACK_NAME" | jq -r '[.Stacks[0].Outputs[] | {key: .OutputKey, value: .OutputValue}] | from_entries' | jq -r '.VPC')
$ AWS_ACCOUNT_ID=$(aws sts get-caller-identity | jq -r '.Account')
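Echoing the variables before moving on is cheap insurance, since an empty VPC_ID would later be passed straight into the ingress controller's --aws-vpc-id argument:

$ echo "VPC: $VPC_ID, account: $AWS_ACCOUNT_ID, region: $AWS_REGION"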
Let's create the Cluster Role and Role Binding:
$ cat > rbac-role.yaml <<-EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
rules:
  - apiGroups:
      - ""
      - extensions
    resources:
      - configmaps
      - endpoints
      - events
      - ingresses
      - ingresses/status
      - services
    verbs:
      - create
      - get
      - list
      - update
      - watch
      - patch
  - apiGroups:
      - ""
      - extensions
    resources:
      - nodes
      - pods
      - secrets
      - services
      - namespaces
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: alb-ingress-controller
subjects:
  - kind: ServiceAccount
    name: alb-ingress-controller
    namespace: kube-system
EOF
Apply the manifest to create the role and its binding:
$ kubectl apply -f rbac-role.yaml
clusterrole.rbac.authorization.k8s.io/alb-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/alb-ingress-controller created
As we can see from the output, the commands created two resources for us.
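A quick way to sanity-check the RBAC wiring is kubectl auth can-i with impersonation. RBAC matches subjects by name, so this works even though the service account object itself is only created in the next step; given the rules above, the second command should answer yes:

$ kubectl describe clusterrole alb-ingress-controller
$ kubectl auth can-i list ingresses.extensions \
    --as=system:serviceaccount:kube-system:alb-ingress-controller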
Next, the Kubernetes Service Account:
$ eksctl create iamserviceaccount \
    --name alb-ingress-controller \
    --namespace kube-system \
    --cluster $CLUSTER_NAME \
    --attach-policy-arn arn:aws:iam::$AWS_ACCOUNT_ID:policy/ALBIngressControllerIAMPolicy \
    --approve
2021-05-19 10:04:25 [ℹ] eksctl version 0.44.0
2021-05-19 10:04:25 [ℹ] using region us-east-1
2021-05-19 10:04:26 [ℹ] 1 iamserviceaccount (kube-system/alb-ingress-controller) was included (based on the include/exclude rules)
2021-05-19 10:04:26 [!] serviceaccounts that exists in Kubernetes will be excluded, use --override-existing-serviceaccounts to override
2021-05-19 10:04:26 [ℹ] 1 task: { 2 sequential sub-tasks: { create IAM role for serviceaccount "kube-system/alb-ingress-controller", create serviceaccount "kube-system/alb-ingress-controller" } }
2021-05-19 10:04:26 [ℹ] building iamserviceaccount stack "eksctl-eks-fargate-alb-bogo-addon-iamserviceaccount-kube-system-alb-ingress-controller"
2021-05-19 10:04:27 [ℹ] deploying stack "eksctl-eks-fargate-alb-bogo-addon-iamserviceaccount-kube-system-alb-ingress-controller"
2021-05-19 10:04:27 [ℹ] waiting for CloudFormation stack "eksctl-eks-fargate-alb-bogo-addon-iamserviceaccount-kube-system-alb-ingress-controller"
2021-05-19 10:04:31 [ℹ] waiting for CloudFormation stack "eksctl-eks-fargate-alb-bogo-addon-iamserviceaccount-kube-system-alb-ingress-controller"
2021-05-19 10:04:48 [ℹ] waiting for CloudFormation stack "eksctl-eks-fargate-alb-bogo-addon-iamserviceaccount-kube-system-alb-ingress-controller"
2021-05-19 10:04:49 [ℹ] created serviceaccount "kube-system/alb-ingress-controller"
This eksctl command will deploy a new CloudFormation stack with an IAM role.
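Behind the scenes the service account gets an eks.amazonaws.com/role-arn annotation pointing at that role, which is what allows the controller pod to sign AWS API calls; we can confirm the annotation is in place before deploying the controller:

$ kubectl describe serviceaccount alb-ingress-controller -n kube-system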
Let’s now deploy the ALB Ingress Controller to our cluster:
$ cat > alb-ingress-controller.yaml <<-EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: alb-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alb-ingress-controller
    spec:
      containers:
        - name: alb-ingress-controller
          args:
            - --ingress-class=alb
            - --cluster-name=$CLUSTER_NAME
            - --aws-vpc-id=$VPC_ID
            - --aws-region=$AWS_REGION
          image: docker.io/amazon/aws-alb-ingress-controller:v1.1.6
      serviceAccountName: alb-ingress-controller
EOF

$ kubectl apply -f alb-ingress-controller.yaml
deployment.apps/alb-ingress-controller created

$ kubectl get pods -n kube-system
NAME                                     READY   STATUS    RESTARTS   AGE
alb-ingress-controller-7c7f67575-sdd2v   1/1     Running   0          111s
coredns-79d688dfd8-7wn9w                 1/1     Running   0          50m
coredns-79d688dfd8-v85wz                 1/1     Running   0          50m
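If an ingress later fails to get an ADDRESS, the controller's logs are the first place to look, so a quick peek right after the rollout doesn't hurt:

$ kubectl logs -n kube-system deployment/alb-ingress-controller | tail -n 20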
Now that we have our ingress controller running, we can deploy the application to the cluster and create an ingress resource to expose it.
$ cat > nginx-deployment.yaml <<-EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "nginx-deployment"
  namespace: "default"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: "nginx"
    spec:
      containers:
        - image: nginx:latest
          imagePullPolicy: Always
          name: "nginx"
          ports:
            - containerPort: 80
EOF

$ kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
Then, let's create a service and an ingress so we can expose the NGINX pods:
$ cat > nginx-service.yaml <<-EOF
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
  name: "nginx-service"
  namespace: "default"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app: "nginx"
EOF

$ kubectl apply -f nginx-service.yaml
service/nginx-service created

$ cat > nginx-ingress.yaml <<-EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "nginx-ingress"
  namespace: "default"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: nginx-ingress
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: "nginx-service"
              servicePort: 80
EOF

$ kubectl apply -f nginx-ingress.yaml
ingress.extensions/nginx-ingress created

$ kubectl get ingress nginx-ingress
NAME            CLASS    HOSTS   ADDRESS                                                                   PORTS   AGE
nginx-ingress   <none>   *       8c14e961-default-nginxingr-29e9-1292497776.us-east-1.elb.amazonaws.com   80      17s

$ LOADBALANCER_PREFIX=$(kubectl get ingress nginx-ingress -o json | jq -r '.status.loadBalancer.ingress[0].hostname' | cut -d- -f1)
$ echo $LOADBALANCER_PREFIX
8c14e961

$ TARGETGROUP_ARN=$(aws elbv2 describe-target-groups | jq -r '.TargetGroups[].TargetGroupArn' | grep $LOADBALANCER_PREFIX)
$ echo $TARGETGROUP_ARN
arn:aws:elasticloadbalancing:us-east-1:526262051452:targetgroup/8c14e961-f8c4797641154edfb62/c1f40081fd607c40

$ aws elbv2 describe-target-health --target-group-arn $TARGETGROUP_ARN | jq -r '.TargetHealthDescriptions[].TargetHealth.State'
healthy
healthy
healthy

$ kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE     IP               NODE                                     NOMINATED NODE   READINESS GATES
nginx-deployment-cc7df4f8f-9whcm   1/1     Running   0          9m28s   192.168.79.53    fargate-ip-192-168-79-53.ec2.internal    <none>           <none>
nginx-deployment-cc7df4f8f-lt7ql   1/1     Running   0          9m28s   192.168.101.8    fargate-ip-192-168-101-8.ec2.internal    <none>           <none>
nginx-deployment-cc7df4f8f-qcc2n   1/1     Running   0          9m28s   192.168.127.30   fargate-ip-192-168-127-30.ec2.internal   <none>           <none>
With that, we can run our application in containers on Amazon EKS without managing any infrastructure, and expose it to the internet or to other applications through the AWS Application Load Balancer.
Put the ALB address, 8c14e961-default-nginxingr-29e9-1292497776.us-east-1.elb.amazonaws.com, in the browser:
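Alternatively, we can pull the hostname out of the ingress status and test it with curl from the terminal; it may take a minute or two after the targets report healthy, and we expect the default NGINX welcome page:

$ ALB_HOSTNAME=$(kubectl get ingress nginx-ingress \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ curl -s http://$ALB_HOSTNAME | grep -i "welcome to nginx"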
When we're done, we can clean up all the resources we created, in reverse order:

$ kubectl delete -f nginx-ingress.yaml
ingress.extensions "nginx-ingress" deleted

$ kubectl delete -f nginx-service.yaml
service "nginx-service" deleted

$ kubectl delete -f nginx-deployment.yaml
deployment.apps "nginx-deployment" deleted

$ kubectl delete -f alb-ingress-controller.yaml
deployment.apps "alb-ingress-controller" deleted

$ kubectl delete -f rbac-role.yaml
clusterrole.rbac.authorization.k8s.io "alb-ingress-controller" deleted
clusterrolebinding.rbac.authorization.k8s.io "alb-ingress-controller" deleted

$ eksctl delete iamserviceaccount \
    --name alb-ingress-controller \
    --namespace kube-system \
    --cluster $CLUSTER_NAME
2021-05-19 12:49:27 [ℹ] eksctl version 0.44.0
2021-05-19 12:49:27 [ℹ] using region us-east-1
2021-05-19 12:49:28 [ℹ] 1 iamserviceaccount (kube-system/alb-ingress-controller) was included (based on the include/exclude rules)
2021-05-19 12:49:29 [ℹ] 1 task: { 2 sequential sub-tasks: { delete IAM role for serviceaccount "kube-system/alb-ingress-controller" [async], delete serviceaccount "kube-system/alb-ingress-controller" } }
2021-05-19 12:49:29 [ℹ] will delete stack "eksctl-eks-fargate-alb-bogo-addon-iamserviceaccount-kube-system-alb-ingress-controller"
2021-05-19 12:49:29 [ℹ] deleted serviceaccount "kube-system/alb-ingress-controller"

$ ALBIngressControllerIAMPolicyARN=$(aws iam list-policies --query 'Policies[?PolicyName==`ALBIngressControllerIAMPolicy`].Arn' --output text)
$ aws iam delete-policy --policy-arn $ALBIngressControllerIAMPolicyARN

$ eksctl delete cluster --name $CLUSTER_NAME --region $AWS_REGION
2021-05-19 12:51:15 [ℹ] eksctl version 0.44.0
2021-05-19 12:51:15 [ℹ] using region us-east-1
2021-05-19 12:51:15 [ℹ] deleting EKS cluster "eks-fargate-alb-bogo"
2021-05-19 12:51:16 [ℹ] deleting Fargate profile "fp-default"
2021-05-19 12:55:34 [ℹ] deleted Fargate profile "fp-default"
2021-05-19 12:55:34 [ℹ] deleted 1 Fargate profile(s)
2021-05-19 12:55:35 [✔] kubeconfig has been updated
2021-05-19 12:55:35 [ℹ] cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
2021-05-19 12:55:37 [ℹ] 2 sequential tasks: { delete IAM OIDC provider, delete cluster control plane "eks-fargate-alb-bogo" [async] }
2021-05-19 12:55:38 [ℹ] will delete stack "eksctl-eks-fargate-alb-bogo-cluster"
2021-05-19 12:55:38 [✔] all cluster resources were deleted
References:
- How do I set up the AWS Load Balancer Controller on an Amazon EKS cluster for Fargate?
- https://github.com/aws/eks-charts.git
- Using ALB Ingress Controller with Amazon EKS on Fargate
Here are the two CloudFormation templates created during this post.
eksctl-eks-fargate-alb-bogo-addon-iamserviceaccount-kube-system-alb-ingress-controller:
{ "AWSTemplateFormatVersion": "2010-09-09", "Description": "IAM role for serviceaccount \"kube-system/alb-ingress-controller\" [created and managed by eksctl]", "Resources": { "Role1": { "Type": "AWS::IAM::Role", "Properties": { "AssumeRolePolicyDocument": { "Statement": [ { "Action": [ "sts:AssumeRoleWithWebIdentity" ], "Condition": { "StringEquals": { "oidc.eks.us-east-1.amazonaws.com/id/62CA4123A8495432655F46B2EFE83DC2:aud": "sts.amazonaws.com", "oidc.eks.us-east-1.amazonaws.com/id/62CA4123A8495432655F46B2EFE83DC2:sub": "system:serviceaccount:kube-system:alb-ingress-controller" } }, "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::526262051452:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/62CA4123A8495432655F46B2EFE83DC2" } } ], "Version": "2012-10-17" }, "ManagedPolicyArns": [ "arn:aws:iam::526262051452:policy/ALBIngressControllerIAMPolicy" ] } } }, "Outputs": { "Role1": { "Value": { "Fn::GetAtt": "Role1.Arn" } } } }
eksctl-eks-fargate-alb-bogo-cluster:
{ "AWSTemplateFormatVersion": "2010-09-09", "Description": "EKS cluster (dedicated VPC: true, dedicated IAM: true) [created and managed by eksctl]", "Mappings": { "ServicePrincipalPartitionMap": { "aws": { "EC2": "ec2.amazonaws.com", "EKS": "eks.amazonaws.com", "EKSFargatePods": "eks-fargate-pods.amazonaws.com" }, "aws-cn": { "EC2": "ec2.amazonaws.com.cn", "EKS": "eks.amazonaws.com", "EKSFargatePods": "eks-fargate-pods.amazonaws.com" }, "aws-us-gov": { "EC2": "ec2.amazonaws.com", "EKS": "eks.amazonaws.com", "EKSFargatePods": "eks-fargate-pods.amazonaws.com" } } }, "Resources": { "ClusterSharedNodeSecurityGroup": { "Type": "AWS::EC2::SecurityGroup", "Properties": { "GroupDescription": "Communication between all nodes in the cluster", "Tags": [ { "Key": "Name", "Value": { "Fn::Sub": "${AWS::StackName}/ClusterSharedNodeSecurityGroup" } } ], "VpcId": { "Ref": "VPC" } } }, "ControlPlane": { "Type": "AWS::EKS::Cluster", "Properties": { "Name": "eks-fargate-alb-bogo", "ResourcesVpcConfig": { "SecurityGroupIds": [ { "Ref": "ControlPlaneSecurityGroup" } ], "SubnetIds": [ { "Ref": "SubnetPublicUSEAST1C" }, { "Ref": "SubnetPublicUSEAST1F" }, { "Ref": "SubnetPrivateUSEAST1C" }, { "Ref": "SubnetPrivateUSEAST1F" } ] }, "RoleArn": { "Fn::GetAtt": [ "ServiceRole", "Arn" ] }, "Version": "1.18" } }, "ControlPlaneSecurityGroup": { "Type": "AWS::EC2::SecurityGroup", "Properties": { "GroupDescription": "Communication between the control plane and worker nodegroups", "Tags": [ { "Key": "Name", "Value": { "Fn::Sub": "${AWS::StackName}/ControlPlaneSecurityGroup" } } ], "VpcId": { "Ref": "VPC" } } }, "FargatePodExecutionRole": { "Type": "AWS::IAM::Role", "Properties": { "AssumeRolePolicyDocument": { "Statement": [ { "Action": [ "sts:AssumeRole" ], "Effect": "Allow", "Principal": { "Service": [ { "Fn::FindInMap": [ "ServicePrincipalPartitionMap", { "Ref": "AWS::Partition" }, "EKSFargatePods" ] } ] } } ], "Version": "2012-10-17" }, "ManagedPolicyArns": [ { "Fn::Sub": "arn:${AWS::Partition}:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy" } ], "Tags": [ { "Key": "Name", "Value": { "Fn::Sub": "${AWS::StackName}/FargatePodExecutionRole" } } ] } }, "IngressDefaultClusterToNodeSG": { "Type": "AWS::EC2::SecurityGroupIngress", "Properties": { "Description": "Allow managed and unmanaged nodes to communicate with each other (all ports)", "FromPort": 0, "GroupId": { "Ref": "ClusterSharedNodeSecurityGroup" }, "IpProtocol": "-1", "SourceSecurityGroupId": { "Fn::GetAtt": [ "ControlPlane", "ClusterSecurityGroupId" ] }, "ToPort": 65535 } }, "IngressInterNodeGroupSG": { "Type": "AWS::EC2::SecurityGroupIngress", "Properties": { "Description": "Allow nodes to communicate with each other (all ports)", "FromPort": 0, "GroupId": { "Ref": "ClusterSharedNodeSecurityGroup" }, "IpProtocol": "-1", "SourceSecurityGroupId": { "Ref": "ClusterSharedNodeSecurityGroup" }, "ToPort": 65535 } }, "IngressNodeToDefaultClusterSG": { "Type": "AWS::EC2::SecurityGroupIngress", "Properties": { "Description": "Allow unmanaged nodes to communicate with control plane (all ports)", "FromPort": 0, "GroupId": { "Fn::GetAtt": [ "ControlPlane", "ClusterSecurityGroupId" ] }, "IpProtocol": "-1", "SourceSecurityGroupId": { "Ref": "ClusterSharedNodeSecurityGroup" }, "ToPort": 65535 } }, "InternetGateway": { "Type": "AWS::EC2::InternetGateway", "Properties": { "Tags": [ { "Key": "Name", "Value": { "Fn::Sub": "${AWS::StackName}/InternetGateway" } } ] } }, "NATGateway": { "Type": "AWS::EC2::NatGateway", "Properties": { "AllocationId": { "Fn::GetAtt": [ "NATIP", 
"AllocationId" ] }, "SubnetId": { "Ref": "SubnetPublicUSEAST1C" }, "Tags": [ { "Key": "Name", "Value": { "Fn::Sub": "${AWS::StackName}/NATGateway" } } ] } }, "NATIP": { "Type": "AWS::EC2::EIP", "Properties": { "Domain": "vpc", "Tags": [ { "Key": "Name", "Value": { "Fn::Sub": "${AWS::StackName}/NATIP" } } ] } }, "NATPrivateSubnetRouteUSEAST1C": { "Type": "AWS::EC2::Route", "Properties": { "DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": { "Ref": "NATGateway" }, "RouteTableId": { "Ref": "PrivateRouteTableUSEAST1C" } } }, "NATPrivateSubnetRouteUSEAST1F": { "Type": "AWS::EC2::Route", "Properties": { "DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": { "Ref": "NATGateway" }, "RouteTableId": { "Ref": "PrivateRouteTableUSEAST1F" } } }, "PolicyCloudWatchMetrics": { "Type": "AWS::IAM::Policy", "Properties": { "PolicyDocument": { "Statement": [ { "Action": [ "cloudwatch:PutMetricData" ], "Effect": "Allow", "Resource": "*" } ], "Version": "2012-10-17" }, "PolicyName": { "Fn::Sub": "${AWS::StackName}-PolicyCloudWatchMetrics" }, "Roles": [ { "Ref": "ServiceRole" } ] } }, "PolicyELBPermissions": { "Type": "AWS::IAM::Policy", "Properties": { "PolicyDocument": { "Statement": [ { "Action": [ "ec2:DescribeAccountAttributes", "ec2:DescribeAddresses", "ec2:DescribeInternetGateways" ], "Effect": "Allow", "Resource": "*" } ], "Version": "2012-10-17" }, "PolicyName": { "Fn::Sub": "${AWS::StackName}-PolicyELBPermissions" }, "Roles": [ { "Ref": "ServiceRole" } ] } }, "PrivateRouteTableUSEAST1C": { "Type": "AWS::EC2::RouteTable", "Properties": { "Tags": [ { "Key": "Name", "Value": { "Fn::Sub": "${AWS::StackName}/PrivateRouteTableUSEAST1C" } } ], "VpcId": { "Ref": "VPC" } } }, "PrivateRouteTableUSEAST1F": { "Type": "AWS::EC2::RouteTable", "Properties": { "Tags": [ { "Key": "Name", "Value": { "Fn::Sub": "${AWS::StackName}/PrivateRouteTableUSEAST1F" } } ], "VpcId": { "Ref": "VPC" } } }, "PublicRouteTable": { "Type": "AWS::EC2::RouteTable", "Properties": { "Tags": [ { "Key": "Name", "Value": { "Fn::Sub": "${AWS::StackName}/PublicRouteTable" } } ], "VpcId": { "Ref": "VPC" } } }, "PublicSubnetRoute": { "Type": "AWS::EC2::Route", "Properties": { "DestinationCidrBlock": "0.0.0.0/0", "GatewayId": { "Ref": "InternetGateway" }, "RouteTableId": { "Ref": "PublicRouteTable" } }, "DependsOn": [ "VPCGatewayAttachment" ] }, "RouteTableAssociationPrivateUSEAST1C": { "Type": "AWS::EC2::SubnetRouteTableAssociation", "Properties": { "RouteTableId": { "Ref": "PrivateRouteTableUSEAST1C" }, "SubnetId": { "Ref": "SubnetPrivateUSEAST1C" } } }, "RouteTableAssociationPrivateUSEAST1F": { "Type": "AWS::EC2::SubnetRouteTableAssociation", "Properties": { "RouteTableId": { "Ref": "PrivateRouteTableUSEAST1F" }, "SubnetId": { "Ref": "SubnetPrivateUSEAST1F" } } }, "RouteTableAssociationPublicUSEAST1C": { "Type": "AWS::EC2::SubnetRouteTableAssociation", "Properties": { "RouteTableId": { "Ref": "PublicRouteTable" }, "SubnetId": { "Ref": "SubnetPublicUSEAST1C" } } }, "RouteTableAssociationPublicUSEAST1F": { "Type": "AWS::EC2::SubnetRouteTableAssociation", "Properties": { "RouteTableId": { "Ref": "PublicRouteTable" }, "SubnetId": { "Ref": "SubnetPublicUSEAST1F" } } }, "ServiceRole": { "Type": "AWS::IAM::Role", "Properties": { "AssumeRolePolicyDocument": { "Statement": [ { "Action": [ "sts:AssumeRole" ], "Effect": "Allow", "Principal": { "Service": [ { "Fn::FindInMap": [ "ServicePrincipalPartitionMap", { "Ref": "AWS::Partition" }, "EKS" ] } ] } } ], "Version": "2012-10-17" }, "ManagedPolicyArns": [ { "Fn::Sub": 
"arn:${AWS::Partition}:iam::aws:policy/AmazonEKSClusterPolicy" }, { "Fn::Sub": "arn:${AWS::Partition}:iam::aws:policy/AmazonEKSVPCResourceController" } ], "Tags": [ { "Key": "Name", "Value": { "Fn::Sub": "${AWS::StackName}/ServiceRole" } } ] } }, "SubnetPrivateUSEAST1C": { "Type": "AWS::EC2::Subnet", "Properties": { "AvailabilityZone": "us-east-1c", "CidrBlock": "192.168.64.0/19", "Tags": [ { "Key": "kubernetes.io/role/internal-elb", "Value": "1" }, { "Key": "Name", "Value": { "Fn::Sub": "${AWS::StackName}/SubnetPrivateUSEAST1C" } } ], "VpcId": { "Ref": "VPC" } } }, "SubnetPrivateUSEAST1F": { "Type": "AWS::EC2::Subnet", "Properties": { "AvailabilityZone": "us-east-1f", "CidrBlock": "192.168.96.0/19", "Tags": [ { "Key": "kubernetes.io/role/internal-elb", "Value": "1" }, { "Key": "Name", "Value": { "Fn::Sub": "${AWS::StackName}/SubnetPrivateUSEAST1F" } } ], "VpcId": { "Ref": "VPC" } } }, "SubnetPublicUSEAST1C": { "Type": "AWS::EC2::Subnet", "Properties": { "AvailabilityZone": "us-east-1c", "CidrBlock": "192.168.0.0/19", "MapPublicIpOnLaunch": true, "Tags": [ { "Key": "kubernetes.io/role/elb", "Value": "1" }, { "Key": "Name", "Value": { "Fn::Sub": "${AWS::StackName}/SubnetPublicUSEAST1C" } } ], "VpcId": { "Ref": "VPC" } } }, "SubnetPublicUSEAST1F": { "Type": "AWS::EC2::Subnet", "Properties": { "AvailabilityZone": "us-east-1f", "CidrBlock": "192.168.32.0/19", "MapPublicIpOnLaunch": true, "Tags": [ { "Key": "kubernetes.io/role/elb", "Value": "1" }, { "Key": "Name", "Value": { "Fn::Sub": "${AWS::StackName}/SubnetPublicUSEAST1F" } } ], "VpcId": { "Ref": "VPC" } } }, "VPC": { "Type": "AWS::EC2::VPC", "Properties": { "CidrBlock": "192.168.0.0/16", "EnableDnsHostnames": true, "EnableDnsSupport": true, "Tags": [ { "Key": "Name", "Value": { "Fn::Sub": "${AWS::StackName}/VPC" } } ] } }, "VPCGatewayAttachment": { "Type": "AWS::EC2::VPCGatewayAttachment", "Properties": { "InternetGatewayId": { "Ref": "InternetGateway" }, "VpcId": { "Ref": "VPC" } } } }, "Outputs": { "ARN": { "Value": { "Fn::GetAtt": [ "ControlPlane", "Arn" ] }, "Export": { "Name": { "Fn::Sub": "${AWS::StackName}::ARN" } } }, "CertificateAuthorityData": { "Value": { "Fn::GetAtt": [ "ControlPlane", "CertificateAuthorityData" ] } }, "ClusterSecurityGroupId": { "Value": { "Fn::GetAtt": [ "ControlPlane", "ClusterSecurityGroupId" ] }, "Export": { "Name": { "Fn::Sub": "${AWS::StackName}::ClusterSecurityGroupId" } } }, "ClusterStackName": { "Value": { "Ref": "AWS::StackName" } }, "Endpoint": { "Value": { "Fn::GetAtt": [ "ControlPlane", "Endpoint" ] }, "Export": { "Name": { "Fn::Sub": "${AWS::StackName}::Endpoint" } } }, "FargatePodExecutionRoleARN": { "Value": { "Fn::GetAtt": [ "FargatePodExecutionRole", "Arn" ] }, "Export": { "Name": { "Fn::Sub": "${AWS::StackName}::FargatePodExecutionRoleARN" } } }, "FeatureNATMode": { "Value": "Single" }, "SecurityGroup": { "Value": { "Ref": "ControlPlaneSecurityGroup" }, "Export": { "Name": { "Fn::Sub": "${AWS::StackName}::SecurityGroup" } } }, "ServiceRoleARN": { "Value": { "Fn::GetAtt": [ "ServiceRole", "Arn" ] }, "Export": { "Name": { "Fn::Sub": "${AWS::StackName}::ServiceRoleARN" } } }, "SharedNodeSecurityGroup": { "Value": { "Ref": "ClusterSharedNodeSecurityGroup" }, "Export": { "Name": { "Fn::Sub": "${AWS::StackName}::SharedNodeSecurityGroup" } } }, "SubnetsPrivate": { "Value": { "Fn::Join": [ ",", [ { "Ref": "SubnetPrivateUSEAST1C" }, { "Ref": "SubnetPrivateUSEAST1F" } ] ] }, "Export": { "Name": { "Fn::Sub": "${AWS::StackName}::SubnetsPrivate" } } }, "SubnetsPublic": { "Value": { "Fn::Join": [ ",", [ 
{ "Ref": "SubnetPublicUSEAST1C" }, { "Ref": "SubnetPublicUSEAST1F" } ] ] }, "Export": { "Name": { "Fn::Sub": "${AWS::StackName}::SubnetsPublic" } } }, "VPC": { "Value": { "Ref": "VPC" }, "Export": { "Name": { "Fn::Sub": "${AWS::StackName}::VPC" } } } } }