AWS : CLI (Command Line Interface)
The AWS Command Line Interface (CLI) is a unified tool to manage AWS services. With just one tool to download and configure, we can control multiple AWS services from the command line and automate them through scripts.
Ref : AWS CLI: A beginner's guide.
There are a couple of ways to install the AWS CLI:
- Via bundled installer
- Via pip
The CLI requires Python 2.6.5 or higher. To install it with the bundled installer:
$ curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
$ unzip awscli-bundle.zip
$ sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
Pip is a Python-based tool that offers convenient ways to install, upgrade, and remove Python packages and their dependencies.
Pip is the recommended method of installing the CLI on Mac and Linux (Installing the AWS Command Line Interface).
$ sudo pip install awscli
...
Successfully installed awscli-1.10.44 botocore-1.4.34 futures-3.0.5 s3transfer-0.0.1
To upgrade an existing AWS CLI installation, use the --upgrade option:
$ sudo pip install --upgrade awscli
Pip installs the aws executable to /usr/bin/aws. The awscli library (which does the actual work) is installed to the 'site-packages' folder in Python's installation directory.
Confirm that the CLI is installed correctly by viewing the help file. Open a terminal, shell or command prompt, enter aws help and press Enter:
$ aws help
This section explains how to configure settings that the AWS Command Line Interface uses when interacting with AWS, such as our security credentials and the default region.
- aws configure:
$ aws configure
AWS Access Key ID [****************34AA]:
AWS Secret Access Key [****************pxEZ]:
Default region name [us-west-1]:
Default output format [None]: json
For general use, the aws configure command is the fastest way to set up an AWS CLI installation.
- aws ec2 create-security-group:
$ aws ec2 create-security-group --group-name my-sg --description "My security group"
{
    "GroupId": "sg-efc45a8b"
}
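Because the CLI emits JSON by default, its output can be consumed directly by scripts. A minimal sketch, assuming the create-security-group response shown above has been captured (here embedded as a literal string for illustration):

```python
import json

# Sample response from `aws ec2 create-security-group`, as captured above.
response = '{ "GroupId": "sg-efc45a8b" }'

# Parse the JSON and pull out the new security group's ID.
group_id = json.loads(response)["GroupId"]
print(group_id)  # sg-efc45a8b
```

In a real pipeline the string would come from the command's stdout, e.g. via subprocess or a shell pipe.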
Note that JSON is the default output format.
The above command writes the Access Key ID and Secret Access Key to ~/.aws/config and ~/.aws/credentials:
[default]
aws_access_key_id = <access id key>
aws_secret_access_key = <secret access key>
region = us-west-1
Protect the files:
$ chmod 600 ~/.aws/config
$ chmod 600 ~/.aws/credentials
The AWS CLI looks for credentials and configuration settings in the following order:
- Command Line Options - region, output format and profile can be specified as command options to override default settings.
- Environment Variables - AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, etc.
- The AWS credentials file - located at ~/.aws/credentials on Linux, OS X, or Unix, or at C:\Users\USERNAME\.aws\credentials on Windows. This file can contain multiple named profiles in addition to a default profile.
- The CLI configuration file - typically located at ~/.aws/config on Linux, OS X, or Unix, or at C:\Users\USERNAME\.aws\config on Windows. This file can contain a default profile, named profiles, and CLI-specific configuration parameters for each.
- Instance profile credentials - these credentials can be used on EC2 instances with an assigned instance role, and are delivered through the Amazon EC2 metadata service.
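The credentials and config files are plain INI files, so they are easy to inspect programmatically. A minimal sketch, assuming a credentials file in the standard format (a string stands in for the contents of ~/.aws/credentials, with hypothetical profile names and keys):

```python
import configparser

# Stand-in for ~/.aws/credentials: a default profile plus a named "dev" profile.
sample = """\
[default]
aws_access_key_id = AKIDEFAULTEXAMPLE
aws_secret_access_key = defaultsecret

[dev]
aws_access_key_id = AKIDEVEXAMPLE
aws_secret_access_key = devsecret
"""

parser = configparser.ConfigParser()
parser.read_string(sample)

# Each INI section is one profile; list them with their access key IDs.
for profile in parser.sections():
    print(profile, parser[profile]["aws_access_key_id"])
```

To read the real file, replace read_string with parser.read(os.path.expanduser("~/.aws/credentials")).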
For more : Configuring the AWS Command Line Interface.
This section describes how to launch an EC2 instance running Ubuntu 14.04 from the command line using the AWS CLI.
Ref : Deploying a Development Environment in Amazon EC2 Using the AWS Command Line Interface
As in the previous sections, we need to run "aws configure" at the command line to set up credentials and settings.
$ aws configure
AWS Access Key ID [****************DZMA]:
AWS Secret Access Key [****************XMlf]:
Default region name [us-west-1]:
Default output format [json]:
Let's create a new security group:
$ aws ec2 create-security-group --group-name devenv-bogo-sg --description "security group for dev env in EC2"
{
    "GroupId": "sg-fc8f1198"
}
Then, add a rule that allows incoming traffic over port 22 for SSH:
$ aws ec2 authorize-security-group-ingress --group-name devenv-bogo-sg --protocol tcp --port 22 --cidr 0.0.0.0/0
We can review what we've done using the following command:
$ aws ec2 describe-security-groups
We can also check it from the AWS console.
Next, create a key pair, which allows us to connect to the instance:
$ aws ec2 create-key-pair --key-name devenv-bogo-key --query 'KeyMaterial' --output text > devenv-bogo-key.pem
We need to change the file mode so that only we can read the key file; SSH refuses private keys with open permissions.
$ chmod 400 devenv-bogo-key.pem
Now we are ready to launch an instance and connect to it.
$ aws ec2 run-instances --image-id ami-06116566 --security-group-ids sg-fc8f1198 --count 1 --instance-type t2.nano --key-name devenv-bogo-key --query 'Instances[0].InstanceId'
"i-f2b7f847"
Once the instance is up and running, the following command will retrieve the public IP address that we will use to connect to the instance:
$ aws ec2 describe-instances --instance-ids i-f2b7f847 --query 'Reservations[0].Instances[0].PublicIpAddress'
"54.153.77.171"
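The --query option used above is a JMESPath expression; walking the same path by hand over the JSON response makes clear what it selects. A minimal sketch over a trimmed-down, hypothetical describe-instances response (a real response carries many more fields):

```python
import json

# Trimmed-down, hypothetical response from `aws ec2 describe-instances`.
response = json.loads("""
{
  "Reservations": [
    {
      "Instances": [
        {
          "InstanceId": "i-f2b7f847",
          "PublicIpAddress": "54.153.77.171"
        }
      ]
    }
  ]
}
""")

# Hand-rolled equivalent of --query 'Reservations[0].Instances[0].PublicIpAddress'
ip = response["Reservations"][0]["Instances"][0]["PublicIpAddress"]
print(ip)  # 54.153.77.171
```

Doing the filtering with --query server-side in the CLI avoids this kind of post-processing, but the mapping from expression to JSON path is the same.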
To connect to the instance, use the public IP address and private key with our preferred terminal program. On Linux, OS X, or Unix, we can do this from the command line with the following command:
$ ssh -i devenv-bogo-key.pem ubuntu@54.153.77.171
ubuntu@ip-172-31-7-28:~$
We've now configured a security group, created a key pair, launched an EC2 instance, and connected to it without ever leaving the command line.
To list all buckets:
$ aws s3 ls
To list files in a bucket:
$ aws s3 ls s3://my-bucket-einsteinish
2017-04-25 09:28:32          3 ok
2017-04-25 09:27:30          3 ok.txt
2017-04-25 09:39:42          3 ok2.txt
To make a bucket (mb):
$ aws s3 mb s3://my-bucket-einsteinish-2
make_bucket: my-bucket-einsteinish-2
To remove a bucket (rb):
$ aws s3 rb s3://my-bucket-einsteinish-2
remove_bucket: my-bucket-einsteinish-2
To upload a file to a bucket (aws s3 cp, aws s3 mv, and aws s3 sync):
$ aws s3 cp ok.txt s3://my-bucket-einsteinish/ok.txt
The following command will create the ecs/jenkins prefix and copy "ecs-jenkins.json" to bogo-aws/ecs/jenkins/:
$ aws s3 cp ecs-jenkins.json s3://bogo-aws/ecs/jenkins/
upload: ./ecs-jenkins.json to s3://bogo-aws/ecs/jenkins/ecs-jenkins.json
Or we can copy an object into a bucket with --grants: read permission on the object for everyone, and full permissions (read, readacl, and writeacl) for the account associated with aws@bogotobogo.com:
$ aws s3 cp ok2.txt s3://my-bucket-einsteinish/ --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers full=emailaddress=aws@bogotobogo.com
For this bucket, we may need to edit the policy's "Effect" from "Deny" to "Allow", since it was created by Elastic Beanstalk.
To delete a non-empty bucket, we use the "--force" flag:
$ aws s3 rb s3://clusters.dev.cruxlynx.com --force
We can check our template file for syntax errors using aws cloudformation validate-template command:
$ aws cloudformation validate-template --template-url https://s3.amazonaws.com/my-cloudformation-1/ec2-instance-with-sg.template
AWS CloudFormation Sample Template EC2InstanceWithSecurityGroupSample: Create an Amazon EC2 instance running the Amazon Linux AMI. The AMI is chosen based on the region in which the stack is run. This example creates an EC2 security group for the instance to give you SSH access. **WARNING** This template creates an Amazon EC2 instance. You will be billed for the AWS resources used if you create a stack from this template.
PARAMETERS              Name of an existing EC2 KeyPair to enable SSH access to the instance  False  KeyName
PARAMETERS  0.0.0.0/0   The IP address range that can be used to SSH to the EC2 instances     False  SSHLocation
PARAMETERS  t2.small    WebServer EC2 instance type                                           False  InstanceType
To create a stack we run the aws cloudformation create-stack command. We must provide the stack name, the location of a valid template, and any input parameters. Parameters are separated with a space and the key names are case sensitive. If we mistype a parameter key name when we run aws cloudformation create-stack, AWS CloudFormation doesn't create the stack and reports that the template doesn't contain that parameter.
$ aws cloudformation create-stack --stack-name myteststack --template-url https://s3.amazonaws.com/my-cloudformation-1/ec2-instance-with-sg.template --parameters ParameterKey=KeyName,ParameterValue=einsteinish
arn:aws:cloudformation:us-east-1:526262051452:stack/myteststack/89192290-2e1b-11e7-893d-50a686e4bb1e
Note that the parameters in "ParameterKey=KeyName" should match the one in the template file. In our case:
"Parameters" : {
    "KeyName": {
      "Description" : "Name of an existing EC2 KeyPair to enable SSH access to the instance",
      "Type": "AWS::EC2::KeyPair::KeyName",
      "ConstraintDescription" : "must be the name of an existing EC2 KeyPair."
    },
    ...
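The mistyped-parameter check that CloudFormation performs can be sketched locally: load the template's Parameters section and confirm every key passed on the command line exists in it. The template and parameter values below are trimmed-down, hypothetical stand-ins:

```python
import json

# Trimmed-down, hypothetical template with a Parameters section.
template = json.loads("""
{
  "Parameters": {
    "KeyName": {
      "Description": "Name of an existing EC2 KeyPair to enable SSH access to the instance",
      "Type": "AWS::EC2::KeyPair::KeyName"
    }
  }
}
""")

# Parameters as they would be passed via --parameters ParameterKey=...,ParameterValue=...
cli_parameters = {"KeyName": "einsteinish"}

# Any key not declared in the template would make create-stack fail.
unknown = [k for k in cli_parameters if k not in template["Parameters"]]
if unknown:
    raise SystemExit("Template does not contain parameter(s): %s" % unknown)
print("All parameter keys match the template.")
```

Running this kind of pre-check before aws cloudformation create-stack catches case-sensitivity typos (e.g. "keyname" vs. "KeyName") without a round trip to AWS.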
In the command, we specified an S3 URL; however, we can use a local template file instead (--template-body file://):
$ aws cloudformation create-stack --stack-name myteststack2 --template-body file:///home/k/TEST/CloudFormation/ec2-instance-with-sg.template --parameters ParameterKey=KeyName,ParameterValue=einsteinish
arn:aws:cloudformation:us-east-1:526262051452:stack/myteststack2/cbc84e30-2e21-11e7-8841-500c28637435
If we specify a local template file, AWS CloudFormation uploads it to an Amazon S3 bucket in our AWS account. AWS CloudFormation creates a unique bucket for each region in which you upload a template file. The buckets are accessible to anyone with Amazon S3 permissions in our AWS account. If an AWS CloudFormation-created bucket already exists, the template is added to that bucket.
By default, aws cloudformation describe-stacks returns parameter values:
$ aws cloudformation describe-stacks
Ph.D. / Golden Gate Ave, San Francisco / Seoul National Univ / Carnegie Mellon / UC Berkeley / DevOps / Deep Learning / Visualization