AWS Identity and Access Management (IAM) Roles for Amazon EC2
We'll learn how to create and use an IAM role. In this tutorial, we'll use code that creates an S3 bucket via the Python boto module: the first sample has the credentials hard-coded, and the second uses an IAM role and requires no credentials.
When an application needs AWS resources, it must sign its API requests with AWS credentials. So, as application developers, we need a strategy for managing credentials for our applications that run on EC2 instances.
Another example may be when a user from one AWS account needs access to resources of another AWS account.
However, it's challenging to securely distribute credentials to each instance. Embedding the security credentials within the application itself is easy, but it's not safe: anyone who can read the code can extract the credentials and misuse them.
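To see how easy hard-coded credentials are to spot, here is a minimal sketch of a scanner that flags AWS access key IDs in source text. The `AKIA` + 16-character pattern is the standard format of long-term access key IDs, and the sample key is AWS's documented example key; the helper itself is our own illustration, not part of any AWS tooling:

```python
import re

# Long-term AWS access key IDs look like "AKIA" followed by
# 16 uppercase letters/digits.
ACCESS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_hardcoded_keys(source: str):
    """Return any substrings of the source text that look like
    AWS access key IDs."""
    return ACCESS_KEY_RE.findall(source)

sample = "conn = S3Connection(aws_access_key_id='AKIAIOSFODNN7EXAMPLE')"
print(find_hardcoded_keys(sample))  # ['AKIAIOSFODNN7EXAMPLE']
```

Anything this finds in a repository is a credential that should be rotated and replaced with an IAM role.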
We can resolve this situation by using IAM roles. A role defines a set of permissions to access resources, but the permissions are not linked to any specific IAM user or group.
Instead, applications or services assume the role (and its permissions) at run time. This means that AWS provides the application or user with temporary security credentials that can be used whenever they need access to those resources.
IAM roles are designed so that our applications can securely make API requests from our instances, without requiring us to manage the security credentials that the applications use. Instead of creating and distributing our AWS credentials, we can delegate permission to make API requests using IAM roles as follows:
- Create an IAM role.
- Define which accounts or AWS services can assume the role.
- Define which API actions and resources the application can use after assuming the role.
- Specify the role when we launch our instances.
- Have the application retrieve a set of temporary credentials and use them.
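The "which accounts or AWS services can assume the role" part of the steps above is expressed as a trust policy attached to the role. For an EC2 role, the trust policy looks roughly like this (a sketch of the standard document the console generates for us):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

This says only the EC2 service may assume the role on behalf of our instances; the permissions themselves come from the policies we attach next.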
The following Python script runs on an EC2 instance without an IAM role. The credentials are hard-coded inside the script:
import boto.s3
from boto.s3.connection import S3Connection
from boto.s3.key import Key

# 1. Hard-coded credentials
conn = boto.s3.connection.S3Connection(
    aws_access_key_id='AK***E26A',
    aws_secret_access_key='Jepc***xLe43y')

# 2. Using env. config
#aws_access_key_id = boto.config.get('Credentials', 'aws_access_key_id')
#aws_secret_access_key = boto.config.get('Credentials', 'aws_secret_access_key')
#conn = boto.connect_s3(aws_access_key_id, aws_secret_access_key)

my_bucket = conn.create_bucket('iam_role_test')

# Represents a key (object) in an S3 bucket: boto.s3.key.Key
write = Key(my_bucket)
write.key = 'test_file'
write.set_contents_from_string('Python boto')

read = Key(my_bucket)
read.key = 'test_file'

print('The content of a S3 bucket has been created by %s!' % (read.get_contents_as_string()))
Note that we provided the access key ID and secret access key inside the code.
Let's run the script:
$ python s3-boto.py
The content of a S3 bucket has been created by Python boto!
As we can see, an S3 bucket (iam_role_test) has been created with an object inside (test_file).
So let's create a new role that allows our script on an EC2 instance to run without the access keys.
From the IAM console, choose Roles and then Create New Role.
Give the role a name, click Next Step, and select Amazon EC2:
Then keep the default Attach Policy template, find Amazon S3 Full Access, and select it:
The next step will show us what the policy looks like:
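In essence, the Amazon S3 Full Access managed policy is a JSON document along these lines (simplified here for illustration):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
```

It grants every S3 action on every bucket, which is convenient for a tutorial; in production we would scope the `Action` and `Resource` down to what the application actually needs.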
Create the role.
Now that the role has been created, let's launch another EC2 instance with this role. Note that we need to specify the role for the EC2 instance at launch:
Here is the code without any credentials:
# Testing IAM
# Access S3 using IAM
# No security credentials needed
# Run from an instance with an IAM role defined
import boto.s3
from boto.s3.connection import S3Connection
from boto.s3.key import Key

# 1. Hard-coded credentials
# conn = boto.s3.connection.S3Connection(aws_access_key_id='AK***E26A',
#     aws_secret_access_key='Jepc***xLe43y')

# 2. Using env. config such as .boto
#aws_access_key_id = boto.config.get('Credentials', 'aws_access_key_id')
#aws_secret_access_key = boto.config.get('Credentials', 'aws_secret_access_key')
#conn = boto.connect_s3(aws_access_key_id, aws_secret_access_key)

# 3. Using IAM - no credentials given
conn = boto.s3.connection.S3Connection()

my_bucket = conn.create_bucket('iam_role_test_2')

# Represents a key (object) in an S3 bucket: boto.s3.key.Key
write = Key(my_bucket)
write.key = 'test_file'
write.set_contents_from_string('Python boto')

read = Key(my_bucket)
read.key = 'test_file'

print('The content of a S3 bucket has been created by %s!' % (read.get_contents_as_string()))
We get the same output:
ubuntu@ip-172-31-8-26:~$ python s3-boto-2.py
The content of a S3 bucket has been created by Python boto!
Because instance metadata is available from within the running instance, we do not need to use the Amazon EC2 console or the AWS CLI. This can be helpful when we're writing scripts to run from the instance. For example, we can read the instance's local IP address from instance metadata to manage a connection to an external application.
To view all categories of instance metadata from within a running instance, use the following URI:
ubuntu@ip-172-31-8-26:~$ curl http://169.254.169.254/latest/meta-data/
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
iam/
instance-action
instance-id
instance-type
local-hostname
local-ipv4
mac
metrics/
network/
placement/
profile
public-hostname
public-ipv4
public-keys/
reservation-id
security-groups
The temporary security credentials for our s3_role:
ubuntu@ip-172-31-8-26:~$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/s3_role
{
  "Code" : "Success",
  "LastUpdated" : "2015-04-20T20:01:32Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASI***IWQ",
  "SecretAccessKey" : "z7d***69",
  "Token" : "AQoDY***QU=",
  "Expiration" : "2015-04-21T02:36:39Z"
}
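This is exactly the document boto reads behind the scenes. An application can also fetch and parse it itself; here is a minimal sketch of a parsing helper (the helper name and the dict layout are our own illustration, and the sample uses the masked values from the output above):

```python
import json
from datetime import datetime, timezone

def parse_temporary_credentials(doc: str) -> dict:
    """Parse the JSON returned by the instance-metadata
    security-credentials endpoint into a plain dict."""
    creds = json.loads(doc)
    if creds.get("Code") != "Success":
        raise RuntimeError("metadata service did not return credentials")
    return {
        "access_key": creds["AccessKeyId"],
        "secret_key": creds["SecretAccessKey"],
        "token": creds["Token"],
        # Expiration is an ISO-8601 UTC timestamp; callers should
        # re-fetch credentials before this time passes.
        "expires": datetime.strptime(
            creds["Expiration"], "%Y-%m-%dT%H:%M:%SZ"
        ).replace(tzinfo=timezone.utc),
    }

# Sample document shaped like the curl output above (values masked):
sample = """{
  "Code" : "Success",
  "LastUpdated" : "2015-04-20T20:01:32Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASI***IWQ",
  "SecretAccessKey" : "z7d***69",
  "Token" : "AQoDY***QU=",
  "Expiration" : "2015-04-21T02:36:39Z"
}"""
print(parse_temporary_credentials(sample)["access_key"])
```

Note that these credentials are short-lived (see the Expiration field), which is precisely why a leaked copy is far less dangerous than a hard-coded long-term key.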