Implement multi-dimensional security in AWS
In any application or system, whether it runs on premises or in the cloud, security is of utmost importance. By securing your applications and systems against attacks, accidental deletion, data leakage, and theft, you build confidence with your customers, users, and stakeholders, and you can focus on innovation and business growth. There is no single way to implement security, and no single layer or level to protect; security should be multi-dimensional, which means every layer, every entry and exit point, and the smallest bit of your data should be protected.
This post covers what needs to be protected, how to protect it, and which AWS services can help you achieve multi-dimensional security for the system as a whole.
Security Design Principles
In the cloud, there are a number of principles that can help you strengthen your system security:
Understand Shared Security Responsibility Model
AWS provides a secure global infrastructure. However, AWS follows the principle of the Shared Responsibility Model, in which AWS and customers work together to secure data, assets, and infrastructure on the AWS cloud.
AWS says "security of the cloud" is their responsibility and "security in the cloud" is the responsibility of the customer. Security of the cloud means AWS is responsible for protecting the physical locations where its data centers reside and the underlying foundational services such as compute, network, database, and storage. Security in the cloud means the customer is responsible for securing everything running or provisioned above the AWS-managed layer: the OS, network and firewall configuration, platform and application management, customer data, client-side and server-side encryption, network traffic protection, and so on. Take EC2 as an example: AWS protects the physical infrastructure and the related services from which EC2 instances are provisioned, but once you provision an instance, it is your responsibility to secure everything you run on it, including OS patching. If your EC2 instance is compromised or an attack leads to data theft, AWS is not responsible for it.
For some services, such as RDS and EMR, AWS manages the underlying infrastructure and foundation services, the operating system, and the application platform. The customer is only responsible for protecting their data, along with proper security measures, backup and recovery, high availability, disaster management, business continuity, and so on, using the tools AWS provides for that service.
For services like S3 and DynamoDB, AWS operates the infrastructure layer, the operating system, and the platform, and you access the endpoints to store and retrieve data. You need to make sure only authorized users access your data; you do not need to worry about high availability, security, or patching of the service itself, as AWS manages all of that for you.
Enable Security | Identity and Access Management
Granting the right level of access is critical: it determines which users or services are allowed to access which AWS services. AWS provides the IAM service to manage AWS accounts, groups, roles, and IAM users. IAM works on policies, which describe the permissions you want to give to a user, service, or group of users. IAM policies are written in JSON format.
When you create an AWS account, a root user is created with unrestricted access to all AWS services; consider it a "God Mode" user who can do anything. You can create as many IAM users as you want under the account, but they will only have the limited or admin-level access you grant them.
AWS services can be accessed via the AWS Management Console, the AWS CLI, and the AWS SDKs.
AWS recommends the following actions as soon as you create an AWS account:
- Delete root access keys
- Create individual IAM users
- Enable MFA (multi factor authentication) for root user
- Use groups to assign permissions
- Apply an IAM password policy that requires your users to create strong passwords and rotate them regularly
There should be an IAM user with admin-level permissions for day-to-day management and administrative activities. AWS provides the 'AdministratorAccess' managed policy, which gives full access to all AWS services and resources (access to billing information is controlled separately by an account-level setting). That said, the admin user should also safeguard their credentials and should grant access to other users on the principle of least privilege. As a rule of thumb, each person in an organization who needs access to AWS services should have their own corresponding IAM user.
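For reference, the 'AdministratorAccess' managed policy is essentially a blanket allow on every action and resource; its policy document amounts to the following:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
```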
If you have a medium or large enterprise, consider using identity federation. Identity federation is a system of trust between two parties for the purpose of authenticating users and conveying the information needed to authorize their access to resources. In this system, an identity provider (IdP) is responsible for user authentication, and a service provider (SP), such as a service or an application, controls access to resources.
Federation commonly uses open identity standards, including Security Assertion Markup Language 2.0 (SAML 2.0), OpenID Connect (OIDC), and OAuth 2.0.
Users can be grouped together, based on their job roles or access levels, using IAM groups. An admin user can place users in an IAM group and assign the required policies to that group. For example, an IT company has departments such as HR, Finance, Legal, and IT admins; all the users in the HR department can be placed in an IAM group called 'HR', and the admin can give them limited access by assigning the required permissions to the HR group. An IAM group gives you central control over the users in that group and makes managing their permissions easier; it is also straightforward to add or remove users from a group.
Many AWS services need access to other AWS services on your behalf, whether read-only, limited, or full access. AWS provides IAM roles for this type of request: create an IAM role with the required permissions and assign it to the service. For example, an application running on EC2 might need to read and put objects in S3 buckets; an IAM role for EC2 with the 'AmazonS3FullAccess' policy can be attached to the instance for this purpose.
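A minimal sketch of such a role's trust policy, which allows the EC2 service to assume the role (the S3 permissions themselves come from the attached 'AmazonS3FullAccess' policy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```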
There are scenarios where applications or users that do not normally have access to your AWS services need it on a temporary basis. An IAM role combined with temporary security credentials solves this problem. In these scenarios, the role is not assigned permanently to any user or service; instead, it is 'assumed' by an IAM user, an AWS service, or an application running on EC2. When a role is assumed, AWS returns temporary security credentials which the application or user then uses to access AWS services programmatically. Every set of temporary credentials has an expiration, which can be configured as needed.
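For illustration, the temporary credentials returned by an sts:AssumeRole call have roughly the following shape (values are placeholders):

```json
{
  "Credentials": {
    "AccessKeyId": "ASIA...",
    "SecretAccessKey": "...",
    "SessionToken": "...",
    "Expiration": "2020-01-01T12:00:00Z"
  },
  "AssumedRoleUser": {
    "AssumedRoleId": "AROA...:session-name",
    "Arn": "arn:aws:sts::123456789012:assumed-role/role-name/session-name"
  }
}
```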
Any user, role, or group of users needs permissions to access AWS services. An IAM policy provides a set of permissions which you can use to give IAM users or roles the required level of access. An IAM policy has the following elements:
- Version: Specify the version of the policy language that you want to use. As a best practice, use the latest version, 2012-10-17.
- Statement: Use this main policy element as a container for the following elements. You can include more than one statement in a policy.
- Sid (optional): Include an optional statement ID to differentiate between your statements.
- Effect: Use Allow or Deny to indicate whether the policy allows or denies access.
- Principal (optional): Indicate the account, user, role, or federated user to which you would like to allow or deny access. If you are creating a policy to attach to a user or role, you cannot include this element; the principal is implied as that user or role.
- Action: Include a list of actions that the policy allows or denies.
- Resource: Specify a list of resources to which the actions apply.
- Condition (optional): Specify the circumstances under which the policy grants permission.
A simple IAM policy that allows the implied principal to list a single Amazon S3 bucket named example_bucket:
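A minimal sketch of that policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example_bucket"
    }
  ]
}
```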
The next policy allows members of a specific AWS account to perform any Amazon S3 actions on the bucket named 'mybucket' and the objects within it:
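A sketch of such a resource-based policy (the account ID 123456789012 is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:root" },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::mybucket",
        "arn:aws:s3:::mybucket/*"
      ]
    }
  ]
}
```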
Enable Security | Detective Controls
While IAM enables you to manage users and their permissions and to restrict access to AWS services and applications, that alone is not enough to consider your application secure. Anyone who gains unauthorized access to your root access keys can log in to your AWS environment programmatically and perform malicious activities.
Placing detective controls in your application, or establishing them as a process, helps you detect potential threats and security incidents. In AWS, the following will help you set up detective controls:
- Capture and analyze logs
- Integrate auditing controls with notification and workflow
Capture and analyze logs
There are many approaches to capturing and analyzing logs: agent-based tools and API services can capture logs and send them to a central location on a server or to object storage, or direct events to a real-time log processing service.
In AWS, CloudTrail records every API call made in your AWS account. CloudTrail is enabled by default when you create your account: the event history lets you view the API activity of the past 90 days, and you can create a trail to deliver logs to an S3 bucket for long-term storage. You can enable a trail for all regions and for all accounts under an AWS Organization, and you can configure the trail to send logs to CloudWatch Logs for monitoring and analysis. You can also use Amazon Macie to analyze CloudTrail logs, identify suspicious API activity, and protect sensitive data such as API keys, secret keys, and PII in your AWS account or organization.
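For illustration, each record in a CloudTrail log is a JSON document along these lines (fields trimmed, values are placeholders):

```json
{
  "eventVersion": "1.05",
  "userIdentity": {
    "type": "IAMUser",
    "arn": "arn:aws:iam::123456789012:user/alice",
    "userName": "alice"
  },
  "eventTime": "2020-01-01T12:00:00Z",
  "eventSource": "s3.amazonaws.com",
  "eventName": "PutObject",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "203.0.113.10",
  "requestParameters": { "bucketName": "mybucket", "key": "report.csv" }
}
```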
VPC Flow Logs is a feature that captures information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to CloudWatch Logs or to Amazon S3. Flow logs can be created at three levels: VPC, subnet, and network interface. However, the following traffic is NOT captured by flow logs:
- Traffic generated by instances when they contact the Amazon DNS server
- Traffic to the reserved IP addresses for the default VPC router
- Traffic generated by Windows instances for Amazon Windows license activation
- Traffic to and from 169.254.169.254 for instance metadata
- DHCP traffic
Another AWS service, Athena, can be used to analyze logs with simple interactive queries using standard SQL. Athena is serverless, so there is no infrastructure to manage; you simply point Athena at your S3 bucket and start querying. Athena can process unstructured, semi-structured, and structured data sets such as CSV, JSON, and Avro, as well as columnar formats such as Apache Parquet and Apache ORC. Athena integrates with Amazon QuickSight to give you interactive dashboards.
Integrate auditing controls with notification and workflow
Capturing and analyzing logs and event history is important, but it is not enough on its own. It is a best practice to integrate the flow of security events and findings with a notification and workflow system, such as a ticketing or event management system.
CloudWatch Events is well suited for this. You can configure CloudWatch Events by creating rules based on an event pattern or a schedule, and invoke targets such as Lambda or SNS (Simple Notification Service) to receive alerts by email or text, or invoke another service to mitigate the security event.
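For example (a minimal sketch, assuming GuardDuty is enabled, as discussed later), a CloudWatch Events rule with the following event pattern matches every GuardDuty finding, so the rule's target, such as an SNS topic or a Lambda function, can alert you or remediate automatically:

```json
{
  "source": ["aws.guardduty"],
  "detail-type": ["GuardDuty Finding"]
}
```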
AWS Config is another service, which continuously monitors and records your AWS resource configurations and lets you automate the evaluation of recorded configurations against desired configurations. AWS Config helps with compliance as well; for example, the 'ec2-managedinstance-patch-compliance-status-check' Config rule checks whether the AWS Systems Manager patch compliance status is COMPLIANT or NON_COMPLIANT after patch installation on the instance. The rule is compliant if the status field is COMPLIANT. AWS Config is integrated with SNS to send you alerts for non-compliant events.
Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity. These findings can be reviewed directly or as part of detailed assessment reports which are available via the Amazon Inspector console or API.
Amazon Inspector runs assessments for an assessment target against one or more of the following rules packages:
- Network Reachability-1.1
- Security Best Practices-1.0
- Common Vulnerabilities and Exposures-1.1
- CIS Operating System Security Configuration Benchmarks-1.0
- Runtime Behavior Analysis-1.0
During an assessment run, some rules packages, such as Security Best Practices, generate findings only for EC2 instances that are running Linux-based operating systems; they do not generate findings for instances running Windows-based operating systems.
The Amazon Inspector agent can be installed on production instances or in a CI/CD build pipeline, so that developers and engineers are notified when security findings are present.
Amazon GuardDuty is a managed threat detection service that uses machine learning and anomaly detection to continuously monitor for malicious or unauthorized behavior and help you protect your AWS accounts and workloads. GuardDuty analyzes billions of events from DNS logs, VPC Flow Logs, and CloudTrail logs to detect threats. You can enable automated responses by using CloudWatch Events and Lambda with GuardDuty.
Some examples of what GuardDuty detects:
- Unusual API calls, calls from known malicious IPs
- Attempts to disable CloudTrail logging
- Unauthorized deployments
- Compromised Instances
- Port scanning, failed logins
- UDP flood attacks and other DDoS attacks
Enable Security | Infrastructure Protection
Protecting your infrastructure is the first layer of defense. AWS is responsible for protecting the global infrastructure: Regions, Availability Zones, and edge locations. AWS handles the maintenance and security of the host operating systems, the virtualization layer, and the physical security of the facilities where the services operate.
The figure below shows the shared responsibility model:
The next question is: how does AWS secure its global infrastructure and services?
AWS has many years of experience in designing, constructing, operating, and maintaining large-scale data centers. This experience has been applied to the AWS platform and infrastructure. AWS data centers operate in highly secured facilities. Physical access is strictly controlled at the perimeter and at building entry and exit points by trained security guards, with the help of video surveillance cameras, biometric security, intrusion detection systems, and other means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff. AWS only provides data center access and information to employees and contractors who have a legitimate business need for such privileges. When an employee no longer has a business need for these privileges, his or her access is immediately revoked, even if they continue to be an employee of Amazon or Amazon Web Services. All physical access to data centers by AWS employees is logged and audited routinely.
These facilities have automatic fire detection and suppression equipment, electrical power systems, Uninterruptible Power Supply (UPS) units to provide back-up power, climate control, a storage device decommissioning process, and life support systems and equipment.
All AWS data centers are online, running, and serving customers; no data center is "cold". The data centers behind one Availability Zone name for a customer might sit behind a different AZ name for another customer: AWS deliberately shuffles the mapping between AZ names and physical locations across accounts so that all facilities are utilized. For example, the physical zone use1-az3 might appear as us-east-1e in one AWS account and as us-east-1b in another.
AWS has world-class network infrastructure to enable you to secure your workloads. Network devices, including firewalls, are in place at the network boundary to implement access control lists and traffic flow policies.
Multiple layers of defense are advisable in any type of environment. Within your AWS account, you can protect your infrastructure by enforcing boundary protection, allowing and monitoring ingress and egress traffic, and tailoring or hardening the configuration of infrastructure services, using VPCs (Virtual Private Cloud), security groups, NACLs (network access control lists), NAT gateways or instances, bastion hosts, route tables, and public and private subnets.
Protection of the guest operating system (including patching and updates), application software, client data, and access control is the responsibility of the customer; AWS has no liability for it.
AWS Marketplace offers many security products from third-party providers and vendors which you can purchase and use, saving you the time of configuring or hardening raw resources yourself. These products include firewalls, hardened operating systems, security monitoring tools, and more.
You can also use AWS services that provide protection to your application.
AWS WAF is a layer 7 Web Application Firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF is deployed on either Amazon CloudFront or Application Load Balancer.
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS against layer 3 and layer 4 attacks.
There are two flavors of AWS Shield: Standard and Advanced. AWS Shield Standard is enabled by default when you sign up and open an AWS account; all AWS customers benefit from its automatic protections at no additional charge.
AWS Shield Advanced, by contrast, comes at a cost of $3,000 per month and provides protection against much more sophisticated layer 3 and 4 attacks.
The table below compares AWS Shield Standard and Advanced features:
Amazon CloudFront is a fast, highly secure, and programmable content delivery network (CDN) that delivers applications, videos, data, and APIs to global customers with high transfer speed and low latency. CloudFront uses edge locations to deliver content by caching it. A CloudFront origin can be an S3 bucket, an EC2 instance, an Elastic Load Balancer, or any other custom HTTP origin.
CloudFront uses TTL (Time to live) to determine how long the cached content will stay in CloudFront caches before CloudFront makes another request to your origin to determine whether the object has been updated. TTL is always in seconds.
If you use an S3 bucket as the origin and want your users to access the S3 content only through CloudFront URLs instead of S3 URLs, you can use the 'Restrict Bucket Access' feature, which creates a special CloudFront user, an origin access identity (OAI), for the origin.
You would also need to update the bucket policy to allow this special CloudFront user, the origin access identity, to access the S3 bucket objects; you can let CloudFront update the bucket policy for you or you can update it manually. Either way, you should review the bucket permissions after the update.
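A minimal sketch of the bucket policy statement that grants the origin access identity read access to the objects (the OAI ID and bucket name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEOAIID"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}
```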
AWS WAF is integrated with CloudFront; if you are using WAF and want it to allow or block requests based on conditions you specify, you can do so by associating the web ACL with the CloudFront distribution.
Beyond host-level protection, consider how EC2 hosts (virtual servers) use key pairs to protect login information; EC2 uses public-key cryptography for this purpose. In public-key cryptography, the public key is used to encrypt data and the recipient uses the private key to decrypt it. Together, the public and private keys are called a "key pair".
There are two ways to create an EC2 key pair:
- In the AWS Management Console, go to the EC2 dashboard; in the left-hand menu you will find the 'Key Pairs' option, from where a new key pair can be generated.
- You can import a key pair generated by third-party software.
Amazon EC2 stores only the public key, and you store the private key. If you lose the private key, there is no way to recover it, and the instance the key pair was attached to becomes inaccessible. In that case, you generally have to create a new key pair, terminate the affected EC2 instance, and launch a new instance with the new key pair.
Enable Security | Data Protection
Let's focus on protecting the data. Businesses store key information in the form of employee records, customer details, credit history, and so on. This can include names, addresses, contact numbers, email addresses, health information, card details, etc. Important and sensitive data needs to be protected; if this information is stolen, it can be used for identity theft, phishing scams, or other types of fraudulent activity.
To ensure that important and sensitive data is secure, the first step is to classify the data into categories and then implement security accordingly. One way to classify data is by its level of sensitivity, such as critical, medium, and low. Based on the classification, appropriate access should be granted; for example, data classified as low may be public facing and accessible to anyone, while data in the critical category should be accessible only to those authorized to access it.
Applications also need access to data, whether at rest or in transit, and encryption is the best way to secure it in both cases. In AWS, KMS (Key Management Service) provides encryption keys which you can use to encrypt and decrypt your data at rest. AWS KMS is integrated with AWS services to simplify using your keys to encrypt data across your AWS workloads, and with the AWS SDKs to enable you to provide data protection within your applications. AWS KMS uses FIPS 140-2 validated hardware security modules (HSMs) to generate and protect keys.
There are two types of KMS keys:
- AWS managed keys
- Customer managed keys
AWS managed keys, as the name suggests, are fully managed by AWS, which generates, manages, and rotates the keys. Customer managed keys, however, provide flexibility in how you generate, manage, and rotate encryption keys. Customer managed keys offer the following options for generating key material:
- KMS: Use this option if you want AWS to generate key material and you manage the key and the rotation.
- External: If you want to use the key material which is generated by you, then you can use this option to import the external key material into AWS.
- CloudHSM: Use this option if you have a corporate, contractual, or regulatory compliance requirement, such as licensing compliance, where the encryption keys must reside in a dedicated HSM cluster. AWS KMS keys are already generated in and protected by FIPS 140-2 validated hardware security modules (HSMs), but if you want more granular control over your HSMs, CloudHSM is the answer. When you generate encryption keys in CloudHSM, AWS creates 256-bit, persistent, AES symmetric keys in a CloudHSM cluster in your account, while you retain full control of the HSM cluster and the keys generated in it.
To reduce the risk of unauthorized access to your data wherever it persists (block storage, object storage, databases, or any other storage medium), it is a best practice to protect data at rest. Services such as S3, Amazon EBS (Elastic Block Store), Glacier, and RDS (Relational Database Service) are integrated with AWS KMS to protect data at rest.
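For example, a minimal sketch of an S3 bucket policy statement that denies uploads which do not specify SSE-KMS encryption (the bucket name is a placeholder, and note that requests relying on default bucket encryption without the header would also be denied):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    }
  ]
}
```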
Protecting data in transit is just as important as protecting data at rest. Data gets transferred from one system, service, or application to another, and protecting it in transit preserves its confidentiality and integrity. In AWS, you can protect data in transit in some of the following ways:
- AWS Certificate Manager is the certificate management service provided by AWS, which can be used to generate, manage, deploy and renew SSL/TLS certificates. Various AWS services such as Elastic Load balancer, Amazon CloudFront, Amazon API Gateway, AWS CloudFormation are integrated with AWS Certificate Manager service to serve secure content over SSL/TLS.
- VPN (Virtual Private network): Use IPsec VPN for securing connections between VPC and on premises data center/office.
- Configure Protocol Policy: Configure HTTPS protocol in Amazon CloudFront and HTTPS listeners in application load balancer.
- Enforce encryption in transit: You can enforce encryption in transit using resource policies or host-level security. For example, in an S3 bucket policy you can include the following condition with a Deny effect so that only SSL requests are allowed:
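A minimal sketch of such a bucket policy (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::mybucket",
        "arn:aws:s3:::mybucket/*"
      ],
      "Condition": {
        "Bool": { "aws:SecureTransport": "false" }
      }
    }
  ]
}
```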
Another example is to use security groups to accept only HTTPS traffic.
Enable Security | Incident Response
Even if you have followed all the best practices to secure your infrastructure, data, network, and so on, there is always a chance of unauthorized access to your resources, data leakage, or an attack on your systems. It is very important to act calmly and quickly and to take the correct steps on the first attempt when responding to a security incident. Having the tools and access in place before an incident is not enough unless you also practice your incident response periodically.
Consider a scenario where your EC2 instance has been compromised: what would you do to minimize the damage? You might know the actions you need to perform, but how quickly you perform them is what matters in a situation like this. This is where automation helps: AWS CloudFormation lets you quickly launch a trusted, isolated environment where you can investigate the compromised instance(s).
Your organization might have an incident response team in place, but during an incident it is equally important that the team has appropriate access to the necessary tools and services. Your response team may need to work with other teams during the incident, so it is important to know how to grant access to other people or teams; otherwise there could be a delay in responding. Services such as AWS CloudTrail, Amazon CloudWatch Events, and AWS Step Functions can assist you in the investigation.
Use AWS Security Hub to aggregate, organize, and prioritize your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Identity and Access Management (IAM) Access Analyzer, AWS Systems Manager, and AWS Firewall Manager, as well as from AWS Partner Network (APN) solutions.
Security Best Practices
Although there are many security best practices you can follow to protect your organization's data and assets, here are a few you can start with as you define your Information Security Management System:
- Enforce least privilege access
- Logging and monitoring tools
- Apply Multi Factor Authentication wherever possible
- Use encryption
- Apply automation
- Get notified
- Backup and recovery plan and tools
- Updates and security patching
- Restrict traffic
- Test your security periodically
Conclusion
Security is an important part of your system design, especially when you are running workloads in the cloud. Even the smallest security error may lead to huge data or financial loss. You need to protect your infrastructure, data, network, and applications, and have controls in place that keep you updated about what is happening in and around your system and notify you when a security incident happens, so you can act on it quickly and efficiently to minimize its impact.
AWS services help you secure your assets and data in every possible way.
Let me know your thoughts and any security service you would like me to write about in detail. I tried to keep this blog short while covering all aspects of security in AWS.
Happy learning!
References and further reading
Amazon Web Services Security Pillar whitepaper https://d1.awsstatic.com/whitepapers/architecture/AWS-Security-Pillar.pdf
AWS Security Best Practices whitepaper https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf
AWS documentation https://docs.aws.amazon.com/
AWS re:Invent 2017: Best Practices for Implementing AWS Key Management Service (SID330) https://www.youtube.com/watch?v=X1eZjXQ55ec
AWS Cloud Best Practices https://media.amazonwebservices.com/AWS_Cloud_Best_Practices.pdf