account. When you create or edit identity-based policies, follow these guidelines and recommendations:

• Get started with AWS managed policies and move toward least-privilege permissions – To get started granting permissions to your users and workloads, use the AWS managed policies that grant permissions for many common use cases. They are available in your AWS account. We recommend that you reduce permissions further by defining AWS customer managed policies that are specific to your use cases. For more information, see AWS managed policies or AWS managed policies for job functions in the IAM User Guide.

• Apply least-privilege permissions – When you set permissions with IAM policies, grant only the permissions required to perform a task. You do this by defining the actions that can be taken on specific resources under specific conditions, also known as least-privilege permissions. For more information about using IAM to apply permissions, see Policies and permissions in IAM in the IAM User Guide.

• Use conditions in IAM policies to further restrict access – You can add a condition to your policies to limit access to actions and resources. For example, you can write a policy condition to specify that all requests must be sent using SSL (see the example policy after this list). You can also use conditions to grant access to service actions if they are used through a specific AWS service, such as AWS CloudFormation. For more information, see IAM JSON policy elements: Condition in the IAM User Guide.

• Use IAM Access Analyzer to validate your IAM policies to ensure secure and functional permissions – IAM Access Analyzer validates new and existing policies so that the policies adhere to the IAM policy language (JSON) and IAM best practices. IAM Access Analyzer provides more than 100 policy checks and actionable recommendations to help you author secure and functional policies. For more information, see Validate policies with IAM Access Analyzer in the IAM User Guide.

• Require multi-factor authentication (MFA) – If you have a scenario that requires IAM users or a root user in your AWS account, turn on MFA for additional security. To require MFA when API operations are called, add MFA conditions to your policies. For more information, see Secure API access with MFA in the IAM User Guide.

For more information about best practices in IAM, see Security best practices in IAM in the IAM User Guide.
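To illustrate the condition example above, the following is a minimal sketch of an identity-based policy statement that denies CloudFront actions when a request is not sent over SSL. The statement ID and the choice of cloudfront:* are illustrative assumptions, not required values:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyCloudFrontRequestsWithoutSSL",
            "Effect": "Deny",
            "Action": "cloudfront:*",
            "Resource": "*",
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            }
        }
    ]
}

Because aws:SecureTransport evaluates to false only for unencrypted requests, this Deny statement leaves requests sent over SSL to be evaluated by your Allow statements.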
Allow users to view their own permissions

This example shows how you might create a policy that allows IAM users to view the inline and managed policies that are attached to their user identity. This policy includes permissions to complete this action on the console or programmatically using the AWS CLI or AWS API.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ViewOwnUserInfo",
            "Effect": "Allow",
            "Action": [
                "iam:GetUserPolicy",
                "iam:ListGroupsForUser",
                "iam:ListAttachedUserPolicies",
                "iam:ListUserPolicies",
                "iam:GetUser"
            ],
            "Resource": ["arn:aws:iam::*:user/${aws:username}"]
        },
        {
            "Sid": "NavigateInConsole",
            "Effect": "Allow",
            "Action": [
                "iam:GetGroupPolicy",
                "iam:GetPolicyVersion",
                "iam:GetPolicy",
                "iam:ListAttachedGroupPolicies",
                "iam:ListGroupPolicies",
                "iam:ListPolicyVersions",
                "iam:ListPolicies",
                "iam:ListUsers"
            ],
            "Resource": "*"
        }
    ]
}

Permissions to access CloudFront programmatically

The following shows a permissions policy. The Sid, or statement ID, is optional.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAllCloudFrontPermissions",
            "Effect": "Allow",
            "Action": ["cloudfront:*"],
            "Resource": "*"
        }
    ]
}

The policy grants permissions to perform all CloudFront operations, which is sufficient to access CloudFront programmatically. If you're using the console to access CloudFront, see Permissions required to use the CloudFront console.

For a list of actions and the ARN that you specify to grant or deny permission to use each action, see Actions, resources, and condition keys for Amazon CloudFront in the Service Authorization Reference.
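With the policy above attached, a principal can call any CloudFront API operation. As a quick sketch of programmatic access, the following AWS CLI calls exercise two read-only operations; the distribution ID is a documentation-style placeholder:

# list every distribution in the account
aws cloudfront list-distributions
# fetch one distribution's full configuration
aws cloudfront get-distribution --id EDFDVBD6EXAMPLE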
Permissions required to use the CloudFront console

To grant full access to the CloudFront console, you grant the permissions in the following permissions policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "acm:ListCertificates",
                "cloudfront:*",
                "cloudwatch:DescribeAlarms",
                "cloudwatch:PutMetricAlarm",
                "cloudwatch:GetMetricStatistics",
                "elasticloadbalancing:DescribeLoadBalancers",
                "iam:ListServerCertificates",
                "sns:ListSubscriptionsByTopic",
                "sns:ListTopics",
                "waf:GetWebACL",
                "waf:ListWebACLs"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:PutBucketPolicy"
            ],
            "Resource": "arn:aws:s3:::*"
        }
    ]
}

Here's why the permissions are required:

acm:ListCertificates
When you're creating and updating distributions by using the CloudFront console and you want to configure CloudFront to require HTTPS between the viewer and CloudFront or between CloudFront and the origin, lets you view a list of ACM certificates. This permission isn't required if you aren't using the CloudFront console.

cloudfront:*
Lets you perform all CloudFront actions.

cloudwatch:DescribeAlarms and cloudwatch:PutMetricAlarm
Let you create and view CloudWatch alarms in the CloudFront console. See also sns:ListSubscriptionsByTopic and sns:ListTopics. These permissions aren't required if you aren't using the CloudFront console.

cloudwatch:GetMetricStatistics
Lets CloudFront render CloudWatch metrics in the CloudFront console. This permission isn't required if you aren't using the CloudFront console.

elasticloadbalancing:DescribeLoadBalancers
When creating and updating distributions, lets you view a list of Elastic Load Balancing load balancers in the list of available origins. This permission isn't required if you aren't using the CloudFront console.

iam:ListServerCertificates
When you're creating and updating distributions by using the CloudFront console and you want to configure CloudFront to require HTTPS between the viewer and CloudFront or between CloudFront and the origin, lets you view a list of certificates in the IAM certificate store. This permission isn't required if you aren't using the CloudFront console.

s3:ListAllMyBuckets
When you're creating and updating distributions, lets you perform the following operations:
• View a list of S3 buckets in the list of available origins
• View a list of S3 buckets that you can save access logs in
This permission isn't required if you aren't using the CloudFront console.

s3:PutBucketPolicy
When you're creating or updating distributions that restrict access to S3 buckets, lets a user update the bucket policy to grant access to the CloudFront origin access identity (see the example bucket policy after this list). For more information, see the section called "Use an origin access identity (legacy, not recommended)". This permission isn't required if you aren't using the CloudFront console.

sns:ListSubscriptionsByTopic and sns:ListTopics
When you create CloudWatch alarms in the CloudFront console, lets you choose an SNS topic for notifications. These permissions aren't required if you aren't using the CloudFront console.

waf:GetWebACL and waf:ListWebACLs
Lets you view a list of AWS WAF web ACLs in the CloudFront console. These permissions aren't required if you aren't using the CloudFront console.
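As a sketch of what the console writes with s3:PutBucketPolicy, the following bucket policy grants an origin access identity read access to objects. The bucket name and the OAI ID are hypothetical placeholders:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EH1HDMB1FH2TC"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        }
    ]
}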
Permission-only actions for the CloudFront console

You can perform the following CloudFront actions on the CloudFront Security Savings Bundle page. The following API actions are not intended to be called by your code, and are not included in the AWS CLI and AWS SDKs.

• CreateSavingsPlan – Grants permission to create a new savings plan.
• GetSavingsPlan – Grants permission to get a savings plan.
• ListRateCards – Grants permission to list CloudFront rate cards for the account.
• ListSavingsPlans – Grants permission to list savings plans in the account.
• ListUsages – Grants permission to list CloudFront usage.
• UpdateSavingsPlan – Grants permission to update a savings plan.

Notes
• For more information about CloudFront savings plans, see the CloudFront Security Savings Bundle section of the Amazon CloudFront FAQs.
• If you create a savings plan for CloudFront and then want to delete it later, contact AWS Support.

Customer managed policy examples

You can create your own custom IAM policies to allow permissions for CloudFront API actions. You can attach these custom policies to the IAM users or groups that require the specified permissions. These policies work when you are using the CloudFront API, the AWS SDKs, or the AWS CLI. The following examples show permissions for a few common use cases. For the policy that grants a user full access to CloudFront, see Permissions required to use the CloudFront console.

Examples
• Example 1: Allow read access to all distributions
• Example 2: Allow creating, updating, and deleting distributions
• Example 3: Allow creating and listing invalidations
• Example 4: Allow creating a distribution

Example 1: Allow read access to all distributions

The following permissions policy grants the user permissions to view all distributions in the CloudFront console:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "acm:ListCertificates",
                "cloudfront:GetDistribution",
                "cloudfront:GetDistributionConfig",
                "cloudfront:ListDistributions",
                "cloudfront:ListCloudFrontOriginAccessIdentities",
                "elasticloadbalancing:DescribeLoadBalancers",
                "iam:ListServerCertificates",
                "sns:ListSubscriptionsByTopic",
                "sns:ListTopics",
                "waf:GetWebACL",
                "waf:ListWebACLs"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets"
            ],
            "Resource": "arn:aws:s3:::*"
        }
    ]
}

Example 2: Allow creating, updating, and deleting distributions

The following permissions policy allows users to create, update, and delete distributions by using the CloudFront console:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "acm:ListCertificates",
                "cloudfront:CreateDistribution",
                "cloudfront:DeleteDistribution",
                "cloudfront:GetDistribution",
                "cloudfront:GetDistributionConfig",
                "cloudfront:ListDistributions",
                "cloudfront:UpdateDistribution",
                "cloudfront:ListCloudFrontOriginAccessIdentities",
                "elasticloadbalancing:DescribeLoadBalancers",
                "iam:ListServerCertificates",
                "sns:ListSubscriptionsByTopic",
                "sns:ListTopics",
                "waf:GetWebACL",
                "waf:ListWebACLs"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:PutBucketPolicy"
            ],
            "Resource": "arn:aws:s3:::*"
        }
    ]
}

The cloudfront:ListCloudFrontOriginAccessIdentities permission allows users to automatically grant to an existing origin access identity the permission to access objects in an Amazon S3 bucket. If you also want users to be able to create origin access identities, you also need to allow the cloudfront:CreateCloudFrontOriginAccessIdentity permission.
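If you grant cloudfront:CreateCloudFrontOriginAccessIdentity, a user could then create an identity programmatically. A minimal sketch with the AWS CLI; the caller reference and comment are arbitrary values you choose:

# create an origin access identity; CallerReference is any unique string
aws cloudfront create-cloud-front-origin-access-identity \
    --cloud-front-origin-access-identity-config \
        CallerReference=example-unique-ref,Comment="OAI for amzn-s3-demo-bucket"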
Example 3: Allow creating and listing invalidations

The following permissions policy allows users to create and list invalidations. It includes read access to CloudFront distributions because you create and view invalidations by first displaying settings for a distribution:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "acm:ListCertificates",
                "cloudfront:GetDistribution",
                "cloudfront:GetStreamingDistribution",
                "cloudfront:GetDistributionConfig",
                "cloudfront:ListDistributions",
                "cloudfront:ListCloudFrontOriginAccessIdentities",
                "cloudfront:CreateInvalidation",
                "cloudfront:GetInvalidation",
                "cloudfront:ListInvalidations",
                "elasticloadbalancing:DescribeLoadBalancers",
                "iam:ListServerCertificates",
                "sns:ListSubscriptionsByTopic",
                "sns:ListTopics",
                "waf:GetWebACL",
                "waf:ListWebACLs"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets"
            ],
            "Resource": "arn:aws:s3:::*"
        }
    ]
}
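With these permissions, a user can also run invalidation operations from the AWS CLI. A sketch; the distribution ID and paths are placeholders:

# invalidate two paths on a distribution
aws cloudfront create-invalidation \
    --distribution-id EDFDVBD6EXAMPLE \
    --paths "/index.html" "/images/*"
# list past and in-progress invalidations for the same distribution
aws cloudfront list-invalidations --distribution-id EDFDVBD6EXAMPLE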
Example 4: Allow creating a distribution

The following permission policy grants the user permission to create and list distributions in the CloudFront console. For the CreateDistribution action, specify the wildcard (*) character for the Resource instead of a wildcard for the distribution ARN (arn:aws:cloudfront::123456789012:distribution/*). For more information about the Resource element, see IAM JSON policy elements: Resource in the IAM User Guide.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "cloudfront:CreateDistribution",
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "cloudfront:ListDistributions",
            "Resource": "*"
        }
    ]
}

AWS managed policies for Amazon CloudFront

To add permissions to users, groups, and roles, it's easier to use AWS managed policies than to write policies yourself. It takes time and expertise to create IAM customer managed policies that provide your users with only the permissions they need. To get started quickly, you can use our AWS managed policies. These policies cover common use cases and are available in your AWS account. For more information about AWS managed policies, see AWS managed policies in the IAM User Guide.

AWS services maintain and update AWS managed policies. You can't change the permissions in AWS managed policies. Services occasionally add additional permissions to an AWS managed policy to support new features. This type of update affects all identities (users, groups, and roles) where the policy is attached. Services are most likely to update an AWS managed policy when a new feature is launched or when new permissions become available. Services do not remove permissions from an AWS managed policy, so policy updates won't break your existing permissions.

Additionally, AWS supports managed policies for job functions that span multiple services. For example, the ReadOnlyAccess AWS managed policy provides read-only access to all AWS services and resources. When a service launches a new feature, AWS adds read-only permissions for new operations and resources. For a list and descriptions of job function policies, see AWS managed policies for job functions in the IAM User Guide.

Topics
• AWS managed policy: CloudFrontReadOnlyAccess
• AWS managed policy: CloudFrontFullAccess
• AWS managed policy: AWSCloudFrontLogger
• AWS managed policy: AWSLambdaReplicator
• AWS managed policy: AWSCloudFrontVPCOriginServiceRolePolicy
• CloudFront updates to AWS managed policies

AWS managed policy: CloudFrontReadOnlyAccess

You can attach the CloudFrontReadOnlyAccess policy to your IAM identities. This policy allows read-only permissions to CloudFront resources. It also allows read-only permissions to other AWS service resources that are related to CloudFront and that are visible in the CloudFront console.

Permissions details

This policy includes the following permissions.

• cloudfront:Describe* – Allows principals to get metadata about CloudFront resources.
• cloudfront:Get* – Allows principals to get detailed information and configurations for CloudFront resources.
• cloudfront:List* – Allows principals to get lists of CloudFront resources.
• cloudfront-keyvaluestore:Describe* – Allows principals to get information about the key value store.
• cloudfront-keyvaluestore:Get* – Allows principals to get detailed information and configurations for the key value store.
• cloudfront-keyvaluestore:List* – Allows principals to get lists of the key value stores.
• acm:DescribeCertificate – Allows principals to get details about an ACM certificate.
• acm:ListCertificates – Allows principals to get a list of ACM certificates.
• iam:ListServerCertificates – Allows principals to get a list of server certificates stored in IAM.
• route53:List* – Allows principals to get lists of Route 53 resources.
• waf:ListWebACLs – Allows principals to get a list of web ACLs in AWS WAF.
• waf:GetWebACL – Allows principals to get detailed information about web ACLs in AWS WAF.
• wafv2:ListWebACLs – Allows principals to get a list of web ACLs in AWS WAF.
• wafv2:GetWebACL – Allows principals to get detailed information about web ACLs in AWS WAF.

To view the permissions for this policy, see CloudFrontReadOnlyAccess in the AWS Managed Policy Reference.
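Attaching a managed policy takes a single call. A sketch with the AWS CLI; the user name is a placeholder, and you would use attach-group-policy or attach-role-policy for groups and roles:

# attach the AWS managed read-only policy to an IAM user
aws iam attach-user-policy \
    --user-name marymajor \
    --policy-arn arn:aws:iam::aws:policy/CloudFrontReadOnlyAccess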
AWS managed policy: CloudFrontFullAccess

You can attach the CloudFrontFullAccess policy to your IAM identities. This policy allows administrative permissions to CloudFront resources. It also allows read-only permissions to other AWS service resources that are related to CloudFront and that are visible in the CloudFront console.

Permissions details

This policy includes the following permissions.

• s3:ListAllMyBuckets – Allows principals to get a list of all Amazon S3 buckets.
• acm:DescribeCertificate – Allows principals to get details about an ACM certificate.
• acm:ListCertificates – Allows principals to get a list of ACM certificates.
• acm:RequestCertificate – Allows principals to request managed certificates from ACM.
• cloudfront:* – Allows principals to perform all actions on all CloudFront resources.
• cloudfront-keyvaluestore:* – Allows principals to perform all actions on the key value store.
• iam:ListServerCertificates – Allows principals to get a list of server certificates stored in IAM.
• waf:ListWebACLs – Allows principals to get a list of web ACLs in AWS WAF.
• waf:GetWebACL – Allows principals to get detailed information about web ACLs in AWS WAF.
• wafv2:ListWebACLs – Allows principals to get a list of web ACLs in AWS WAF.
• wafv2:GetWebACL – Allows principals to get detailed information about web ACLs in AWS WAF.
• kinesis:ListStreams – Allows principals to get a list of Amazon Kinesis streams.
• ec2:DescribeInstances – Allows principals to get detailed information about instances in Amazon EC2.
• elasticloadbalancing:DescribeLoadBalancers – Allows principals to get detailed information about load balancers in Elastic Load Balancing.
• ec2:DescribeInternetGateways – Allows principals to get detailed information about internet gateways in Amazon EC2.
• kinesis:DescribeStream – Allows principals to get detailed information about a Kinesis stream.
• iam:ListRoles – Allows principals to get a list of roles in IAM.

To view the permissions for this policy, see CloudFrontFullAccess in the AWS Managed Policy Reference.

Important
If you want CloudFront to create and save access logs, you need to grant additional permissions. For more information, see Permissions.

AWS managed policy: AWSCloudFrontLogger

You can't attach the AWSCloudFrontLogger policy to your IAM identities. This policy is attached to a service-linked role that allows CloudFront to perform actions on your behalf. For more information, see the section called "Service-linked roles for Lambda@Edge". This policy allows CloudFront to push log files to Amazon CloudWatch. For details about the permissions included in this policy, see the section called "Service-linked role permissions for CloudFront logger".
To view the permissions for this policy, see AWSCloudFrontLogger in the AWS Managed Policy Reference.

AWS managed policy: AWSLambdaReplicator

You can't attach the AWSLambdaReplicator policy to your IAM identities. This policy is attached to a service-linked role that allows CloudFront to perform actions on your behalf. For more information, see the section called "Service-linked roles for Lambda@Edge". This policy allows CloudFront to create, delete, and disable functions in AWS Lambda to replicate Lambda@Edge functions to AWS Regions. For details about the permissions included in this policy, see the section called "Service-linked role permissions for Lambda replicator".

To view the permissions for this policy, see AWSLambdaReplicator in the AWS Managed Policy Reference.

AWS managed policy: AWSCloudFrontVPCOriginServiceRolePolicy

You can't attach the AWSCloudFrontVPCOriginServiceRolePolicy policy to your IAM entities. This policy is attached to a service-linked role that allows CloudFront to perform actions on your behalf. For more information, see Use service-linked roles for CloudFront. This policy allows CloudFront to manage EC2 elastic network interfaces and security groups on your behalf. For details about the permissions included in this policy, see the section called "Service-linked role permissions for CloudFront VPC Origins".

To view the permissions for this policy, see AWSCloudFrontVPCOriginServiceRolePolicy in the AWS Managed Policy Reference.

CloudFront updates to AWS managed policies

View details about updates to AWS managed policies for CloudFront since this service began tracking these changes. For automatic alerts about changes to this page, subscribe to the RSS feed on the CloudFront Document history page.
• CloudFrontReadOnlyAccess – Update to existing policy (April 28, 2025): CloudFront added a new permission for ACM. The new permission allows principals to get details about an ACM certificate.
• CloudFrontFullAccess – Update to existing policy (April 28, 2025): CloudFront added new permissions for ACM. The new permissions allow principals to get details about an ACM certificate and to request a managed certificate from ACM.
• CloudFrontFullAccess – Update to existing policy (November 20, 2024): CloudFront added new permissions for Amazon EC2 and Elastic Load Balancing. The new permissions allow CloudFront to get detailed information about load balancers in Elastic Load Balancing and about instances and internet gateways in Amazon EC2.
• AWSCloudFrontVPCOriginServiceRolePolicy – New policy (November 20, 2024): CloudFront added a new policy. This policy allows CloudFront to manage EC2 elastic network interfaces and security groups on your behalf.
• CloudFrontReadOnlyAccess and CloudFrontFullAccess – Update to two existing policies (December 19, 2023): CloudFront added new permissions for key value stores. The new permissions allow users to get information about, and take action on, key value stores.
• CloudFrontReadOnlyAccess – Update to an existing policy (September 8, 2021): CloudFront added a new permission to describe CloudFront Functions. This permission allows the user, group, or role to read information and metadata about a function, but not the function's code.
• CloudFront started tracking changes (September 8, 2021): CloudFront started tracking changes for its AWS managed policies.

Use service-linked roles for CloudFront

Amazon CloudFront uses AWS Identity and Access Management (IAM) service-linked roles. A service-linked role is a unique type of IAM role that is linked directly to CloudFront. Service-linked roles are predefined by CloudFront and include all the permissions that the service requires to call other AWS services on your behalf.

A service-linked role makes setting up CloudFront easier because you don't have to manually add the necessary permissions. CloudFront defines the permissions of its service-linked roles, and unless defined otherwise, only CloudFront can assume its roles. The defined permissions include the trust policy and the permissions policy, and that permissions policy cannot be attached to any other IAM entity.

You can delete a service-linked role only after first deleting its related resources. This protects your CloudFront resources because you can't inadvertently remove permission to access the resources.

For information about other services that support service-linked roles, see AWS services that work with IAM and look for the services that have Yes in the Service-linked roles column.
Choose a Yes with a link to view the service-linked role documentation for that service.

Service-linked role permissions for CloudFront VPC Origins

CloudFront VPC Origins uses the service-linked role named AWSServiceRoleForCloudFrontVPCOrigin, which allows CloudFront to manage EC2 elastic network interfaces and security groups on your behalf.

The AWSServiceRoleForCloudFrontVPCOrigin service-linked role trusts the following services to assume the role:

• vpcorigin.cloudfront.amazonaws.com

The role permissions policy named AWSCloudFrontVPCOriginServiceRolePolicy allows CloudFront VPC Origins to complete the following actions on the specified resources:

• Action: ec2:CreateNetworkInterface on arn:aws:ec2:*:*:network-interface/*
• Action: ec2:CreateNetworkInterface on arn:aws:ec2:*:*:subnet/* and arn:aws:ec2:*:*:security-group/*
• Action: ec2:CreateSecurityGroup on arn:aws:ec2:*:*:security-group/*
• Action: ec2:CreateSecurityGroup on arn:aws:ec2:*:*:vpc/*
• Action: ec2:ModifyNetworkInterfaceAttribute, ec2:DeleteNetworkInterface, ec2:DeleteSecurityGroup, ec2:AssignIpv6Addresses, and ec2:UnassignIpv6Addresses on all AWS resources that the actions support
• Action: ec2:DescribeNetworkInterfaces, ec2:DescribeSecurityGroups, ec2:DescribeInstances, ec2:DescribeInternetGateways, ec2:DescribeSubnets, ec2:DescribeRegions, and ec2:DescribeAddresses on all AWS resources that the actions support
• Action: ec2:CreateTags on arn:aws:ec2:*:*:security-group/* and arn:aws:ec2:*:*:network-interface/*
• Action: elasticloadbalancing:DescribeLoadBalancers, elasticloadbalancing:DescribeListeners, and elasticloadbalancing:DescribeTargetGroups on all AWS resources that the actions support

You must configure permissions to allow your users, groups, or roles to create, edit, or delete a service-linked role. For more information, see Service-linked role permissions in the IAM User Guide.

Create a service-linked role for CloudFront VPC Origins

You don't need to manually create a service-linked role. When you create a VPC origin in the AWS Management Console, the AWS CLI, or the AWS API, CloudFront VPC Origins creates the service-linked role for you. If you delete this service-linked role and then need to create it again, you can use the same process to recreate the role in your account. When you create a VPC origin, CloudFront VPC Origins creates the service-linked role for you again.

Edit a service-linked role for CloudFront VPC Origins

CloudFront VPC Origins does not allow you to edit the AWSServiceRoleForCloudFrontVPCOrigin service-linked role.
After you create a service-linked role, you cannot change the name of the role because various entities might reference the role. However, you can edit the description of the role using IAM. For more information, see Editing a service-linked role in the IAM User Guide.

Delete a service-linked role for CloudFront VPC Origins

If you no longer need to use a feature or service that requires a service-linked role, we recommend that you delete that role. That way you don't have an unused entity that is not actively monitored or maintained. However, you must clean up the resources for your service-linked role before you can manually delete it.

Note
If the CloudFront service is using the role when you try to delete the resources, then the deletion might fail. If that happens, wait for a few minutes and try the operation again.

To delete CloudFront VPC Origins resources used by the AWSServiceRoleForCloudFrontVPCOrigin
• Delete the VPC origin resources in your account.
• It might take some time for CloudFront to finish deleting the resources from your account. If you can't delete the service-linked role right away, wait and try again.

To manually delete the service-linked role using IAM
Use the IAM console, the AWS CLI, or the AWS API to delete the AWSServiceRoleForCloudFrontVPCOrigin service-linked role. For more information, see Deleting a service-linked role in the IAM User Guide.
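With the AWS CLI, service-linked role deletion is asynchronous: you submit a deletion task and then poll its status. A sketch under the assumption that all VPC origin resources have already been removed:

# submit the deletion task for the VPC Origins service-linked role
aws iam delete-service-linked-role \
    --role-name AWSServiceRoleForCloudFrontVPCOrigin
# the call returns a DeletionTaskId; poll until the status is SUCCEEDED
aws iam get-service-linked-role-deletion-status \
    --deletion-task-id <DeletionTaskId-from-previous-call>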
Supported Regions for CloudFront VPC Origins service-linked roles

CloudFront VPC Origins does not support using service-linked roles in every Region where the service is available. You can use the AWSServiceRoleForCloudFrontVPCOrigin role in the following Regions, all of which are supported in CloudFront:

• US East (N. Virginia) – us-east-1
• US East (Ohio) – us-east-2
• US West (N. California) – us-west-1 (except AZ usw1-az2)
• US West (Oregon) – us-west-2
• Africa (Cape Town) – af-south-1
• Asia Pacific (Hong Kong) – ap-east-1
• Asia Pacific (Jakarta) – ap-southeast-3
• Asia Pacific (Melbourne) – ap-southeast-4
• Asia Pacific (Mumbai) – ap-south-1
• Asia Pacific (Hyderabad) – ap-south-2
• Asia Pacific (Osaka) – ap-northeast-3
• Asia Pacific (Seoul) – ap-northeast-2
• Asia Pacific (Singapore) – ap-southeast-1
• Asia Pacific (Sydney) – ap-southeast-2
• Asia Pacific (Tokyo) – ap-northeast-1 (except AZ apne1-az3)
• Canada (Central) – ca-central-1 (except AZ cac1-az3)
• Canada West (Calgary) – ca-west-1
• Europe (Frankfurt) – eu-central-1
• Europe (Ireland) – eu-west-1
• Europe (London) – eu-west-2
• Europe (Milan) – eu-south-1
• Europe (Paris) – eu-west-3
• Europe (Spain) – eu-south-2
• Europe (Stockholm) – eu-north-1
• Europe (Zurich) – eu-central-2
• Israel (Tel Aviv) – il-central-1
• Middle East (Bahrain) – me-south-1
• Middle East (UAE) – me-central-1
• South America (São Paulo) – sa-east-1

Troubleshoot Amazon CloudFront identity and access

Use the following information to help you diagnose and fix common issues that you might encounter when working with CloudFront and IAM.

Topics
• I'm not authorized to perform an action in CloudFront
• I'm not authorized to perform iam:PassRole
• I want to allow people outside of my AWS account to access my CloudFront resources

I'm not authorized to perform an action in CloudFront

If you receive an error that you're not authorized to perform an action, your policies must be updated to allow you to perform the action. The following example error occurs when the mateojackson IAM user tries to use the console to view details about a fictional my-example-widget resource but doesn't have the fictional cloudfront:GetWidget permissions.

User: arn:aws:iam::123456789012:user/mateojackson is not authorized to perform: cloudfront:GetWidget on resource: my-example-widget

In this case, the policy for the mateojackson user must be updated to allow access to the my-example-widget resource by using the cloudfront:GetWidget action. If you need help, contact your AWS administrator. Your administrator is the person who provided you with your sign-in credentials.

I'm not authorized to perform iam:PassRole

If you receive an error that you're not authorized to perform the iam:PassRole action, your policies must be updated to allow you to pass a role to CloudFront.
Some AWS services allow you to pass an existing role to that service instead of creating a new service role or service-linked role. To do this, you must have permissions to pass the role to the service.

The following example error occurs when an IAM user named marymajor tries to use the console to perform an action in CloudFront. However, the action requires the service to have permissions that are granted by a service role. Mary does not have permissions to pass the role to the service.

User: arn:aws:iam::123456789012:user/marymajor is not authorized to perform: iam:PassRole

In this case, Mary's policies must be updated to allow her to perform the iam:PassRole action. If you need help, contact your AWS administrator. Your administrator is the person who provided you with your sign-in credentials.
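A minimal sketch of the statement an administrator might add to Mary's policy; the role name is hypothetical, and the iam:PassedToService condition scopes the grant so the role can be passed only to CloudFront:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPassRoleToCloudFront",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::123456789012:role/example-cloudfront-role",
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": "cloudfront.amazonaws.com"
                }
            }
        }
    ]
}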
I want to allow people outside of my AWS account to access my CloudFront resources

You can create a role that users in other accounts or people outside of your organization can use to access your resources. You can specify who is trusted to assume the role. For services that support resource-based policies or access control lists (ACLs), you can use those policies to grant people access to your resources. To learn more, consult the following:

• To learn whether CloudFront supports these features, see How Amazon CloudFront works with IAM.
• To learn how to provide access to your resources across AWS accounts that you own, see Providing access to an IAM user in another AWS account that you own in the IAM User Guide.
• To learn how to provide access to your resources to third-party AWS accounts, see Providing access to AWS accounts owned by third parties in the IAM User Guide.
• To learn how to provide access through identity federation, see Providing access to externally authenticated users (identity federation) in the IAM User Guide.
• To learn the difference between using roles and resource-based policies for cross-account access, see Cross account resource access in IAM in the IAM User Guide.

Logging and monitoring in Amazon CloudFront

Monitoring is an important part of maintaining the availability and performance of CloudFront and your AWS solutions. You should collect monitoring data from all of the parts of your AWS solution so that you can more easily debug a multi-point failure if one occurs. AWS provides several tools for monitoring your CloudFront resources and activity, and responding to potential incidents:

Amazon CloudWatch alarms

Using CloudWatch alarms, you watch a single metric over a time period that you specify. If the metric exceeds a given threshold, a notification is sent to an Amazon SNS topic or AWS Auto Scaling policy. CloudWatch alarms do not invoke actions when a metric is in a particular state. Rather, the state must have changed and been maintained for a specified number of periods. For more information, see Monitor CloudFront metrics with Amazon CloudWatch.
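As a sketch of how such an alarm could be created from the AWS CLI, the following watches the 5xxErrorRate metric of one distribution. The distribution ID, SNS topic, and threshold are assumptions for illustration; CloudFront metrics are published to the AWS/CloudFront namespace in the US East (N. Virginia) Region:

# alarm when the 5xx error rate averages above 5% for two 5-minute periods
aws cloudwatch put-metric-alarm \
    --region us-east-1 \
    --alarm-name cloudfront-high-5xx-error-rate \
    --namespace AWS/CloudFront \
    --metric-name 5xxErrorRate \
    --dimensions Name=DistributionId,Value=EDFDVBD6EXAMPLE Name=Region,Value=Global \
    --statistic Average \
    --period 300 \
    --evaluation-periods 2 \
    --threshold 5 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:example-topic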
AWS CloudTrail logs

CloudTrail provides a record of API actions taken by a user, role, or an AWS service in CloudFront. Using the information collected by CloudTrail, you can determine the API request that was made to CloudFront, the IP address from which the request was made, who made the request, when it was made, and additional details. For more information, see Logging Amazon CloudFront API calls using AWS CloudTrail.

CloudFront standard logs and real-time logs

CloudFront logs provide detailed records about requests that are made to a distribution. These logs are useful for many applications. For example, log information can be useful in security and access audits. For more information, see Standard logging (access logs) and Create and use real-time log configurations.

Edge function logs

Logs generated by edge functions, both CloudFront Functions and Lambda@Edge, are sent directly to Amazon CloudWatch Logs and are not stored anywhere by CloudFront. CloudFront Functions uses an AWS Identity and Access Management (IAM) service-linked role to send customer-generated logs directly to CloudWatch Logs in your account. For more information, see Edge function logs.

CloudFront console reports

The CloudFront console includes a variety of reports, including the cache statistics report, the popular objects report, and the top referrers report. Most CloudFront console reports are based on the data in CloudFront access logs, which contain detailed information about every user request that CloudFront receives. However, you don't need to enable access logs to view the reports. For more information, see View CloudFront reports in the console.

Compliance validation for Amazon CloudFront

Third-party auditors assess the security and compliance of Amazon CloudFront as part of multiple AWS compliance programs. These include SOC, PCI, HIPAA, and others. For a list of AWS services in scope of specific compliance programs, see AWS Services in Scope by Compliance Program. For general information, see AWS Compliance Programs. You can download third-party audit reports using AWS Artifact. For more information, see Downloading Reports in AWS Artifact.

Your compliance responsibility when using CloudFront is determined by the sensitivity of your data, your company's compliance objectives, and applicable laws and regulations. AWS provides the following resources to help with compliance:

• Security and Compliance Quick Start Guides – These deployment guides discuss architectural considerations and provide steps for deploying security- and compliance-focused baseline environments on AWS.
• Architecting for HIPAA Security and Compliance on AWS – This whitepaper describes how companies can use AWS to create HIPAA-compliant applications. The AWS HIPAA compliance program includes CloudFront (excluding content delivery through CloudFront Embedded POPs) as a HIPAA eligible service. If you have an executed Business Associate Addendum (BAA) with AWS, you can use CloudFront (excluding content delivery through CloudFront Embedded POPs) to deliver content that contains protected health information (PHI). For more information, see HIPAA Compliance.
• AWS Compliance Resources – This collection of workbooks and guides might apply to your industry and location.
• AWS Config – This AWS service assesses how well your resource configurations comply with internal practices, industry guidelines, and regulations.
• AWS Security Hub – This AWS service uses security controls to evaluate resource configurations and security standards to help you comply with various compliance frameworks. For more information about using Security Hub to evaluate CloudFront resources, see Amazon CloudFront controls in the AWS Security Hub User Guide.

CloudFront compliance best practices

This section provides best practices and recommendations for compliance when you use Amazon CloudFront to serve your content. If you run PCI-compliant or HIPAA-compliant workloads that are based on the AWS shared responsibility model, we recommend that you log your CloudFront usage data for the last 365 days for future auditing purposes.
To log usage data, you can do the following:

• Enable CloudFront access logs. For more information, see Standard logging (access logs).
• Capture requests that are sent to the CloudFront API. For more information, see Logging Amazon CloudFront API calls using AWS CloudTrail.

In addition, see the following for details about how CloudFront is compliant with the PCI DSS and SOC standards.

Payment Card Industry Data Security Standard (PCI DSS)

CloudFront (excluding content delivery through CloudFront Embedded POPs) supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1.

As a security best practice, we recommend that you don't cache credit card information in CloudFront edge caches. For example, you can configure your origin to include a Cache-Control: no-cache="field-name" header in responses that contain credit card information, such as the last four digits of a credit card number and the card owner's contact information.
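For instance, if such values were returned in custom response headers, the origin could send something like the following; the header names are hypothetical:

Cache-Control: no-cache="x-card-last4, x-card-owner-contact"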
System and Organization Controls (SOC)

CloudFront (excluding content delivery through CloudFront Embedded POPs) is compliant with System and Organization Controls (SOC) measures, including SOC 1, SOC 2, and SOC 3. SOC reports are independent, third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives. These audits ensure that the appropriate safeguards and procedures are in place to protect against risks that might affect the security, confidentiality, and availability of customer and company data. The results of these third-party audits are available on the AWS SOC Compliance website, where you can view the published reports to get more information about the controls that support AWS operations and compliance.

Resilience in Amazon CloudFront

The AWS global infrastructure is built around AWS Regions and Availability Zones. AWS Regions provide multiple physically separated and isolated Availability Zones, which are connected with low-latency, high-throughput, and highly redundant networking. With Availability Zones, you can design and operate applications and databases that automatically fail over between Availability Zones without interruption. Availability Zones are more highly available, fault tolerant, and scalable than traditional single or multiple data center infrastructures. For more information about AWS Regions and Availability Zones, see AWS Global Infrastructure.

CloudFront origin failover

In addition to the support of AWS global infrastructure, Amazon CloudFront offers an origin failover feature to help support your data resiliency needs. CloudFront is a global service that delivers your content through a worldwide network of data centers called edge locations or points of presence (POPs). If your content is not already cached in an edge location, CloudFront retrieves it from an origin that you've identified as the source for the definitive version of the content.

You can improve resiliency and increase availability for specific scenarios by setting up CloudFront with origin failover. To get started, you create an origin group in which you designate a primary origin for CloudFront plus a second origin. CloudFront automatically switches to the second origin when the primary origin returns specific HTTP status code failure responses. For more information, see Optimize high availability with CloudFront origin failover.
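As a sketch of what such an origin group looks like in a distribution configuration, the following fragment follows the structure of the CloudFront API's DistributionConfig; the group ID, origin IDs, and the set of status codes are illustrative assumptions:

"OriginGroups": {
    "Quantity": 1,
    "Items": [
        {
            "Id": "example-origin-group",
            "FailoverCriteria": {
                "StatusCodes": { "Quantity": 4, "Items": [500, 502, 503, 504] }
            },
            "Members": {
                "Quantity": 2,
                "Items": [
                    { "OriginId": "example-primary-origin" },
                    { "OriginId": "example-secondary-origin" }
                ]
            }
        }
    ]
}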
Infrastructure security in Amazon CloudFront

As a managed service, Amazon CloudFront is protected by AWS global network security. For information about AWS security services and how AWS protects infrastructure, see AWS Cloud Security. To design your AWS environment using the best practices for infrastructure security, see Infrastructure Protection in Security Pillar AWS Well-Architected Framework.

You use AWS published API calls to access CloudFront through the network. Clients must support the following:

• Transport Layer Security (TLS). We require TLS 1.2 and recommend TLS 1.3.
• Cipher suites with perfect forward secrecy (PFS) such as DHE (Ephemeral Diffie-Hellman) or ECDHE (Elliptic Curve Ephemeral Diffie-Hellman). Most modern systems such as Java 7 and later support these modes.

Additionally, requests must be signed by using an access key ID and a secret access key that is associated with an IAM principal. Or you can use the AWS Security Token Service (AWS STS) to generate temporary security credentials to sign requests.

CloudFront Functions uses a highly secure isolation barrier between AWS accounts, ensuring that customer environments are secure against side-channel attacks like Spectre and Meltdown. Functions cannot access or modify data belonging to other customers. Functions run in a dedicated single-threaded process on a dedicated CPU without hyperthreading. In any given CloudFront edge location point of presence (POP), CloudFront Functions only serves one customer at a time, and all customer-specific data is cleared between function executions.

Troubleshooting

Use this section to troubleshoot common problems you might encounter when you set up Amazon CloudFront to distribute your content. Each topic provides detailed guidance on identifying the root cause of common issues and step-by-step instructions to resolve them.

Topics
• Troubleshooting distribution issues
• Troubleshooting error response status codes in CloudFront
• Load testing CloudFront

Troubleshooting distribution issues

Use the information here to help you diagnose and fix certificate errors, access-denied issues, or other common issues that you might encounter when setting up your website or application with Amazon CloudFront distributions.

Topics
• CloudFront returns an Access Denied error
• CloudFront returns an InvalidViewerCertificate error when I try to add an alternate domain name
• CloudFront returns an incorrectly configured DNS record error when I try to add a new CNAME
• I can't view the files in my distribution
• Error message: Certificate: <certificate-id> is being used by CloudFront

CloudFront returns an Access Denied error

If you're using an Amazon S3 bucket as the origin for your CloudFront distribution, you might see an Access Denied (403) error message in the following examples.

Contents
• You specified a missing object from the Amazon S3 origin
• Your Amazon S3 origin is missing IAM permissions
• You're using invalid credentials or don't have sufficient permissions
You specified a missing object from the Amazon S3 origin

Verify that the requested object in your bucket exists. Object names are case sensitive. Entering an invalid object name can return an access denied error code. For example, if you follow the CloudFront tutorial to create a basic distribution, you create an Amazon S3 bucket as the origin and upload an example index.html file. In your web browser, if you enter https://d111111abcdef8.cloudfront.net/INDEX.HTML instead of https://d111111abcdef8.cloudfront.net/index.html, you might see a similar message because the index.html file in the URL path is case sensitive.

<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>22Q367AHT7Y1ABCD</RequestId>
  <HostId>ABCDE/Vg+7PSNa/d/IfFQ8Fb92TGQ0KH0ZwG5iEKbc6+e06DdMS1ZW+ryB9GFRIVtS66rSSy6So=</HostId>
</Error>

Your Amazon S3 origin is missing IAM permissions

Verify that you've selected the correct Amazon S3 bucket as the origin domain and name. The origin (Amazon S3) must have the correct permissions. If you don't specify the correct permissions, the following access denied message can appear for your viewers.

<Error>
  <Code>AccessDenied</Code>
  <Message>User: arn:aws:sts::856369053181:assumed-role/OriginAccessControlRole/EdgeCredentialsProxy+EdgeHostAuthenticationClient is not authorized to perform: kms:Decrypt on the resource associated with this ciphertext because the resource does not exist in this Region, no resource-based policies allow access, or a resource-based policy explicitly denies access</Message>
  <RequestId>22Q367AHT7Y1ABCD</RequestId>
  <HostId>ABCDE/Vg+7PSNa/d/IfFQ8Fb92TGQ0KH0ZwG5iEKbc6+e06DdMS1ZW+ryB9GFRIVtS66rSSy6So=</HostId>
</Error>

Note
In this error message, the account ID 856369053181 is an AWS managed account.

When you distribute content from Amazon S3 and you're also using AWS Key Management Service (AWS KMS) server-side encryption (SSE-KMS), there are additional IAM permissions that you need to specify for the KMS key and Amazon S3 bucket. Your CloudFront distribution needs these permissions to use the KMS key, which is used for encryption of the origin Amazon S3 bucket. The Amazon S3 bucket policy configuration allows the CloudFront distribution to retrieve the encrypted objects for content delivery.

To verify your Amazon S3 bucket and KMS key permissions

1. Verify that the KMS key that you're using is the same key that your Amazon S3 bucket uses for default encryption. For more information, see Specifying server-side encryption with AWS KMS (SSE-KMS) in the Amazon Simple Storage Service User Guide.
2. Verify that the objects in the bucket are encrypted with the same KMS key.
You can select any object from the Amazon S3 bucket and check the server-side encryption settings to verify the KMS key ARN.
3. Edit the Amazon S3 bucket policy to grant CloudFront permission to call the GetObject API operation from the Amazon S3 bucket. For an example Amazon S3 bucket policy that uses origin access control, see Grant CloudFront permission to access the S3 bucket.
4. Edit the KMS key policy to grant CloudFront permission to perform the Encrypt, Decrypt, and GenerateDataKey* actions. To align with least privilege permission, specify a Condition element so that only the specified CloudFront distribution can perform the listed actions. You can customize the policy for your existing AWS KMS policy. For an example KMS key policy, see the SSE-KMS example (a sketch follows this list).
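A minimal sketch of the KMS key policy statement described in step 4, following the documented service-principal pattern; the account ID and distribution ARN are placeholders:

{
    "Sid": "AllowCloudFrontServicePrincipalSSEKMS",
    "Effect": "Allow",
    "Principal": {
        "Service": "cloudfront.amazonaws.com"
    },
    "Action": [
        "kms:Decrypt",
        "kms:Encrypt",
        "kms:GenerateDataKey*"
    ],
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
        }
    }
}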
CloudFront returns an InvalidViewerCertificate error when I try to add an alternate domain name

If CloudFront returns an InvalidViewerCertificate error when you try to add an alternate domain name (CNAME) to your distribution, review the following information to help troubleshoot the problem. This error can indicate that one of the following issues must be resolved before you can successfully add the alternate domain name.

The following errors are listed in the order in which CloudFront checks for authorization to add an alternate domain name. This can help you troubleshoot issues because, based on the error that CloudFront returns, you can tell which verification checks have completed successfully.

There's no certificate attached to your distribution.
To add an alternate domain name (CNAME), you must attach a trusted, valid certificate to your distribution. Review the requirements, obtain a valid certificate that meets them, attach it to your distribution, and then try again. For more information, see Requirements for using alternate domain names.

There are too many certificates in the certificate chain for the certificate that you've attached.
You can have only up to five certificates in a certificate chain. Reduce the number of certificates in the chain, and then try again.

The certificate chain includes one or more certificates that aren't valid for the current date.
The certificate chain for a certificate that you have added has one or more certificates that aren't valid, either because a certificate isn't valid yet or because a certificate has expired. Check the Not Valid Before and Not Valid After fields in the certificates in your certificate chain to make sure that all of the certificates are valid for the dates listed.

The certificate that you've attached isn't signed by a trusted Certificate Authority (CA).
The certificate that you attach to CloudFront to verify an alternate domain name cannot be a self-signed certificate. It must be signed by a trusted CA. For more information, see Requirements for using alternate domain names.

The certificate that you've attached isn't formatted correctly.
The domain name and IP address format that are included in the certificate, and the format of the certificate itself, must follow the standard for certificates.

There was a CloudFront internal error.
CloudFront was blocked by an internal issue and couldn't make validation checks for certificates. In this scenario, CloudFront returns an HTTP 500 status code and indicates that there is an internal CloudFront problem with attaching the certificate. Wait a few minutes, and then try again to add the alternate domain name with the certificate.

The certificate that you've attached doesn't cover the alternate domain name that you're trying to add.
For each alternate domain name that you add, CloudFront requires that you attach a valid SSL/TLS certificate from a trusted Certificate Authority (CA) that covers the domain name, to validate your authorization to use it. Update your certificate to include a domain name that covers the CNAME that you're trying to add. For more information and examples of using domain names with wildcards, see Requirements for using alternate domain names.
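Several of the checks above (validity dates, trust chain, and domain coverage) can be inspected locally before you attach the certificate. The following is a minimal sketch using standard OpenSSL commands; certificate.pem is a placeholder for your certificate file:

# Print the validity window; compare notBefore/notAfter with the current date
openssl x509 -in certificate.pem -noout -dates

# Print the full certificate text; check the Subject CN and the
# Subject Alternative Name entries for the CNAME you want to add
openssl x509 -in certificate.pem -noout -text

Keep in mind that a wildcard entry such as *.example.com covers images.example.com, but it doesn't cover a deeper label such as dev.images.example.com.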
CloudFront returns an incorrectly configured DNS record error when I try to add a new CNAME

When you have an existing wildcard DNS entry pointing to a CloudFront distribution and you try to add a new CNAME with a more specific name, you might encounter the following error:

One or more aliases specified for the distribution includes an incorrectly configured DNS record that points to another CloudFront distribution. You must update the DNS record to correct the problem.

This error occurs because CloudFront queries DNS for the CNAME that you're adding, and the wildcard DNS entry resolves to another distribution. To resolve this, first create another distribution, then create a DNS entry pointing to the new distribution. Finally, add the more specific CNAME. For more information on how to add CNAMEs, see Add an alternate domain name.
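To see which distribution a specific name currently resolves to, and therefore whether a wildcard record is capturing it, you can run a quick DNS lookup. This is a hedged example; images.example.com is a placeholder:

# Show the CNAME target that public DNS returns for the specific name
dig +short images.example.com CNAME

If the output is a d111111abcdef8.cloudfront.net-style domain that belongs to a different distribution than the one you're editing, the wildcard record is the likely cause of the error.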
I can't view the files in my distribution

If you can't view the files in your CloudFront distribution, see the following topics for some common solutions.

Did you sign up for both CloudFront and Amazon S3?

To use Amazon CloudFront with an Amazon S3 origin, you must sign up for both CloudFront and Amazon S3, separately. For more information about signing up for CloudFront and Amazon S3, see Set up your AWS account.

Are your Amazon S3 bucket and object permissions set correctly?

If you're using CloudFront with an Amazon S3 origin, the original versions of your content are stored in an S3 bucket. To serve the content to your viewers, we recommend that you use CloudFront origin access control (OAC) to secure Amazon S3 bucket access. This means your S3 bucket is reachable only through CloudFront, which controls viewer access and secure delivery. For more information about OAC, see the section called "Restrict access to an Amazon S3 origin". For more information about managing your bucket access, see Blocking public access to your Amazon S3 storage in the Amazon S3 User Guide.

Object properties and bucket properties are independent. Objects don't inherit permissions from their bucket, so you must explicitly grant access to each object in Amazon S3.

Is your alternate domain name (CNAME) correctly configured?

If you already have an existing CNAME record for your domain name, update that record or replace it with a new one that points to your distribution's domain name. Also, make sure that your CNAME record points to your distribution's domain name, not your Amazon S3 bucket. You can confirm that the CNAME record in your DNS system points to your distribution's domain name by using a DNS tool like dig.

The following example shows a dig request for a domain name called images.example.com and the relevant part of the response. Under ANSWER SECTION, see the line that contains CNAME. The CNAME record for your domain name is set up correctly if the value on the right side of CNAME is your CloudFront distribution's domain name. If it's your Amazon S3 origin server bucket or some other domain name, then the CNAME record is set up incorrectly.

[prompt]> dig images.example.com

; <<>> DiG 9.3.3rc2 <<>> images.example.com
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15917
;; flags: qr rd ra; QUERY: 1, ANSWER: 9, AUTHORITY: 2, ADDITIONAL: 0
;; QUESTION SECTION:
;images.example.com. IN A
;; ANSWER SECTION:
images.example.com. 10800 IN CNAME d111111abcdef8.cloudfront.net.
...
...

For more information about CNAMEs, see Use custom URLs by adding alternate domain names (CNAMEs).

Are you referencing the correct URL for your CloudFront distribution?

Make sure that the URL that you're referencing uses the domain name (or CNAME) of your CloudFront distribution, not your Amazon S3 bucket or custom origin.

Do you need help troubleshooting a custom origin?

If you need AWS to help you troubleshoot a custom origin, we probably will need to inspect the X-Amz-Cf-Id header entries from your requests.
If you are not already logging these entries, you might want to consider it for the future. For more information, see the section called "Use Amazon EC2 (or another custom origin)". For further help, see the AWS Support Center.

Error message: Certificate: <certificate-id> is being used by CloudFront

Problem: You're trying to delete an SSL/TLS certificate from the IAM certificate store, and you're getting the message "Certificate: <certificate-id> is being used by CloudFront."

Solution: Every CloudFront distribution must be associated either with the default CloudFront certificate or with a custom SSL/TLS certificate. Before you can delete an SSL/TLS certificate, you must either rotate the certificate (replace the current custom SSL/TLS certificate with another custom SSL/TLS certificate) or revert from using a custom SSL/TLS certificate to using the default CloudFront certificate.
To fix that, complete the steps in one of the following procedures:
• Rotate SSL/TLS certificates
• Revert from a custom SSL/TLS certificate to the default CloudFront certificate

Troubleshooting error response status codes in CloudFront

If CloudFront requests an object from your origin and the origin returns an HTTP 4xx or 5xx status code, there's a problem with communication between CloudFront and your origin. This topic also includes troubleshooting steps for these status codes when using Lambda@Edge or CloudFront Functions. The following topics provide detailed explanations of the potential causes behind these error responses and offer step-by-step guidance on how to diagnose and resolve the underlying issues.

Topics
• HTTP 400 status code (Bad Request)
• HTTP 401 status code (Unauthorized)
• HTTP 403 status code (Permission Denied)
• HTTP 404 status code (Not Found)
• HTTP 405 status code (Method Not Allowed)
• HTTP 412 status code (Precondition Failed)
• HTTP 500 status code (Internal Server Error)
• HTTP 502 status code (Bad Gateway)
• HTTP 503 status code (Service Unavailable)
• HTTP 504 status code (Gateway Timeout)

HTTP 400 status code (Bad Request)

CloudFront returns a 400 Bad Request error when the client sends invalid data in the request, such as missing or incorrect content in the payload or parameters. This can also represent a generic client error.

Amazon S3 origin returns a 400 error

If you're using an Amazon S3 origin with your CloudFront distribution, your distribution might send error responses with HTTP status code 400 Bad Request, and a message similar to the following:

The authorization header is malformed; the region '<AWS Region>' is wrong; expecting '<AWS Region>'

For example:

The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-2'

This problem can occur in the following scenario:
1. Your CloudFront distribution's origin is an Amazon S3 bucket.
2. You moved the S3 bucket from one AWS Region to another. That is, you deleted the S3 bucket, then later you created a new bucket with the same bucket name, but in a different AWS Region than where the original S3 bucket was located.

To fix this error, update your CloudFront distribution so that it finds the S3 bucket in the bucket's current AWS Region.

To update your CloudFront distribution
1. Sign in to the AWS Management Console and open the CloudFront console at https://console.aws.amazon.com/cloudfront/v4/home.
2. Choose the distribution that produces this error.
3. Choose Origins and Origin Groups.
4. Find the origin for the S3 bucket that you moved. Select the check box next to this origin, then choose Edit.
5. Choose Yes, Edit. You do not need to change any settings before choosing Yes, Edit.

When you complete these steps, CloudFront redeploys your distribution. While the distribution is deploying, you see the Deploying status under the Last modified column. Some time after the deployment is complete, you should stop receiving the AuthorizationHeaderMalformed error responses.
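If you manage distributions from the command line, a hedged equivalent of the console procedure above is to fetch the distribution configuration and write it back unchanged, which triggers a redeploy. EDFDVBD6EXAMPLE is a placeholder distribution ID, and the jq step reflects an assumption about how you reshape the get-distribution-config output:

# Fetch the current configuration; the response contains a
# DistributionConfig object and an ETag
aws cloudfront get-distribution-config --id EDFDVBD6EXAMPLE > dist.json

# update-distribution expects only the DistributionConfig object
jq '.DistributionConfig' dist.json > config.json

# Write the unchanged configuration back; --if-match must be the ETag
# returned by the first call
aws cloudfront update-distribution --id EDFDVBD6EXAMPLE \
    --if-match "$(jq -r '.ETag' dist.json)" \
    --distribution-config file://config.json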
Application Load Balancer origin returns a 400 error If you're using an Application Load Balancer origin with your CloudFront distribution, possible causes of a 400 error include the following: • The client sent a malformed request that does not meet the HTTP specification. • The request header exceeds 16 KB per request line, 16 KB per single header, or 64 KB for the entire request header. • The client closed the connection before sending the full request body. HTTP 401 status code (Unauthorized) A 401 Unauthorized response status code indicates that the client request hasn't been completed because it lacks valid authentication credentials for the requested resource. This status code is sent with an HTTP WWW-Authenticate response header that contains information about how the client can request the resource again after prompting the user for authentication credentials. For more information, see 401 Unauthorized. In CloudFront, if your origin expects an Authorization header to authenticate the requests, CloudFront needs to forward the Authorization header to the origin to avoid a 401 Unauthorized error. When CloudFront forwards a viewer request to your origin, CloudFront removes some viewer headers by default, including the Authorization header. To make sure that your origin always receives the Authorization header in origin requests, you have the following options: • Add the Authorization header to the cache key by using a cache policy. All headers in the cache key are automatically included in origin requests. For more information, see Control the cache key with a policy. • Use an origin request policy that forwards all viewer headers to the origin. You can't forward the Authorization header individually in an origin request policy, but when you forward all viewer headers, CloudFront includes the Authorization header in |
origin requests. CloudFront provides the managed AllViewer origin request policy for this use case. For more information, see Use managed origin request policies.

For more information, see How can I configure CloudFront to forward the Authorization header to the origin?

HTTP 403 status code (Permission Denied)

An HTTP 403 error means the client isn't authorized to access the requested resource. The client understands the request, but the server can't authorize viewer access. The following are common causes when CloudFront returns this status code:

Topics
• Alternate CNAME is incorrectly configured
• AWS WAF is configured on CloudFront distribution or at the origin
• Custom origin returns a 403 error
• Amazon S3 origin returns a 403 error
• Geographic restrictions return a 403 error
• Signed URL or signed cookie configuration returns a 403 error
• Stacked distributions cause a 403 error

Alternate CNAME is incorrectly configured

Verify that you specified the correct CNAME for your distribution. To use an alternate CNAME instead of the default CloudFront URL:
1. Create a CNAME record in your DNS that points the CNAME to your CloudFront distribution URL.
2. Add the CNAME in your CloudFront distribution configuration.

If you create the DNS record but don't add the CNAME in your CloudFront distribution configuration, then the request returns a 403 error. For more information about configuring a custom CNAME, see Use custom URLs by adding alternate domain names (CNAMEs).

AWS WAF is configured on CloudFront distribution or at the origin

When AWS WAF sits between the client and CloudFront, CloudFront can't distinguish between a 403 error code that's returned by your origin and a 403 error code that's returned by AWS WAF when a request is blocked. To find the source of the 403 status code, check your AWS WAF web access control list (ACL) rule for a blocked request. For more information, see the following topics:
• AWS WAF web access control lists (web ACLs)
• Testing and tuning your AWS WAF protections

Custom origin returns a 403 error

If you're using a custom origin, you might see a 403 error if you have a custom firewall configuration at the origin. To troubleshoot, make the request directly to the origin, as shown in the sketch after this section. If you can replicate the error without CloudFront, then the origin is causing the 403 error. If the custom origin is causing the error, check the origin logs to identify the cause. For more information, see the following troubleshooting topics:
• How do I troubleshoot HTTP 403 errors from API Gateway?
• How do I troubleshoot Application Load Balancer HTTP 403 forbidden errors?
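One way to run the "make the request directly to the origin" test is with curl, comparing the origin's response with the response through CloudFront. This is a sketch with placeholder hostnames:

# Request the object directly from the origin, bypassing CloudFront
curl -sv -o /dev/null https://origin.example.com/path/to/object

# Request the same object through the distribution for comparison;
# the X-Cache response header shows whether CloudFront or the origin answered
curl -sv -o /dev/null https://d111111abcdef8.cloudfront.net/path/to/object

If both requests return 403, the origin (or a firewall in front of it) is the likely source. If only the CloudFront request fails, look at the distribution configuration, AWS WAF, or geographic restrictions.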
Amazon S3 origin returns a 403 error

You might see a 403 error for the following reasons:
• CloudFront doesn't have access to the Amazon S3 bucket. This can happen if origin access identity (OAI) or origin access control (OAC) isn't enabled for your distribution and the bucket is private.
• The specified path in the requested URL isn't correct.
• The requested object doesn't exist.
• The host header was forwarded with the REST API endpoint. For more information, see HTTP Host header bucket specification in the Amazon Simple Storage Service User Guide.
• You configured custom error pages. For more information, see How CloudFront processes errors when you have configured custom error pages.

Geographic restrictions return a 403 error

If you enabled geographic restrictions (also known as geoblocking) to prevent users in specific geographic locations from accessing content that you're distributing through a CloudFront distribution, blocked users receive a 403 error. For more information, see Restrict the geographic distribution of your content.

Signed URL or signed cookie configuration returns a 403 error

If you enabled Restrict viewer access for your distribution's behavior configuration, then requests that don't use signed cookies or signed URLs result in a 403 error. For more information, see the following topics:
• Serve private content with signed URLs and signed cookies
• How do I troubleshoot issues related to a signed URL or signed cookies in CloudFront?
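When you test a behavior that restricts viewer access, it can help to rule out a malformed signature by generating a known-good signed URL with the AWS CLI. This is a hedged sketch; the distribution domain, key pair ID, and private key file are placeholders:

# Generate a canned-policy signed URL that expires on the given date
aws cloudfront sign \
    --url https://d111111abcdef8.cloudfront.net/private/content.mp4 \
    --key-pair-id K2JCJMDEHXQW5F \
    --private-key file://private_key.pem \
    --date-less-than 2025-12-31

If a URL signed this way works but your application's URLs return 403, compare the query parameters your application produces (Expires, Signature, Key-Pair-Id) against the CLI output.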
Stacked distributions cause a 403 error

If you have two or more distributions within a chain of requests to the origin endpoint, CloudFront returns a 403 error. We don't recommend placing one distribution in front of another.

HTTP 404 status code (Not Found)

CloudFront returns a 404 (Not Found) error when the client attempts to access a resource that doesn't exist. If you receive this error with your CloudFront distribution, common causes include the following:
• The resource doesn't exist.
• The URL is incorrect.
• The custom origin returns a 404.
• Custom error pages return a 404. (Any error code might be translated to 404.) For more information, see How CloudFront processes errors when you have configured custom error pages.
• A custom error page was accidentally deleted, resulting in a 404 because the request looks for the deleted custom error page. For more information, see How CloudFront processes errors if you haven't configured custom error pages.
• The origin path is incorrect. If the origin path is populated, its value is appended to the path of each request from the browser before the request is forwarded to the origin. For more information, see Origin path.

HTTP 405 status code (Method Not Allowed)

CloudFront returns a 405 (Method Not Allowed) error if you're trying to use an HTTP method that you haven't specified in the CloudFront distribution. You can specify one of the following options for your distribution:
• CloudFront forwards only GET and HEAD requests.
• CloudFront forwards only GET, HEAD, and OPTIONS requests.
• CloudFront forwards GET, HEAD, OPTIONS, PUT, PATCH, POST, and DELETE requests. (If you select this option, you might need to restrict access to your Amazon S3 bucket or custom origin so that users can't perform operations that you don't want them to. For example, you might not want users to have permissions to delete objects from your origin.)

HTTP 412 status code (Precondition Failed)

CloudFront returns a 412 (Precondition Failed) error code when access to the target resource has been denied. In some cases, a server is configured to accept requests only after certain conditions are fulfilled. If any of the specified conditions are not met, then the server doesn't allow the client to access the given resource. Instead, the server responds with a 412 error code. Common causes of a 412 error in CloudFront include:
• Conditional requests on methods other than GET or HEAD when the condition defined by the If-Unmodified-Since or If-None-Match headers is not fulfilled. In that case, the request, usually an upload or a modification of a resource, can't be made.
• A condition in one or more of the request fields in the CloudFront UpdateDistribution API operation evaluates as false (see the sketch after this list).
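The second cause is easy to reproduce deliberately, which can help you confirm that a 412 comes from the CloudFront API rather than from your origin. In this sketch, EDFDVBD6EXAMPLE and the ETag value are placeholders:

# Every UpdateDistribution call must include the current ETag in --if-match.
# Supplying a stale or wrong value makes the precondition fail with an
# HTTP 412 (PreconditionFailed) instead of applying the update.
aws cloudfront update-distribution --id EDFDVBD6EXAMPLE \
    --if-match E2QWRUHAPOMQZL \
    --distribution-config file://config.json

Re-reading the configuration with get-distribution-config immediately before the update is the usual way to avoid this error.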
HTTP 500 status code (Internal Server Error)

An HTTP 500 status code (Internal Server Error) indicates that the server encountered an unexpected condition that prevented it from fulfilling the request. The following are some common causes of 500 errors in Amazon CloudFront.

Topics
• Origin server returns 500 error to CloudFront

Origin server returns 500 error to CloudFront

Your origin server might be returning a 500 error to CloudFront. Refer to the following troubleshooting topics for more information:
• If Amazon S3 returns a 500 error, see How do I troubleshoot a HTTP 500 or 503 error from Amazon S3?
• If API Gateway returns a 500 error, see How do I troubleshoot 5xx errors for API Gateway REST API?
• If Elastic Load Balancing returns a 500 error, see HTTP 500: Internal server error in the User Guide for Application Load Balancers.

If the preceding list doesn't resolve the 500 error, the issue might be with a CloudFront Point of Presence returning an internal server error. You can contact Support for assistance.

HTTP 502 status code (Bad Gateway)

CloudFront returns an HTTP 502 status code (Bad Gateway) when CloudFront wasn't able to serve the requested object because it couldn't connect to the origin server. If you're using Lambda@Edge, the issue might be a Lambda validation error. If you receive an HTTP 502 error with the NonS3OriginDnsError error code, there's likely a DNS configuration problem that's preventing CloudFront from connecting to the origin.

Topics
• SSL/TLS negotiation failure between CloudFront and a custom origin server
• Origin is not responding with supported ciphers/protocols
• SSL/TLS certificate on the origin is expired, invalid, self-signed, or the certificate chain is in the wrong order
• Origin is not responding on specified ports in origin settings
• Lambda validation error
• CloudFront function validation error
• DNS error (NonS3OriginDnsError)
• Application Load Balancer origin 502 error
• API Gateway origin 502 error

SSL/TLS negotiation failure between CloudFront and a custom origin server

If you use a custom origin that requires HTTPS between CloudFront and your origin, mismatched domain names might cause errors. The SSL/TLS certificate on your origin must include a domain name that matches either the Origin Domain that you specified for the CloudFront distribution or the Host header of the origin request. If the domain names don't match, the SSL/TLS handshake fails, and CloudFront returns an HTTP status code 502 (Bad Gateway) and sets the X-Cache header to Error from cloudfront.

To determine whether domain names in the certificate match the Origin Domain in the distribution or the Host header, you can use an online SSL checker or OpenSSL. If the domain names don't match, you have two options:
• Get a new SSL/TLS certificate that includes the applicable domain names. If you use AWS Certificate Manager (ACM), see Requesting a public certificate in the AWS Certificate Manager User Guide to request a new certificate.
• Change the distribution configuration so CloudFront no longer tries to use SSL to connect with your origin.

Online SSL checker

To find an SSL test tool, search the internet for "online ssl checker." Typically, you specify the name of your domain, and the tool returns a variety of information about your SSL/TLS certificate. Confirm that the certificate contains your domain name in the Common Name or Subject Alternative Names fields.

OpenSSL

To help troubleshoot HTTP 502 errors from CloudFront, you can use OpenSSL to try to make an SSL/TLS connection to your origin server. If OpenSSL is not able to make a connection, that can indicate a problem with your origin server's SSL/TLS configuration. If OpenSSL is able to make a connection, it returns information about the origin server's certificate, including the certificate's common name (Subject CN field) and subject alternative name (Subject Alternative Name field).
Use the following OpenSSL command to test the connection to your origin server (replace origin domain name with your origin server's domain name, such as example.com):

openssl s_client -connect origin domain name:443

If the following are true:
• Your origin server supports multiple domain names with multiple SSL/TLS certificates
• Your distribution is configured to forward the Host header to the origin

then add the -servername option to the OpenSSL command, as in the following example (replace CNAME with the CNAME that's configured in your distribution):

openssl s_client -connect origin domain name:443 -servername CNAME

Origin is not responding with supported ciphers/protocols

CloudFront connects to origin servers using ciphers and protocols. For a list of the ciphers and protocols that CloudFront supports, see the section called "Supported protocols and ciphers between CloudFront and the origin". If your origin does not respond with one of these ciphers or protocols in the SSL/TLS exchange, CloudFront fails to connect. You can validate that your origin supports the ciphers and protocols by using an online tool such as SSL Labs. Type the domain name of your origin in the Hostname field, and then choose Submit. Review the Common names and Alternative names fields from the test to see if they match your origin's domain name. After the test is finished, find the Protocols and Cipher Suites sections in the test results to see which ciphers or protocols are supported by your origin. Compare them with the list in the section called "Supported protocols and ciphers between CloudFront and the origin".
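You can also run a protocol-specific handshake test from the command line instead of (or in addition to) an online scanner. This is a sketch with a placeholder hostname; the flags are standard OpenSSL options:

# Attempt a handshake restricted to TLS 1.2; a failure here, when the same
# command without -tls1_2 succeeds, suggests the origin doesn't offer TLS 1.2
openssl s_client -connect origin.example.com:443 -tls1_2 < /dev/null

# The negotiated protocol and cipher appear in the output, for example:
# New, TLSv1.2, Cipher is ECDHE-RSA-AES128-GCM-SHA256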
SSL/TLS certificate on the origin is expired, invalid, self-signed, or the certificate chain is in the wrong order

If the origin server returns any of the following, CloudFront drops the TCP connection, returns HTTP status code 502 (Bad Gateway), and sets the X-Cache header to Error from cloudfront:
• An expired certificate
• An invalid certificate
• A self-signed certificate
• A certificate chain in the wrong order

Note
If the full chain of certificates, including the intermediate certificate, is not present, CloudFront drops the TCP connection.

For information about installing an SSL/TLS certificate on your custom origin server, see the section called "Require HTTPS to a custom origin".

Origin is not responding on specified ports in origin settings

When you create an origin on your CloudFront distribution, you can set the ports that CloudFront uses to connect to the origin for HTTP and HTTPS traffic. By default, these are TCP 80/443. You have the option to modify these ports. If your origin is rejecting traffic on these ports for any reason, or if your backend server isn't responding on the ports, CloudFront fails to connect. To troubleshoot these issues, check any firewalls running in your infrastructure and validate that they are not blocking the supported IP ranges. For more information, see AWS IP address ranges in the Amazon VPC User Guide. Additionally, verify whether your web server is running on the origin.

Lambda validation error

If you're using Lambda@Edge, an HTTP 502 status code can indicate that your Lambda function response was incorrectly formed or included invalid content. For more information about troubleshooting Lambda@Edge errors, see Test and debug Lambda@Edge functions.

CloudFront function validation error

If you're using CloudFront Functions, an HTTP 502 status code can indicate that the CloudFront function is trying to add, delete, or change a read-only header. This error does not show up during testing, but will show up after you deploy the function and run the request. To resolve this error, check and update your CloudFront function. For more information, see Update functions.

DNS error (NonS3OriginDnsError)

An HTTP 502 error with the NonS3OriginDnsError error code indicates that there's a DNS configuration problem that prevents CloudFront from connecting to the origin. If you get this error from CloudFront, make sure that the origin's DNS configuration is correct and working. When CloudFront receives a request for an object that's expired or is not in its cache, it makes a request to the origin to get the object. To make a successful request to the origin, CloudFront performs a DNS resolution on the origin domain. If the DNS service for your domain is experiencing issues, CloudFront can't resolve the domain name to get the IP address, which results in an HTTP 502 error (NonS3OriginDnsError).
To fix this problem, contact your DNS provider, or, if you are using Amazon Route 53, see Why can't I access my website that uses Route 53 DNS services?

To further troubleshoot this issue, ensure that the authoritative name servers of your origin's root domain or zone apex (such as example.com) are functioning correctly. You can use the following commands to find the name servers for your apex origin, with a tool such as dig or nslookup:

dig OriginAPEXDomainName NS +short
nslookup -query=NS OriginAPEXDomainName

When you have the names of your name servers, use the following commands to query the domain name of your origin against them to make sure that each responds with an answer:

dig OriginDomainName @NameServer
nslookup OriginDomainName NameServer

Important
Make sure that you perform this DNS troubleshooting using a computer that's connected to the public internet. CloudFront resolves the origin domain using public DNS on the internet, so it's important to troubleshoot in a similar context.

If your origin is a subdomain whose DNS authority is delegated to a different name server than the root domain, make sure that the name server (NS) and start of authority (SOA) records are configured correctly for the subdomain. You can check for these records using commands similar to the preceding examples. For more information about DNS, see Domain Name System (DNS) concepts in the Amazon Route 53 documentation.

Application Load Balancer origin 502 error

If you use Application Load Balancer
as your origin and receive a 502 error, see How do I troubleshoot Application Load Balancer HTTP 502 errors?

API Gateway origin 502 error

If you use API Gateway and receive a 502 error, see How do I resolve HTTP 502 errors from API Gateway REST APIs with Lambda proxy integration?

HTTP 503 status code (Service Unavailable)

An HTTP 503 status code (Service Unavailable) typically indicates a performance issue on the origin server. In rare cases, it indicates that CloudFront temporarily can't satisfy a request because of resource constraints at an edge location. If you are using Lambda@Edge or CloudFront Functions, the issue might be an execution error or a Lambda@Edge limit exceeded error.

Topics
• Origin server does not have enough capacity to support the request rate
• CloudFront caused the error due to resource constraints at the edge location
• Lambda@Edge or CloudFront Function execution error
• Lambda@Edge limit exceeded

Origin server does not have enough capacity to support the request rate

When an origin server is unavailable or unable to serve incoming requests, it returns an HTTP 503 status code (Service Unavailable). CloudFront then relays the error back to the user. To resolve this issue, try the following solutions:

• If you use Amazon S3 as your origin server:
  • You can send 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per partitioned Amazon S3 prefix. When Amazon S3 returns a 503 Slow Down response, this typically indicates an excessive request rate against a specific Amazon S3 prefix. Because request rates apply per prefix in an S3 bucket, objects should be distributed across multiple prefixes. As the request rate on the prefixes gradually increases, Amazon S3 scales up to handle requests for each of the prefixes separately. As a result, the overall request rate that the bucket handles is a multiple of the number of prefixes.
  • For more information, see Optimizing Amazon S3 performance in the Amazon Simple Storage Service User Guide.
• If you use Elastic Load Balancing as your origin server:
  • Make sure that your backend instances can respond to health checks.
  • Make sure that your load balancer and backend instances can handle the load. For more information, see:
    • How do I troubleshoot 503 errors returned while using Classic Load Balancer?
    • How do I troubleshoot 503 (service unavailable) errors from my Application Load Balancer?
• If you use a custom origin:
  • Examine the application logs to ensure that your origin has sufficient resources, such as memory, CPU, and disk size.
  • If you use Amazon EC2 as the backend, make sure that the instance type has the appropriate resources to fulfill the incoming requests. For more information, see Instance types in the Amazon EC2 User Guide.
• If you use API Gateway:
  • This error is related to the backend integration when the API Gateway API is unable to receive a response. The backend server might be:
    • Overloaded beyond capacity and unable to process new client requests.
    • Under temporary maintenance.
  • To resolve this error, look at your API Gateway application logs to determine if there is an issue with backend capacity, integration, or something else.

CloudFront caused the error due to resource constraints at the edge location

You will receive this error in the rare situation that CloudFront can't route requests to the next best available edge location, and so can't satisfy a request. This error is common when you perform load testing on your CloudFront distribution. To help prevent this, follow the guidelines in the section called "Load testing CloudFront" for avoiding 503 (capacity exceeded) errors. If this happens in your production environment, contact Support.

Lambda@Edge or CloudFront Function execution error

If you're using Lambda@Edge or CloudFront Functions, an HTTP 503 status code can indicate that your function returned an execution error. For more details about how to identify and resolve Lambda@Edge errors, see Test and debug Lambda@Edge functions. For more information about testing CloudFront Functions, see Test functions.

Lambda@Edge limit exceeded

If you're using Lambda@Edge, an HTTP 503 status code can indicate that
Lambda returned an error. The error might be caused by one of the following:
• The number of function executions exceeded one of the quotas that Lambda sets to throttle executions in an AWS Region (concurrent executions or invocation frequency).
• The function exceeded the Lambda function timeout quota.

For more information about the Lambda@Edge quotas, see Quotas on Lambda@Edge. For more details about how to identify and resolve Lambda@Edge errors, see the section called "Test and debug". You can also see the Lambda service quotas in the AWS Lambda Developer Guide.

HTTP 504 status code (Gateway Timeout)

An HTTP 504 status code (Gateway Timeout) indicates that when CloudFront forwarded a request to the origin (because the requested object wasn't in the edge cache), one of the following happened:
• The origin returned an HTTP 504 status code to CloudFront.
• The origin didn't respond before the request expired.

CloudFront will return an HTTP 504 status code if traffic is blocked to the origin by a firewall or security group, or if the origin isn't accessible on the internet. Check for those issues first. Then, if access isn't the problem, explore application delays and server timeouts to help you identify and fix the issues.

Topics
• Configure the firewall on your origin server to allow CloudFront traffic
• Configure the security groups on your origin server to allow CloudFront traffic
• Make your custom origin server accessible on the internet
• Find and fix delayed responses from applications on your origin server

Configure the firewall on your origin server to allow CloudFront traffic

If the firewall on your origin server blocks CloudFront traffic, CloudFront returns an HTTP 504 status code, so it's good to make sure that isn't the issue before checking for other problems. The method that you use to determine if this is an issue with your firewall depends on what system your origin server uses:
• If you use an iptables firewall on a Linux server, you can search for tools and information to help you work with iptables.
• If you use Windows Firewall on a Windows server, see Add or Edit Firewall Rule in the Microsoft documentation.

When you evaluate the firewall configuration on your origin server, look for any firewalls or security rules that block traffic from CloudFront edge locations, based on the published IP address range. For more information, see Locations and IP address ranges of CloudFront edge servers. If you allow the CloudFront IP address range to connect to your origin server, make sure that you update your server's security rules when the published IP address range changes.
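The published ranges are available as a JSON file, so one way to pull the CloudFront prefixes for a firewall rule review is a short command-line sketch like the following (the URL is the standard AWS IP ranges file; the jq filter is one possible way to slice it):

# Print the IPv4 prefixes that AWS publishes for the CLOUDFRONT service
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
  | jq -r '.prefixes[] | select(.service=="CLOUDFRONT") | .ip_prefix'

The same file also contains a CLOUDFRONT_ORIGIN_FACING service entry, which may be the narrower set to allow if you only care about traffic from CloudFront to your origin.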
You can subscribe to an Amazon SNS topic and receive notifications when the IP address range file is updated. After you receive the notification, you can use code to retrieve the file, parse it, and make adjustments for your local environment. For more information, see Subscribe to AWS Public IP Address Changes via Amazon SNS on the AWS News Blog.

Configure the security groups on your origin server to allow CloudFront traffic

If your origin uses Elastic Load Balancing, review the ELB security groups and make sure that the security groups allow inbound traffic from CloudFront. You can also use AWS Lambda to automatically update your security groups to allow inbound traffic from CloudFront.

Make your custom origin server accessible on the internet

If CloudFront can't access your custom origin server because it isn't publicly available on the internet, CloudFront returns an HTTP 504 error. CloudFront edge locations connect to origin servers through the internet. If your custom origin is on a private network, CloudFront can't reach it. Because of this, you can't use private servers, including internal Classic Load Balancers, as origin servers with CloudFront.

To check that internet traffic can connect to your origin server, run the following commands (where OriginDomainName is the domain name for your server):

For HTTPS traffic:
nc -zv OriginDomainName 443
telnet OriginDomainName 443

For HTTP traffic:
nc -zv OriginDomainName 80
telnet OriginDomainName 80

Find and fix delayed responses from applications on your origin server

Server timeouts are often the result of either an application taking a very long time to respond, or a timeout value that is set too low. A quick fix to help avoid HTTP 504 errors is to simply set a higher CloudFront timeout value for your distribution. But we recommend that you first make sure that you address any performance
and latency issues with the application and origin server. Then you can set a reasonable timeout value that helps prevent HTTP 504 errors and provides good responsiveness to users.

Here's an overview of the steps you can take to find performance issues and correct them:
1. Measure the typical and high-load latency (responsiveness) of your web application.
2. Add additional resources, such as CPU or memory, if needed. Take other steps to address issues, such as tuning database queries to accommodate high-load scenarios.
3. If needed, adjust the timeout value for your CloudFront distribution.

Following are details about each step.

Measure typical and high-load latency

To determine if one or more backend web application servers are experiencing high latency, run the following Linux curl command on each server:

curl -w "DNS Lookup Time: %{time_namelookup} \nConnect time: %{time_connect} \nTLS Setup: %{time_appconnect} \nRedirect Time: %{time_redirect} \nTime to first byte: %{time_starttransfer} \nTotal time: %{time_total} \n" -o /dev/null https://www.example.com/yourobject

Note
If you run Windows on your servers, you can search for and download curl for Windows to run a similar command.

As you measure and evaluate the latency of an application that runs on your server, keep in mind the following:
• Latency values are relative to each application. However, a time to first byte in milliseconds, rather than seconds or more, is reasonable.
• If you measure the application latency under normal load and it's fine, be aware that viewers might still experience timeouts under high load. When there is high demand, servers can have delayed responses or not respond at all. To help prevent high-load latency issues, check your server's resources, such as CPU, memory, and disk reads and writes, to make sure that your servers have the capacity to scale for high load.

You can run the following Linux command to check the memory that is used by Apache processes:

watch -n 1 "echo -n 'Apache Processes: ' && ps -C apache2 --no-headers | wc -l && free -m"

• High CPU utilization on the server can significantly reduce an application's performance. If you use an Amazon EC2 instance for your backend server, review the CloudWatch metrics for the server to check the CPU utilization. For more information, see the Amazon CloudWatch User Guide.
Or if you're using your own server, refer to the server help documentation for instructions on how to check CPU utilization. • Check for other potential issues under high loads, such as database queries that run slowly when there's a high volume of requests. Add resources, and tune servers and databases After you evaluate the responsiveness of your applications and servers, make sure that you have sufficient resources in place for typical traffic and high load situations: • If you have your own server, make sure it has enough CPU, memory, and disk space to handle viewer requests, based on your evaluation. • If you use an Amazon EC2 instance as your backend server, make sure that the instance type has the appropriate resources to fulfill incoming requests. For more information, see Instance types in the Amazon EC2 User Guide. In addition, consider the following tuning steps to help avoid timeouts: • If the Time to First Byte value that is returned by the curl command seems high, take steps to improve the performance of your application. Improving application responsiveness will in turn help reduce timeout errors. • Tune database queries to make sure that they can handle high request volumes without slow performance. • Set up keep-alive (persistent) connections on your backend server. This option helps to avoid latencies that occur when connections must be re-established for subsequent requests or users. • If you use Elastic Load Balancing as your origin, the following are possible |
causes for a 504 error:
  • The load balancer fails to establish a connection to the target before the connection timeout expires (10 seconds).
  • The load balancer establishes a connection to the target, but the target doesn't respond before the idle timeout period elapses.
  • The network access control list (ACL) for the subnet doesn't allow traffic from the targets to the load balancer nodes on the ephemeral ports (1024-65535).
  • The target returns a content-length header that is larger than the entity body. The load balancer times out waiting for the missing bytes.
  • The target is a Lambda function and Lambda doesn't respond before the connection timeout expires.

  For more information about reducing latency, see How do I troubleshoot high latency on my ELB Classic Load Balancer?
• If you use MediaTailor as your origin, the following are possible causes for a 504 error:
  • If relative URLs are mishandled, MediaTailor can receive malformed URLs from the players.
  • If MediaPackage is the manifest origin for MediaTailor, MediaPackage 404 manifest errors can cause MediaTailor to return a 504 error.
  • The request to the MediaTailor origin server takes more than 2 seconds to complete.
• If you use Amazon API Gateway as your origin, the following is a possible cause for a 504 error:
  • An integration request takes longer than your API Gateway REST API maximum integration timeout parameter. For more information, see How can I troubleshoot API HTTP 504 timeout errors with API Gateway?

If needed, adjust the CloudFront timeout value

If you have evaluated and addressed slow application performance, origin server capacity, and other issues, but viewers are still experiencing HTTP 504 errors, then you should consider changing the time that is specified in your distribution for origin response timeout. For more information, see the section called "Response timeout (custom and VPC origins only)".

Load testing CloudFront

Traditional load testing methods don't work well with CloudFront because CloudFront uses DNS to balance loads across geographically dispersed edge locations and within each edge location. When a client requests content from CloudFront, the client receives a DNS response that includes a set of IP addresses. If you test by sending requests to just one of the IP addresses that DNS returns, you're testing only a small subset of the resources in one CloudFront edge location, which doesn't accurately represent actual traffic patterns. Depending on the volume of data requested, testing in this way may overload and degrade the performance of that small subset of CloudFront servers.
CloudFront is designed to scale for viewers that have different client IP addresses and different DNS resolvers across multiple geographic regions. To perform load testing that accurately assesses CloudFront performance, we recommend that you do all of the following:
• Send client requests from multiple geographic regions.
• Configure your test so each client makes an independent DNS request. Each client will then receive a different set of IP addresses from DNS.
• For each client that is making requests, spread your client requests across the set of IP addresses that are returned by DNS. This ensures that the load is distributed across multiple servers in a CloudFront edge location.

Notes
• Load testing isn't allowed on cache behaviors that have Lambda@Edge viewer request or viewer response triggers.
• Load testing isn't allowed on origins that have Origin Shield enabled.

Quotas

You can request a CloudFront quota increase by using the following options:
• You can use the Service Quotas console or the AWS Command Line Interface. For more information, see the following topics:
  • Requesting a quota increase in the Service Quotas User Guide
  • request-service-quota-increase in the AWS CLI Command Reference
• If a CloudFront quota isn't available in Service Quotas, use the AWS Support Center Console to create a service quota increase case.

CloudFront is subject to the following quotas.

Topics
• General quotas
• General quotas on distributions
• General quotas on policies
• Quotas on CloudFront Functions
• Quotas on key value stores
• Quotas on Lambda@Edge
• Quotas on SSL certificates
• Quotas on invalidations
• Quotas on key groups
• Quotas on WebSocket connections
• Quotas on field-level encryption
• Quotas on cookies (legacy cache settings)
• Quotas on query strings (legacy cache settings)
• Quotas on headers
• Quotas on multi-tenant distributions
• Related information
General quotas

• Data transfer rate per distribution: 150 Gbps (you can request a higher quota)
• Requests per second per distribution: 250,000 (you can request a higher quota)
• Tags that can be added to a distribution: 50 (you can request a higher quota)
• Files that you can serve per distribution: No quota
• Maximum length of a request or an origin response, including headers and query strings, but not including the body content: 20,480 bytes
• Maximum length of a URL: 8,192 bytes
• Maximum number of real-time log delivery configurations per AWS account: 150

General quotas on distributions

• Alternate domain names (CNAMEs) per distribution: 100 (you can request a higher quota). For more information, see Use custom URLs by adding alternate domain names (CNAMEs).
• Cache behaviors per distribution: 75 (you can request a higher quota)
• Connection attempts per origin: 1-3. For more information, see Connection attempts.
• Connection timeout per origin: 1-10 seconds. For more information, see Connection timeout.
• Distributions per AWS account: 500 (you can request a higher quota). For more information, see Create a distribution.
• Distributions per origin access control: 100 (you can request a higher quota)
• Distributions within a chain of requests to the origin endpoint: 2. We don't recommend placing one distribution in front of another. Exceeding this quota results in a 403 error.
• File compression: range of file sizes that CloudFront compresses: 1,000 to 10,000,000 bytes. For more information, see Serve compressed files.
• Keep-alive timeout per origin: 1-120 seconds (you can request a higher quota). For more information, see Keep-alive timeout (custom and VPC origins only).
• Maximum cacheable file size per HTTP GET response: 50 GB. Only the responses for an HTTP GET are cached. Responses for POST or PUT are not cached.
• Origin access controls per AWS account: 100 (you can request a higher quota)
• Origin access identities per AWS account: 100 (you can request a higher quota)
• Origins per distribution: 100 (you can request a higher quota)
• Origin groups per distribution: 10 (you can request a higher quota)
• Response timeout per origin: 1-120 seconds (you can request a higher quota). For more information, see Response timeout (custom and VPC origins only).
• Staging distributions per AWS account: 20 (you can request a higher quota). For more information, see the section called "Use continuous deployment to safely test changes".
CloudFront is subject to the following quotas.

Topics
• General quotas
• General quotas on distributions
• General quotas on policies
• Quotas on CloudFront Functions
• Quotas on key value stores
• Quotas on Lambda@Edge
• Quotas on SSL certificates
• Quotas on invalidations
• Quotas on key groups
• Quotas on WebSocket connections
• Quotas on field-level encryption
• Quotas on cookies (legacy cache settings)
• Quotas on query strings (legacy cache settings)
• Quotas on headers
• Quotas on multi-tenant distributions
• Related information

General quotas
• Data transfer rate per distribution: 150 Gbps. You can request a higher quota.
• Requests per second per distribution: 250,000. You can request a higher quota.
• Tags that can be added to a distribution: 50. You can request a higher quota.
• Files that you can serve per distribution: no quota.
• Maximum length of a request or an origin response, including headers and query strings, but not including the body content: 20,480 bytes.
• Maximum length of a URL: 8,192 bytes.
• Maximum number of real-time log delivery configurations per AWS account: 150.

General quotas on distributions
• Alternate domain names (CNAMEs) per distribution: 100. You can request a higher quota. For more information, see Use custom URLs by adding alternate domain names (CNAMEs).
• Cache behaviors per distribution: 75. You can request a higher quota.
• Connection attempts per origin: 1-3. For more information, see Connection attempts.
• Connection timeout per origin: 1-10 seconds. For more information, see Connection timeout.
• Distributions per AWS account: 500. You can request a higher quota. For more information, see Create a distribution.
• Distributions per origin access control: 100. You can request a higher quota.
• Distributions within a chain of requests to an origin endpoint: 2. We don't recommend placing one distribution in front of another. Exceeding this quota results in a 403 error.
• File compression: range of file sizes that CloudFront compresses: 1,000 to 10,000,000 bytes. For more information, see Serve compressed files.
• Keep-alive timeout per origin: 1-120 seconds. You can request a higher quota. For more information, see Keep-alive timeout (custom and VPC origins only).
• Maximum cacheable file size per HTTP GET response: 50 GB. Only the responses to HTTP GET requests are cached; responses to POST or PUT requests are not cached.
• Origin access controls per AWS account: 100. You can request a higher quota.
• Origin access identities per AWS account: 100. You can request a higher quota.
• Origins per distribution: 100. You can request a higher quota.
• Origin groups per distribution: 10. You can request a higher quota.
• Response timeout per origin: 1-120 seconds. You can request a higher quota. For more information, see Response timeout (custom and VPC origins only).
• Staging distributions per AWS account: 20. You can request a higher quota. For more information, see the section called “Use continuous deployment to safely test changes”.
• Distributions associated with the same VPC origin: 50.
• VPC origins per AWS account: 25. You can request a higher quota.
• Maximum number of distributions that can be associated with a single Anycast static IP list: 100. You can request a higher quota.
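To check your usage against the distributions-per-account quota above, a minimal boto3 sketch (not one of this guide's official examples) can count the distributions in the current account:

import boto3

cloudfront = boto3.client("cloudfront")

count = 0
paginator = cloudfront.get_paginator("list_distributions")
for page in paginator.paginate():
    # Items is absent when a page contains no distributions.
    count += len(page["DistributionList"].get("Items", []))

print(f"{count} of the default 500 distributions in use")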
General quotas on policies
• Custom cache policies per AWS account: 20 (does not apply to CloudFront managed cache policies). You can request a higher quota.
• Distributions associated with the same cache policy: 100.
• Query strings per cache policy: 10. You can request a higher quota.
• Headers per cache policy: 10. You can request a higher quota.
• Cookies per cache policy: 10. You can request a higher quota.
• Total combined length of all query string, header, and cookie names in a cache policy: 1,024.
• Custom origin request policies per AWS account: 20 (does not apply to CloudFront managed origin request policies). You can request a higher quota.
• Distributions associated with the same origin request policy: 100.
• Query strings per origin request policy: 10. You can request a higher quota.
• Headers per origin request policy: 10. You can request a higher quota.
• Cookies per origin request policy: 10. You can request a higher quota.
• Total combined length of all query string, header, and cookie names in an origin request policy: 1,024.
• Custom response headers policies per AWS account: 20 (does not apply to CloudFront managed response headers policies). You can request a higher quota.
• Distributions associated with the same response headers policy: 100. You can request a higher quota.
• Custom headers per response headers policy: 10. You can request a higher quota.
• Continuous deployment policies per AWS account: 20.

Quotas on CloudFront Functions
• Functions per AWS account: 100. You can request a higher quota.
• Maximum function size: 10 KB. This quota isn't adjustable. To store additional data for your CloudFront Functions, create a key value store and add your key-value pairs. For more information, see Amazon CloudFront KeyValueStore.
• Maximum function memory: 2 MB.
• Distributions associated with the same function: 100.
In addition to these quotas, there are some other restrictions when using CloudFront Functions. For more information, see Restrictions on CloudFront Functions.

Quotas on key value stores
• Maximum size of a key in a key-value pair: 512 bytes.
• Maximum size of the value in a key-value pair: 1 KB.
• Maximum number of key-value pairs that you can update in a single API request: 50 keys or a 3 MB payload, whichever is reached first.
• Maximum size of an individual key value store: 5 MB.
• Maximum number of functions that a single key value store can be associated with: 10.
• Maximum number of key value stores per function: 1.
• Maximum number of key value stores per account: 50. You can request a higher quota.
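The 50-keys-per-request limit above matters when you load many pairs at once. The following is a hedged boto3 sketch of batching writes with the key value store data-plane API; the ARN is a placeholder, and the sketch assumes your boto3 installation supports the cloudfront-keyvaluestore client (it signs requests with SigV4A, which requires the AWS CRT).

import boto3

kvs = boto3.client("cloudfront-keyvaluestore")
KVS_ARN = "arn:aws:cloudfront::111122223333:key-value-store/EXAMPLE"  # placeholder

def put_in_batches(pairs, batch_size=50):
    # Writes use optimistic locking: pass the store's current ETag.
    etag = kvs.describe_key_value_store(KvsARN=KVS_ARN)["ETag"]
    items = [{"Key": k, "Value": v} for k, v in pairs.items()]
    for start in range(0, len(items), batch_size):
        response = kvs.update_keys(
            KvsARN=KVS_ARN,
            IfMatch=etag,
            Puts=items[start:start + batch_size],
        )
        # Each successful write returns a new ETag for the next batch.
        etag = response["ETag"]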
Quotas on Lambda@Edge

General quotas
• Distributions per AWS account that can have Lambda@Edge functions: 500. You can request a higher quota.
• Lambda@Edge functions per distribution: 100. You can request a higher quota.
• Concurrent executions: 1,000 (in each AWS Region). You can request a higher quota. Note: Lambda manages the concurrency quotas for Lambda@Edge, and all Lambda functions in the AWS Region share this quota. For more information, see Function scaling in the AWS Lambda Developer Guide.
• Distributions associated with the same function: 500.
• Maximum compressed size of a Lambda function and any included libraries: 50 MB.
• Lambda@Edge requests per second (each supported AWS Region): 10,000. For more information, see Concurrency quotas in the AWS Lambda Developer Guide.

Quotas that differ by event type
• Function memory size: 128 MB for viewer request and viewer response events; same as the Lambda quotas for origin request and origin response events.
• Function timeout: 5 seconds for viewer events; 30 seconds for origin events. The function can make network calls to resources such as Amazon S3 buckets, DynamoDB tables, or Amazon EC2 instances in AWS Regions.
• Size of a response that is generated by a Lambda function, including headers and body: 40 KB for viewer events; 1 MB for origin events.

Notes
• For a list of additional Lambda@Edge quotas that can be increased from Service Quotas, see Amazon CloudFront endpoints and quotas in the AWS General Reference.
• In addition to these quotas, there are some other restrictions when using Lambda@Edge functions. For more information, see Restrictions on Lambda@Edge.

Quotas on SSL certificates
• SSL certificates per AWS account when serving HTTPS requests using dedicated IP addresses: 2 (there is no quota when serving HTTPS requests using SNI). You can request a higher quota. For more information, see Use HTTPS with CloudFront.
• SSL certificates that can be associated with a CloudFront distribution: 1.
If your SSL certificate is specifically for HTTPS communication between viewers and CloudFront, and if you used AWS Certificate Manager (ACM) or the IAM certificate store to provision or import your certificate, additional quotas apply. For more information, see Quotas on using SSL/TLS certificates with CloudFront (HTTPS between viewers and CloudFront only).
There are also quotas on the number of SSL certificates that you can import into AWS Certificate Manager (ACM) or upload to AWS Identity and Access Management (IAM). For more information, see Increase the quotas for SSL/TLS certificates.

Quotas on invalidations
• File invalidation: maximum number of files allowed in active invalidation requests, excluding wildcard invalidations: 3,000. For more information, see Invalidate files to remove content.
• File invalidation: maximum number of active wildcard invalidations allowed: 15.
• File invalidation: maximum number of files that one wildcard invalidation can process: no quota.
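Because active invalidation requests are capped at 3,000 files, tools that invalidate long path lists usually submit them in batches. The following is a minimal boto3 sketch, not one of this guide's official examples; "EDFDVBD6EXAMPLE" is the placeholder distribution ID used elsewhere in this guide, and a production tool would also wait for earlier invalidations to complete before submitting more.

import time
import boto3

cloudfront = boto3.client("cloudfront")

def invalidate_in_batches(distribution_id, paths, batch_size=3000):
    for start in range(0, len(paths), batch_size):
        batch = paths[start:start + batch_size]
        cloudfront.create_invalidation(
            DistributionId=distribution_id,
            InvalidationBatch={
                "Paths": {"Quantity": len(batch), "Items": batch},
                # CallerReference must be unique for each request.
                "CallerReference": f"batch-{start}-{int(time.time())}",
            },
        )

invalidate_in_batches("EDFDVBD6EXAMPLE", ["/example-path/*"])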
Quotas on key groups
• Public keys in a single key group: 5. You can request a higher quota.
• Key groups associated with a single cache behavior: 4. You can request a higher quota.
• Key groups per AWS account: 10. You can request a higher quota.
• Distributions associated with a single key group: 100. You can request a higher quota.

Quotas on WebSocket connections
• Origin response timeout (idle timeout): 10 minutes. If CloudFront hasn't detected any bytes sent from the origin to the client within the past 10 minutes, the connection is assumed to be idle and is closed.

Quotas on field-level encryption
• Maximum length of a field to encrypt: 16 KB. For more information, see Use field-level encryption to help protect sensitive data.
• Maximum number of fields in a request body when field-level encryption is configured: 10.
• Maximum length of a request body when field-level encryption is configured: 1 MB.
• Maximum number of field-level encryption configurations that can be associated with one AWS account: 10.
• Maximum number of field-level encryption profiles that can be associated with one AWS account: 10.
• Maximum number of public keys that can be added to one AWS account: 10.
• Maximum number of fields to encrypt that can be specified in one profile: 10.
• Maximum number of CloudFront distributions that can be associated with a field-level encryption configuration: 20.
• Maximum number of query argument profile mappings that can be included in a field-level encryption configuration: 5.

Quotas on cookies (legacy cache settings)
These quotas apply to CloudFront's legacy cache settings. We recommend using a cache policy or origin request policy instead of the legacy settings.
• Cookies per cache behavior: 10. You can request a higher quota. For more information, see Cache content based on cookies.
• Total number of bytes in cookie names: 512 minus the number of cookies (doesn't apply if you configure CloudFront to forward all cookies to the origin).

Quotas on query strings (legacy cache settings)
These quotas apply to CloudFront's legacy cache settings. We recommend using a cache policy or origin request policy instead of the legacy settings.
• Maximum number of characters in a query string: 128 characters.
• Maximum number of characters total for all query strings in the same parameter: 512 characters.
• Query strings per cache behavior: 10. You can request a higher quota. For more information, see Cache content based on query string parameters.
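Because the two sections above recommend a cache policy over the legacy cache settings, here is a minimal boto3 sketch of creating one. The policy name, TTL values, and query string list are illustrative assumptions, not values from this guide.

import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_cache_policy(
    CachePolicyConfig={
        "Name": "example-cache-policy",  # assumed name
        "MinTTL": 1,
        "DefaultTTL": 86400,
        "MaxTTL": 31536000,
        "ParametersInCacheKeyAndForwardedToOrigin": {
            "EnableAcceptEncodingGzip": True,
            "EnableAcceptEncodingBrotli": True,
            "HeadersConfig": {"HeaderBehavior": "none"},
            "CookiesConfig": {"CookieBehavior": "none"},
            # Up to 10 query strings per cache policy by default.
            "QueryStringsConfig": {
                "QueryStringBehavior": "whitelist",
                "QueryStrings": {"Quantity": 1, "Items": ["version"]},
            },
        },
    }
)
print(response["CachePolicy"]["Id"])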
Quotas on headers
• Headers per cache behavior (legacy cache settings): 10. You can request a higher quota. For more information, see the section called “Cache content based on request headers”.
• Forward headers per cache behavior: 25. You can request a higher quota.
• Custom headers: maximum number of custom headers that you can configure CloudFront to add to origin requests: 30. You can request a higher quota. For more information, see the section called “Add custom headers to origin requests”.
• Custom headers: maximum number of custom headers that you can add to a response headers policy: 10. You can request a higher quota.
• Custom headers: maximum length of a header name: 256 characters.
• Custom headers: maximum length of a header value: 1,783 characters.
• Custom headers: maximum length of all header values and names combined: 10,240 characters.
• Maximum length of the Content-Security-Policy header value: 1,783 characters. You can request a higher quota.
• Maximum length of a CORS (Access-Control-Allow-Origin) header value: 1,783 characters.
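To illustrate the 10-custom-headers-per-policy quota above, the following is a minimal boto3 sketch that creates a response headers policy with one custom header. The policy and header names are illustrative assumptions.

import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_response_headers_policy(
    ResponseHeadersPolicyConfig={
        "Name": "example-custom-headers",  # assumed name
        "CustomHeadersConfig": {
            # Quantity must match the number of Items (up to 10 by default).
            "Quantity": 1,
            "Items": [
                {"Header": "X-Example", "Value": "example-value", "Override": True}
            ],
        },
    }
)
print(response["ResponseHeadersPolicy"]["Id"])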
Quotas on multi-tenant distributions
• Maximum number of distribution tenants per AWS account: 10,000. You can request a higher quota.
• Maximum number of multi-tenant distributions per AWS account: 20. You can request a higher quota.
• Maximum number of connection groups per AWS account: 100. You can request a higher quota.
• Maximum number of aliases per distribution tenant: 100. You can request a higher quota.
• Maximum number of parameters per distribution tenant: 5. You can request a higher quota.
• Maximum number of parameters per multi-tenant distribution: 5. You can request a higher quota.
• Maximum number of parameters in a field in a multi-tenant distribution: 2. You can request a higher quota.
• Maximum number of connection groups per Anycast static IP list: 5. You can request a higher quota.
For more information about multi-tenant distributions, see Understand how multi-tenant distributions work.

Related information
For more information, see Amazon CloudFront endpoints and quotas in the AWS General Reference.

Code examples for CloudFront using AWS SDKs
The following code examples show how to use CloudFront with an AWS software development kit (SDK).
Actions are code excerpts from larger programs and must be run in context. While actions show you how to call individual service functions, you can see actions in context in their related scenarios.
Scenarios are code examples that show you how to accomplish specific tasks by calling multiple functions within a service or combined with other AWS services.
For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.

Code examples
• Basic examples for CloudFront using AWS SDKs
• Actions for CloudFront using AWS SDKs
• Use CreateDistribution with an AWS SDK or CLI
• Use CreateFunction with an AWS SDK
• Use CreateInvalidation with a CLI
• Use CreateKeyGroup with an AWS SDK
• Use CreatePublicKey with an AWS SDK or CLI
• Use DeleteDistribution with an AWS SDK or CLI
• Use GetCloudFrontOriginAccessIdentity with a CLI
• Use GetCloudFrontOriginAccessIdentityConfig with a CLI
• Use GetDistribution with a CLI
• Use GetDistributionConfig with an AWS SDK or CLI
• Use ListCloudFrontOriginAccessIdentities with a CLI
• Use ListDistributions with an AWS SDK or CLI
• Use UpdateDistribution with an AWS SDK or CLI
• Scenarios for CloudFront using AWS SDKs
• Delete CloudFront signing resources using AWS SDK
• Create signed URLs and cookies using an AWS SDK
• CloudFront Functions examples for CloudFront
• Add HTTP security headers to a CloudFront Functions viewer response event
• Add a CORS header to a CloudFront Functions viewer response event
• Add a cache control header to a CloudFront Functions viewer response event
• Add a true client IP header to a CloudFront Functions viewer request event
• Add an origin header to a CloudFront Functions viewer request event
• Add index.html to request URLs without a file name in a CloudFront Functions viewer request event
• Normalize query string parameters in a CloudFront Functions viewer request
• Redirect to a new URL in a CloudFront Functions viewer request event
• Rewrite a request URI based on KeyValueStore configuration for a CloudFront Functions viewer request event
• Route requests to an origin closer to the viewer in a CloudFront Functions viewer request event
• Use key-value pairs in a CloudFront Functions viewer request
• Validate a simple token in a CloudFront Functions viewer request
Basic examples for CloudFront using AWS SDKs
The following code examples show how to use the basics of Amazon CloudFront with AWS SDKs.
Examples
• Actions for CloudFront using AWS SDKs
• Use CreateDistribution with an AWS SDK or CLI
• Use CreateFunction with an AWS SDK
• Use CreateInvalidation with a CLI
• Use CreateKeyGroup with an AWS SDK
• Use CreatePublicKey with an AWS SDK or CLI
• Use DeleteDistribution with an AWS SDK or CLI
• Use GetCloudFrontOriginAccessIdentity with a CLI
• Use GetCloudFrontOriginAccessIdentityConfig with a CLI
• Use GetDistribution with a CLI
• Use GetDistributionConfig with an AWS SDK or CLI
• Use ListCloudFrontOriginAccessIdentities with a CLI
• Use ListDistributions with an AWS SDK or CLI
• Use UpdateDistribution with an AWS SDK or CLI

Actions for CloudFront using AWS
AmazonCloudFront_DevGuide-375 | AmazonCloudFront_DevGuide.pdf | 375 | SDKs The following code examples demonstrate how to perform individual CloudFront actions with AWS SDKs. Each example includes a link to GitHub, where you can find instructions for setting up and running the code. These excerpts call the CloudFront API and are code excerpts from larger programs that must be run in context. You can see actions in context in Scenarios for CloudFront using AWS SDKs . The following examples include only the most commonly used actions. For a complete list, see the Amazon CloudFront API Reference. Examples • Use CreateDistribution with an AWS SDK or CLI • Use CreateFunction with an AWS SDK • Use CreateInvalidation with a CLI • Use CreateKeyGroup with an AWS SDK • Use CreatePublicKey with an AWS SDK or CLI • Use DeleteDistribution with an AWS SDK or CLI • Use GetCloudFrontOriginAccessIdentity with a CLI • Use GetCloudFrontOriginAccessIdentityConfig with a CLI • Use GetDistribution with a CLI • Use GetDistributionConfig with an AWS SDK or CLI • Use ListCloudFrontOriginAccessIdentities with a CLI • Use ListDistributions with an AWS SDK or CLI • Use UpdateDistribution with an AWS SDK or CLI Use CreateDistribution with an AWS SDK or CLI The following code examples show how to use CreateDistribution. Actions 1069 Amazon CloudFront CLI AWS CLI Developer Guide Example 1: To create a CloudFront distribution The following example creates a distribution for an S3 bucket named amzn-s3-demo- bucket, and also specifies index.html as the default root object, using command line arguments. aws cloudfront create-distribution \ --origin-domain-name amzn-s3-demo-bucket.s3.amazonaws.com \ --default-root-object index.html Output: { "Location": "https://cloudfront.amazonaws.com/2019-03-26/distribution/ EMLARXS9EXAMPLE", "ETag": "E9LHASXEXAMPLE", "Distribution": { "Id": "EMLARXS9EXAMPLE", "ARN": "arn:aws:cloudfront::123456789012:distribution/EMLARXS9EXAMPLE", "Status": "InProgress", "LastModifiedTime": "2019-11-22T00:55:15.705Z", "InProgressInvalidationBatches": 0, "DomainName": "d111111abcdef8.cloudfront.net", "ActiveTrustedSigners": { "Enabled": false, "Quantity": 0 }, "DistributionConfig": { "CallerReference": "cli-example", "Aliases": { "Quantity": 0 }, "DefaultRootObject": "index.html", "Origins": { "Quantity": 1, "Items": [ { "Id": "amzn-s3-demo-bucket.s3.amazonaws.com-cli-example", "DomainName": "amzn-s3-demo-bucket.s3.amazonaws.com", Actions 1070 Amazon CloudFront Developer Guide "OriginPath": "", "CustomHeaders": { "Quantity": 0 }, "S3OriginConfig": { "OriginAccessIdentity": "" } } ] }, "OriginGroups": { "Quantity": 0 }, "DefaultCacheBehavior": { "TargetOriginId": "amzn-s3-demo-bucket.s3.amazonaws.com-cli- example", "ForwardedValues": { "QueryString": false, "Cookies": { "Forward": "none" }, "Headers": { "Quantity": 0 }, "QueryStringCacheKeys": { "Quantity": 0 } }, "TrustedSigners": { "Enabled": false, "Quantity": 0 }, "ViewerProtocolPolicy": "allow-all", "MinTTL": 0, "AllowedMethods": { "Quantity": 2, "Items": [ "HEAD", "GET" ], "CachedMethods": { "Quantity": 2, "Items": [ "HEAD", Actions 1071 Amazon CloudFront Developer Guide "GET" ] } }, "SmoothStreaming": false, "DefaultTTL": 86400, "MaxTTL": 31536000, "Compress": false, "LambdaFunctionAssociations": { "Quantity": 0 }, "FieldLevelEncryptionId": "" }, "CacheBehaviors": { "Quantity": 0 }, "CustomErrorResponses": { "Quantity": 0 }, "Comment": "", "Logging": { "Enabled": false, "IncludeCookies": false, "Bucket": "", "Prefix": "" }, "PriceClass": 
"PriceClass_All", "Enabled": true, "ViewerCertificate": { "CloudFrontDefaultCertificate": true, "MinimumProtocolVersion": "TLSv1", "CertificateSource": "cloudfront" }, "Restrictions": { "GeoRestriction": { "RestrictionType": "none", "Quantity": 0 } }, "WebACLId": "", "HttpVersion": "http2", "IsIPV6Enabled": true } } Actions 1072 Amazon CloudFront } Developer Guide Example 2: To create a CloudFront distribution using a JSON file The following example creates a distribution for an S3 bucket named amzn-s3-demo- bucket, and also specifies index.html as the default root object, using a JSON file. aws cloudfront create-distribution \ --distribution-config file://dist-config.json Contents of dist-config.json: { "CallerReference": "cli-example", "Aliases": { "Quantity": 0 }, "DefaultRootObject": "index.html", "Origins": { "Quantity": 1, "Items": [ { "Id": "amzn-s3-demo-bucket.s3.amazonaws.com-cli-example", "DomainName": "amzn-s3-demo-bucket.s3.amazonaws.com", "OriginPath": "", "CustomHeaders": { "Quantity": 0 }, "S3OriginConfig": { "OriginAccessIdentity": "" } } ] }, "OriginGroups": { "Quantity": 0 }, "DefaultCacheBehavior": { "TargetOriginId": "amzn-s3-demo-bucket.s3.amazonaws.com-cli-example", "ForwardedValues": { "QueryString": false, "Cookies": { "Forward": "none" Actions 1073 Amazon CloudFront Developer Guide }, "Headers": { "Quantity": 0 }, "QueryStringCacheKeys": { "Quantity": 0 } }, "TrustedSigners": { "Enabled": false, "Quantity": 0 }, "ViewerProtocolPolicy": "allow-all", "MinTTL": 0, "AllowedMethods": { "Quantity": 2, "Items": [ "HEAD", "GET" ], "CachedMethods": { "Quantity": 2, "Items": [ "HEAD", "GET" ] } }, "SmoothStreaming": false, "DefaultTTL": 86400, "MaxTTL": 31536000, "Compress": false, "LambdaFunctionAssociations": { "Quantity": 0 }, "FieldLevelEncryptionId": "" }, "CacheBehaviors": { "Quantity": 0 }, "CustomErrorResponses": { "Quantity": 0 }, "Comment": "", Actions 1074 Amazon CloudFront Developer Guide "Logging": { "Enabled": false, "IncludeCookies": false, "Bucket": "", "Prefix": "" }, "PriceClass": "PriceClass_All", "Enabled": true, "ViewerCertificate": { "CloudFrontDefaultCertificate": true, "MinimumProtocolVersion": "TLSv1", "CertificateSource": "cloudfront" }, "Restrictions": { "GeoRestriction": { "RestrictionType": "none", "Quantity": 0 } }, "WebACLId": "", "HttpVersion": "http2", "IsIPV6Enabled": true } See Example 1 for sample output. • For API details, see CreateDistribution in AWS CLI Command Reference. Java SDK for Java 2.x Note There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository. The following example uses an Amazon Simple Storage Service (Amazon S3) bucket as a content origin. Actions 1075 Amazon CloudFront Developer Guide After creating the distribution, the code creates a CloudFrontWaiter to wait until the distribution is deployed before returning the distribution. 
import org.slf4j.Logger; import org.slf4j.LoggerFactory; import software.amazon.awssdk.core.internal.waiters.ResponseOrException; import software.amazon.awssdk.services.cloudfront.CloudFrontClient; import software.amazon.awssdk.services.cloudfront.model.CreateDistributionResponse; import software.amazon.awssdk.services.cloudfront.model.Distribution; import software.amazon.awssdk.services.cloudfront.model.GetDistributionResponse; import software.amazon.awssdk.services.cloudfront.model.ItemSelection; import software.amazon.awssdk.services.cloudfront.model.Method; import software.amazon.awssdk.services.cloudfront.model.ViewerProtocolPolicy; import |
AmazonCloudFront_DevGuide-376 | AmazonCloudFront_DevGuide.pdf | 376 | API details, see CreateDistribution in AWS CLI Command Reference. Java SDK for Java 2.x Note There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository. The following example uses an Amazon Simple Storage Service (Amazon S3) bucket as a content origin. Actions 1075 Amazon CloudFront Developer Guide After creating the distribution, the code creates a CloudFrontWaiter to wait until the distribution is deployed before returning the distribution. import org.slf4j.Logger; import org.slf4j.LoggerFactory; import software.amazon.awssdk.core.internal.waiters.ResponseOrException; import software.amazon.awssdk.services.cloudfront.CloudFrontClient; import software.amazon.awssdk.services.cloudfront.model.CreateDistributionResponse; import software.amazon.awssdk.services.cloudfront.model.Distribution; import software.amazon.awssdk.services.cloudfront.model.GetDistributionResponse; import software.amazon.awssdk.services.cloudfront.model.ItemSelection; import software.amazon.awssdk.services.cloudfront.model.Method; import software.amazon.awssdk.services.cloudfront.model.ViewerProtocolPolicy; import software.amazon.awssdk.services.cloudfront.waiters.CloudFrontWaiter; import software.amazon.awssdk.services.s3.S3Client; import java.time.Instant; public class CreateDistribution { private static final Logger logger = LoggerFactory.getLogger(CreateDistribution.class); public static Distribution createDistribution(CloudFrontClient cloudFrontClient, S3Client s3Client, final String bucketName, final String keyGroupId, final String originAccessControlId) { final String region = s3Client.headBucket(b -> b.bucket(bucketName)).sdkHttpResponse().headers() .get("x-amz-bucket-region").get(0); final String originDomain = bucketName + ".s3." + region + ".amazonaws.com"; String originId = originDomain; // Use the originDomain value for the originId. // The service API requires some deprecated methods, such as // DefaultCacheBehavior.Builder#minTTL and #forwardedValue. 
CreateDistributionResponse createDistResponse = cloudFrontClient.createDistribution(builder -> builder .distributionConfig(b1 -> b1 Actions 1076 Amazon CloudFront Developer Guide .origins(b2 -> b2 .quantity(1) .items(b3 -> b3 .domainName(originDomain) .id(originId) .s3OriginConfig(builder4 -> builder4 .originAccessIdentity( "")) .originAccessControlId( originAccessControlId))) .defaultCacheBehavior(b2 -> b2 .viewerProtocolPolicy(ViewerProtocolPolicy.ALLOW_ALL) .targetOriginId(originId) .minTTL(200L) .forwardedValues(b5 -> b5 .cookies(cp -> cp .forward(ItemSelection.NONE)) .queryString(true)) .trustedKeyGroups(b3 -> b3 .quantity(1) .items(keyGroupId) .enabled(true)) .allowedMethods(b4 -> b4 .quantity(2) Actions 1077 Amazon CloudFront Developer Guide .items(Method.HEAD, Method.GET) .cachedMethods(b5 -> b5 .quantity(2) .items(Method.HEAD, Method.GET)))) .cacheBehaviors(b -> b .quantity(1) .items(b2 -> b2 .pathPattern("/index.html") .viewerProtocolPolicy( ViewerProtocolPolicy.ALLOW_ALL) .targetOriginId(originId) .trustedKeyGroups(b3 -> b3 .quantity(1) .items(keyGroupId) .enabled(true)) .minTTL(200L) .forwardedValues(b4 -> b4 .cookies(cp -> cp .forward(ItemSelection.NONE)) .queryString(true)) .allowedMethods(b5 -> b5.quantity(2) .items(Method.HEAD, Actions 1078 Amazon CloudFront Developer Guide Method.GET) .cachedMethods(b6 -> b6 .quantity(2) .items(Method.HEAD, Method.GET))))) .enabled(true) .comment("Distribution built with java") .callerReference(Instant.now().toString()))); final Distribution distribution = createDistResponse.distribution(); logger.info("Distribution created. DomainName: [{}] Id: [{}]", distribution.domainName(), distribution.id()); logger.info("Waiting for distribution to be deployed ..."); try (CloudFrontWaiter cfWaiter = CloudFrontWaiter.builder().client(cloudFrontClient).build()) { ResponseOrException<GetDistributionResponse> responseOrException = cfWaiter .waitUntilDistributionDeployed(builder -> builder.id(distribution.id())) .matched(); responseOrException.response() .orElseThrow(() -> new RuntimeException("Distribution not created")); logger.info("Distribution deployed. DomainName: [{}] Id: [{}]", distribution.domainName(), distribution.id()); } return distribution; } } • For API details, see CreateDistribution in AWS SDK for Java 2.x API Reference. Actions 1079 Amazon CloudFront PowerShell Tools for PowerShell Developer Guide Example 1: Creates a basic CloudFront distribution, configured with logging and caching. 
$origin = New-Object Amazon.CloudFront.Model.Origin $origin.DomainName = "amzn-s3-demo-bucket.s3.amazonaws.com" $origin.Id = "UniqueOrigin1" $origin.S3OriginConfig = New-Object Amazon.CloudFront.Model.S3OriginConfig $origin.S3OriginConfig.OriginAccessIdentity = "" New-CFDistribution ` -DistributionConfig_Enabled $true ` -DistributionConfig_Comment "Test distribution" ` -Origins_Item $origin ` -Origins_Quantity 1 ` -Logging_Enabled $true ` -Logging_IncludeCookie $true ` -Logging_Bucket amzn-s3-demo-logging-bucket.s3.amazonaws.com ` -Logging_Prefix "help/" ` -DistributionConfig_CallerReference Client1 ` -DistributionConfig_DefaultRootObject index.html ` -DefaultCacheBehavior_TargetOriginId $origin.Id ` -ForwardedValues_QueryString $true ` -Cookies_Forward all ` -WhitelistedNames_Quantity 0 ` -TrustedSigners_Enabled $false ` -TrustedSigners_Quantity 0 ` -DefaultCacheBehavior_ViewerProtocolPolicy allow-all ` -DefaultCacheBehavior_MinTTL 1000 ` -DistributionConfig_PriceClass "PriceClass_All" ` -CacheBehaviors_Quantity 0 ` -Aliases_Quantity 0 • For API details, see CreateDistribution in AWS Tools for PowerShell Cmdlet Reference. For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions. Actions 1080 Amazon CloudFront Developer Guide Use CreateFunction with an AWS SDK The following code example shows how to use CreateFunction. Java SDK for Java 2.x Note There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository. import software.amazon.awssdk.core.SdkBytes; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.cloudfront.CloudFrontClient; import software.amazon.awssdk.services.cloudfront.model.CloudFrontException; import software.amazon.awssdk.services.cloudfront.model.CreateFunctionRequest; import software.amazon.awssdk.services.cloudfront.model.CreateFunctionResponse; import software.amazon.awssdk.services.cloudfront.model.FunctionConfig; import software.amazon.awssdk.services.cloudfront.model.FunctionRuntime; import java.io.InputStream; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. 
* * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get- started.html */ public class CreateFunction { public static void main(String[] args) { final String usage = """ Usage: <functionName> <filePath> Where: Actions 1081 Amazon CloudFront Developer Guide functionName - The name of the function to create.\s filePath - The path to a file that contains the application logic for the function.\s """; if (args.length != 2) { System.out.println(usage); System.exit(1); } String functionName = args[0]; String filePath = args[1]; CloudFrontClient cloudFrontClient = CloudFrontClient.builder() .region(Region.AWS_GLOBAL) .build(); String funArn = createNewFunction(cloudFrontClient, functionName, filePath); System.out.println("The function ARN is " + funArn); cloudFrontClient.close(); } public static String createNewFunction(CloudFrontClient cloudFrontClient, String functionName, String filePath) { try { InputStream fileIs = CreateFunction.class.getClassLoader().getResourceAsStream(filePath); SdkBytes functionCode = SdkBytes.fromInputStream(fileIs); FunctionConfig config = FunctionConfig.builder() .comment("Created by using the CloudFront Java API") .runtime(FunctionRuntime.CLOUDFRONT_JS_1_0) .build(); CreateFunctionRequest functionRequest = CreateFunctionRequest.builder() .name(functionName) .functionCode(functionCode) .functionConfig(config) .build(); CreateFunctionResponse response = cloudFrontClient.createFunction(functionRequest); return response.functionSummary().functionMetadata().functionARN(); Actions 1082 Amazon CloudFront Developer Guide } catch (CloudFrontException e) { System.err.println(e.getMessage()); System.exit(1); } return ""; } } • For API details, see CreateFunction in AWS SDK for Java 2.x API Reference. For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This |
AmazonCloudFront_DevGuide-377 | AmazonCloudFront_DevGuide.pdf | 377 | } public static String createNewFunction(CloudFrontClient cloudFrontClient, String functionName, String filePath) { try { InputStream fileIs = CreateFunction.class.getClassLoader().getResourceAsStream(filePath); SdkBytes functionCode = SdkBytes.fromInputStream(fileIs); FunctionConfig config = FunctionConfig.builder() .comment("Created by using the CloudFront Java API") .runtime(FunctionRuntime.CLOUDFRONT_JS_1_0) .build(); CreateFunctionRequest functionRequest = CreateFunctionRequest.builder() .name(functionName) .functionCode(functionCode) .functionConfig(config) .build(); CreateFunctionResponse response = cloudFrontClient.createFunction(functionRequest); return response.functionSummary().functionMetadata().functionARN(); Actions 1082 Amazon CloudFront Developer Guide } catch (CloudFrontException e) { System.err.println(e.getMessage()); System.exit(1); } return ""; } } • For API details, see CreateFunction in AWS SDK for Java 2.x API Reference. For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions. Use CreateInvalidation with a CLI The following code examples show how to use CreateInvalidation. CLI AWS CLI To create an invalidation for a CloudFront distribution The following create-invalidation example creates an invalidation for the specified files in the specified CloudFront distribution: aws cloudfront create-invalidation \ --distribution-id EDFDVBD6EXAMPLE \ --paths "/example-path/example-file.jpg" "/example-path/example-file2.png" Output: { "Location": "https://cloudfront.amazonaws.com/2019-03-26/distribution/ EDFDVBD6EXAMPLE/invalidation/I1JLWSDAP8FU89", "Invalidation": { "Id": "I1JLWSDAP8FU89", "Status": "InProgress", "CreateTime": "2019-12-05T18:24:51.407Z", Actions 1083 Amazon CloudFront Developer Guide "InvalidationBatch": { "Paths": { "Quantity": 2, "Items": [ "/example-path/example-file2.png", "/example-path/example-file.jpg" ] }, "CallerReference": "cli-1575570291-670203" } } } In the previous example, the AWS CLI automatically generated a random CallerReference. To specify your own CallerReference, or to avoid passing the invalidation parameters as command line arguments, you can use a JSON file. The following example creates an invalidation for two files, by providing the invalidation parameters in a JSON file named inv-batch.json: aws cloudfront create-invalidation \ --distribution-id EDFDVBD6EXAMPLE \ --invalidation-batch file://inv-batch.json Contents of inv-batch.json: { "Paths": { "Quantity": 2, "Items": [ "/example-path/example-file.jpg", "/example-path/example-file2.png" ] }, "CallerReference": "cli-example" } Output: { "Location": "https://cloudfront.amazonaws.com/2019-03-26/distribution/ EDFDVBD6EXAMPLE/invalidation/I2J0I21PCUYOIK", Actions 1084 Amazon CloudFront Developer Guide "Invalidation": { "Id": "I2J0I21PCUYOIK", "Status": "InProgress", "CreateTime": "2019-12-05T18:40:49.413Z", "InvalidationBatch": { "Paths": { "Quantity": 2, "Items": [ "/example-path/example-file.jpg", "/example-path/example-file2.png" ] }, "CallerReference": "cli-example" } } } • For API details, see CreateInvalidation in AWS CLI Command Reference. PowerShell Tools for PowerShell Example 1: This example creates a new invalidation on a distribution with an ID of EXAMPLENSTXAXE. 
The CallerReference is a unique ID chosen by the user; in this case, a time stamp representing May 15, 2019 at 9:00 a.m. is used. The $Paths variable stores three paths to image and media files that the user does not want as part of the distribution's cache. The -Paths_Quantity parameter value is the total number of paths specified in the -Paths_Item parameter. $Paths = "/images/*.gif", "/images/image1.jpg", "/videos/*.mp4" New-CFInvalidation -DistributionId "EXAMPLENSTXAXE" - InvalidationBatch_CallerReference 20190515090000 -Paths_Item $Paths - Paths_Quantity 3 Output: Invalidation Location ------------ -------- Actions 1085 Amazon CloudFront Developer Guide Amazon.CloudFront.Model.Invalidation https://cloudfront.amazonaws.com/2018-11-05/ distribution/EXAMPLENSTXAXE/invalidation/EXAMPLE8NOK9H • For API details, see CreateInvalidation in AWS Tools for PowerShell Cmdlet Reference. For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions. Use CreateKeyGroup with an AWS SDK The following code example shows how to use CreateKeyGroup. Java SDK for Java 2.x Note There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository. A key group requires at least one public key that is used to verify signed URLs or cookies. import org.slf4j.Logger; import org.slf4j.LoggerFactory; import software.amazon.awssdk.services.cloudfront.CloudFrontClient; import java.util.UUID; public class CreateKeyGroup { private static final Logger logger = LoggerFactory.getLogger(CreateKeyGroup.class); public static String createKeyGroup(CloudFrontClient cloudFrontClient, String publicKeyId) { String keyGroupId = cloudFrontClient.createKeyGroup(b -> b.keyGroupConfig(c -> c .items(publicKeyId) .name("JavaKeyGroup" + UUID.randomUUID()))) .keyGroup().id(); Actions 1086 Amazon CloudFront Developer Guide logger.info("KeyGroup created with ID: [{}]", keyGroupId); return keyGroupId; } } • For API details, see CreateKeyGroup in AWS SDK for Java 2.x API Reference. For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions. Use CreatePublicKey with an AWS SDK or CLI The following code examples show how to use CreatePublicKey. CLI AWS CLI To create a CloudFront public key The following example creates a CloudFront public key by providing the parameters in a JSON file named pub-key-config.json. Before you can use this command, you must have a PEM-encoded public key. For more information, see Create an RSA Key Pair in the Amazon CloudFront Developer Guide. aws cloudfront create-public-key \ --public-key-config file://pub-key-config.json The file pub-key-config.json is a JSON document in the current folder that contains the following. Note that the public key is encoded in PEM format. 
{ "CallerReference": "cli-example", "Name": "ExampleKey", "EncodedKey": "-----BEGIN PUBLIC KEY----- \nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAxPMbCA2Ks0lnd7IR+3pw \nwd3H/7jPGwj8bLUmore7bX+oeGpZ6QmLAe/1UOWcmZX2u70dYcSIzB1ofZtcn4cJ \nenHBAzO3ohBY/L1tQGJfS2A+omnN6H16VZE1JCK8XSJyfze7MDLcUyHZETdxuvRb \nA9X343/vMAuQPnhinFJ8Wdy8YBXSPpy7r95ylUQd9LfYTBzVZYG2tSesplcOkjM3\n2Uu Actions 1087 Amazon CloudFront Developer Guide +oMWxQAw1NINnSLPinMVsutJy6ZqlV3McWNWe4T+STGtWhrPNqJEn45sIcCx4\nq +kGZ2NQ0FyIyT2eiLKOX5Rgb/a36E/aMk4VoDsaenBQgG7WLTnstb9sr7MIhS6A\nrwIDAQAB\n----- END PUBLIC KEY-----\n", "Comment": "example public key" } Output: { "Location": "https://cloudfront.amazonaws.com/2019-03-26/public-key/ KDFB19YGCR002", "ETag": "E2QWRUHEXAMPLE", "PublicKey": { "Id": "KDFB19YGCR002", "CreatedTime": "2019-12-05T18:51:43.781Z", "PublicKeyConfig": { "CallerReference": "cli-example", "Name": |
AmazonCloudFront_DevGuide-378 | AmazonCloudFront_DevGuide.pdf | 378 | you must have a PEM-encoded public key. For more information, see Create an RSA Key Pair in the Amazon CloudFront Developer Guide. aws cloudfront create-public-key \ --public-key-config file://pub-key-config.json The file pub-key-config.json is a JSON document in the current folder that contains the following. Note that the public key is encoded in PEM format. { "CallerReference": "cli-example", "Name": "ExampleKey", "EncodedKey": "-----BEGIN PUBLIC KEY----- \nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAxPMbCA2Ks0lnd7IR+3pw \nwd3H/7jPGwj8bLUmore7bX+oeGpZ6QmLAe/1UOWcmZX2u70dYcSIzB1ofZtcn4cJ \nenHBAzO3ohBY/L1tQGJfS2A+omnN6H16VZE1JCK8XSJyfze7MDLcUyHZETdxuvRb \nA9X343/vMAuQPnhinFJ8Wdy8YBXSPpy7r95ylUQd9LfYTBzVZYG2tSesplcOkjM3\n2Uu Actions 1087 Amazon CloudFront Developer Guide +oMWxQAw1NINnSLPinMVsutJy6ZqlV3McWNWe4T+STGtWhrPNqJEn45sIcCx4\nq +kGZ2NQ0FyIyT2eiLKOX5Rgb/a36E/aMk4VoDsaenBQgG7WLTnstb9sr7MIhS6A\nrwIDAQAB\n----- END PUBLIC KEY-----\n", "Comment": "example public key" } Output: { "Location": "https://cloudfront.amazonaws.com/2019-03-26/public-key/ KDFB19YGCR002", "ETag": "E2QWRUHEXAMPLE", "PublicKey": { "Id": "KDFB19YGCR002", "CreatedTime": "2019-12-05T18:51:43.781Z", "PublicKeyConfig": { "CallerReference": "cli-example", "Name": "ExampleKey", "EncodedKey": "-----BEGIN PUBLIC KEY----- \nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAxPMbCA2Ks0lnd7IR+3pw \nwd3H/7jPGwj8bLUmore7bX+oeGpZ6QmLAe/1UOWcmZX2u70dYcSIzB1ofZtcn4cJ \nenHBAzO3ohBY/L1tQGJfS2A+omnN6H16VZE1JCK8XSJyfze7MDLcUyHZETdxuvRb \nA9X343/vMAuQPnhinFJ8Wdy8YBXSPpy7r95ylUQd9LfYTBzVZYG2tSesplcOkjM3\n2Uu +oMWxQAw1NINnSLPinMVsutJy6ZqlV3McWNWe4T+STGtWhrPNqJEn45sIcCx4\nq +kGZ2NQ0FyIyT2eiLKOX5Rgb/a36E/aMk4VoDsaenBQgG7WLTnstb9sr7MIhS6A\nrwIDAQAB\n----- END PUBLIC KEY-----\n", "Comment": "example public key" } } } • For API details, see CreatePublicKey in AWS CLI Command Reference. Java SDK for Java 2.x Note There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository. Actions 1088 Amazon CloudFront Developer Guide The following code example reads in a public key and uploads it to Amazon CloudFront. import org.slf4j.Logger; import org.slf4j.LoggerFactory; import software.amazon.awssdk.services.cloudfront.CloudFrontClient; import software.amazon.awssdk.services.cloudfront.model.CreatePublicKeyResponse; import software.amazon.awssdk.utils.IoUtils; import java.io.IOException; import java.io.InputStream; import java.util.UUID; public class CreatePublicKey { private static final Logger logger = LoggerFactory.getLogger(CreatePublicKey.class); public static String createPublicKey(CloudFrontClient cloudFrontClient, String publicKeyFileName) { try (InputStream is = CreatePublicKey.class.getClassLoader().getResourceAsStream(publicKeyFileName)) { String publicKeyString = IoUtils.toUtf8String(is); CreatePublicKeyResponse createPublicKeyResponse = cloudFrontClient .createPublicKey(b -> b.publicKeyConfig(c -> c .name("JavaCreatedPublicKey" + UUID.randomUUID()) .encodedKey(publicKeyString) .callerReference(UUID.randomUUID().toString()))); String createdPublicKeyId = createPublicKeyResponse.publicKey().id(); logger.info("Public key created with id: [{}]", createdPublicKeyId); return createdPublicKeyId; } catch (IOException e) { throw new RuntimeException(e); } } } • For API details, see CreatePublicKey in AWS SDK for Java 2.x API Reference. 
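The CLI and Java examples above also have a close Python (boto3) equivalent. The following is a minimal sketch, not one of this guide's official SDK examples; "public_key.pem" is a placeholder file name for your PEM-encoded public key.

import uuid
import boto3

cloudfront = boto3.client("cloudfront")

# Read the PEM-encoded public key from a local file (placeholder name).
with open("public_key.pem") as f:
    encoded_key = f.read()

response = cloudfront.create_public_key(
    PublicKeyConfig={
        # CallerReference must be unique per request.
        "CallerReference": str(uuid.uuid4()),
        "Name": "example-public-key",
        "EncodedKey": encoded_key,
        "Comment": "example public key",
    }
)
print(response["PublicKey"]["Id"])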
For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions. Actions 1089 Amazon CloudFront Developer Guide Use DeleteDistribution with an AWS SDK or CLI The following code examples show how to use DeleteDistribution. CLI AWS CLI To delete a CloudFront distribution The following example deletes the CloudFront distribution with the ID EDFDVBD6EXAMPLE. Before you can delete a distribution, you must disable it. To disable a distribution, use the update-distribution command. For more information, see the update-distribution examples. When a distribution is disabled, you can delete it. To delete a distribution, you must use the --if-match option to provide the distribution's ETag. To get the ETag, use the get- distribution or get-distribution-config command. aws cloudfront delete-distribution \ --id EDFDVBD6EXAMPLE \ --if-match E2QWRUHEXAMPLE When successful, this command has no output. • For API details, see DeleteDistribution in AWS CLI Command Reference. Java SDK for Java 2.x Note There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository. The following code example updates a distribution to disabled, uses a waiter that waits for the change to be deployed, then deletes the distribution. import org.slf4j.Logger; import org.slf4j.LoggerFactory; Actions 1090 Amazon CloudFront Developer Guide import software.amazon.awssdk.core.internal.waiters.ResponseOrException; import software.amazon.awssdk.services.cloudfront.CloudFrontClient; import software.amazon.awssdk.services.cloudfront.model.DeleteDistributionResponse; import software.amazon.awssdk.services.cloudfront.model.DistributionConfig; import software.amazon.awssdk.services.cloudfront.model.GetDistributionResponse; import software.amazon.awssdk.services.cloudfront.waiters.CloudFrontWaiter; public class DeleteDistribution { private static final Logger logger = LoggerFactory.getLogger(DeleteDistribution.class); public static void deleteDistribution(final CloudFrontClient cloudFrontClient, final String distributionId) { // First, disable the distribution by updating it. 
GetDistributionResponse response = cloudFrontClient.getDistribution(b -> b .id(distributionId)); String etag = response.eTag(); DistributionConfig distConfig = response.distribution().distributionConfig(); cloudFrontClient.updateDistribution(builder -> builder .id(distributionId) .distributionConfig(builder1 -> builder1 .cacheBehaviors(distConfig.cacheBehaviors()) .defaultCacheBehavior(distConfig.defaultCacheBehavior()) .enabled(false) .origins(distConfig.origins()) .comment(distConfig.comment()) .callerReference(distConfig.callerReference()) .defaultCacheBehavior(distConfig.defaultCacheBehavior()) .priceClass(distConfig.priceClass()) .aliases(distConfig.aliases()) .logging(distConfig.logging()) .defaultRootObject(distConfig.defaultRootObject()) .customErrorResponses(distConfig.customErrorResponses()) Actions 1091 Amazon CloudFront Developer Guide .httpVersion(distConfig.httpVersion()) .isIPV6Enabled(distConfig.isIPV6Enabled()) .restrictions(distConfig.restrictions()) .viewerCertificate(distConfig.viewerCertificate()) .webACLId(distConfig.webACLId()) .originGroups(distConfig.originGroups())) .ifMatch(etag)); logger.info("Distribution [{}] is DISABLED, waiting for deployment before deleting ...", distributionId); GetDistributionResponse distributionResponse; try (CloudFrontWaiter cfWaiter = CloudFrontWaiter.builder().client(cloudFrontClient).build()) { ResponseOrException<GetDistributionResponse> responseOrException = cfWaiter .waitUntilDistributionDeployed(builder -> builder.id(distributionId)).matched(); distributionResponse = responseOrException.response() .orElseThrow(() -> new RuntimeException("Could not disable distribution")); } DeleteDistributionResponse deleteDistributionResponse = cloudFrontClient .deleteDistribution(builder -> builder .id(distributionId) .ifMatch(distributionResponse.eTag())); if (deleteDistributionResponse.sdkHttpResponse().isSuccessful()) { logger.info("Distribution [{}] DELETED", distributionId); } } } • For API details, see DeleteDistribution in AWS SDK for Java 2.x API Reference. Actions 1092 Amazon CloudFront Developer Guide For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions. Use GetCloudFrontOriginAccessIdentity with a CLI The following code examples show how to use GetCloudFrontOriginAccessIdentity. CLI AWS CLI To get a CloudFront origin access identity The following example gets the CloudFront origin access identity (OAI) with the ID E74FTE3AEXAMPLE, including its ETag and the associated S3 canonical ID. The OAI ID is returned in the output of the create-cloud-front-origin-access-identity and list-cloud-front- origin-access-identities commands. aws cloudfront get-cloud-front-origin-access-identity --id E74FTE3AEXAMPLE Output: { "ETag": "E2QWRUHEXAMPLE", "CloudFrontOriginAccessIdentity": { "Id": "E74FTE3AEXAMPLE", "S3CanonicalUserId": "cd13868f797c227fbea2830611a26fe0a21ba1b826ab4bed9b7771c9aEXAMPLE", "CloudFrontOriginAccessIdentityConfig": { "CallerReference": "cli-example", "Comment": "Example OAI" } } } • For API details, see GetCloudFrontOriginAccessIdentity in AWS CLI Command Reference. Actions 1093 Amazon |
AmazonCloudFront_DevGuide-379 | AmazonCloudFront_DevGuide.pdf | 379 | GetCloudFrontOriginAccessIdentity with a CLI The following code examples show how to use GetCloudFrontOriginAccessIdentity. CLI AWS CLI To get a CloudFront origin access identity The following example gets the CloudFront origin access identity (OAI) with the ID E74FTE3AEXAMPLE, including its ETag and the associated S3 canonical ID. The OAI ID is returned in the output of the create-cloud-front-origin-access-identity and list-cloud-front- origin-access-identities commands. aws cloudfront get-cloud-front-origin-access-identity --id E74FTE3AEXAMPLE Output: { "ETag": "E2QWRUHEXAMPLE", "CloudFrontOriginAccessIdentity": { "Id": "E74FTE3AEXAMPLE", "S3CanonicalUserId": "cd13868f797c227fbea2830611a26fe0a21ba1b826ab4bed9b7771c9aEXAMPLE", "CloudFrontOriginAccessIdentityConfig": { "CallerReference": "cli-example", "Comment": "Example OAI" } } } • For API details, see GetCloudFrontOriginAccessIdentity in AWS CLI Command Reference. Actions 1093 Amazon CloudFront PowerShell Tools for PowerShell Developer Guide Example 1: This example returns a specific Amazon CloudFront origin access identity, specified by the -Id parameter. Although the -Id parameter is not required, if you do not specify it, no results are returned. Get-CFCloudFrontOriginAccessIdentity -Id E3XXXXXXXXXXRT Output: CloudFrontOriginAccessIdentityConfig Id S3CanonicalUserId ------------------------------------ -- ----------------- Amazon.CloudFront.Model.CloudFrontOr... E3XXXXXXXXXXRT 4b6e... • For API details, see GetCloudFrontOriginAccessIdentity in AWS Tools for PowerShell Cmdlet Reference. For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions. Use GetCloudFrontOriginAccessIdentityConfig with a CLI The following code examples show how to use GetCloudFrontOriginAccessIdentityConfig. CLI AWS CLI To get a CloudFront origin access identity configuration The following example gets metadata about the CloudFront origin access identity (OAI) with the ID E74FTE3AEXAMPLE, including its ETag. The OAI ID is returned in the output of the create-cloud-front-origin-access-identity and list-cloud-front-origin-access-identities commands. Actions 1094 Amazon CloudFront Developer Guide aws cloudfront get-cloud-front-origin-access-identity-config --id E74FTE3AEXAMPLE Output: { "ETag": "E2QWRUHEXAMPLE", "CloudFrontOriginAccessIdentityConfig": { "CallerReference": "cli-example", "Comment": "Example OAI" } } • For API details, see GetCloudFrontOriginAccessIdentityConfig in AWS CLI Command Reference. PowerShell Tools for PowerShell Example 1: This example returns configuration information about a single Amazon CloudFront origin access identity, specified by the -Id parameter. Errors occur if no -Id parameter is specified.. Get-CFCloudFrontOriginAccessIdentityConfig -Id E3XXXXXXXXXXRT Output: CallerReference Comment --------------- ------- mycallerreference: 2/1/2011 1:16:32 PM Caller reference: 2/1/2011 1:16:32 PM • For API details, see GetCloudFrontOriginAccessIdentityConfig in AWS Tools for PowerShell Cmdlet Reference. For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions. 
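For comparison with the CLI and Tools for PowerShell examples above, the following is a minimal Python (boto3) sketch of the same call. It is not one of this guide's official SDK examples, and it reuses the placeholder OAI ID E74FTE3AEXAMPLE from the output above.

import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.get_cloud_front_origin_access_identity_config(
    Id="E74FTE3AEXAMPLE"  # placeholder OAI ID
)
config = response["CloudFrontOriginAccessIdentityConfig"]
print(config["CallerReference"], config["Comment"])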
Actions 1095 Amazon CloudFront Developer Guide Use GetDistribution with a CLI The following code examples show how to use GetDistribution. CLI AWS CLI To get a CloudFront distribution The following get-distribution example gets the CloudFront distribution with the ID EDFDVBD6EXAMPLE, including its ETag. The distribution ID is returned in the create- distribution and list-distributions commands. aws cloudfront get-distribution \ --id EDFDVBD6EXAMPLE Output: { "ETag": "E2QWRUHEXAMPLE", "Distribution": { "Id": "EDFDVBD6EXAMPLE", "ARN": "arn:aws:cloudfront::123456789012:distribution/EDFDVBD6EXAMPLE", "Status": "Deployed", "LastModifiedTime": "2019-12-04T23:35:41.433Z", "InProgressInvalidationBatches": 0, "DomainName": "d111111abcdef8.cloudfront.net", "ActiveTrustedSigners": { "Enabled": false, "Quantity": 0 }, "DistributionConfig": { "CallerReference": "cli-example", "Aliases": { "Quantity": 0 }, "DefaultRootObject": "index.html", "Origins": { "Quantity": 1, "Items": [ { "Id": "amzn-s3-demo-bucket.s3.amazonaws.com-cli-example", Actions 1096 Amazon CloudFront Developer Guide "DomainName": "amzn-s3-demo-bucket.s3.amazonaws.com", "OriginPath": "", "CustomHeaders": { "Quantity": 0 }, "S3OriginConfig": { "OriginAccessIdentity": "" } } ] }, "OriginGroups": { "Quantity": 0 }, "DefaultCacheBehavior": { "TargetOriginId": "amzn-s3-demo-bucket.s3.amazonaws.com-cli- example", "ForwardedValues": { "QueryString": false, "Cookies": { "Forward": "none" }, "Headers": { "Quantity": 0 }, "QueryStringCacheKeys": { "Quantity": 0 } }, "TrustedSigners": { "Enabled": false, "Quantity": 0 }, "ViewerProtocolPolicy": "allow-all", "MinTTL": 0, "AllowedMethods": { "Quantity": 2, "Items": [ "HEAD", "GET" ], "CachedMethods": { "Quantity": 2, "Items": [ Actions 1097 Amazon CloudFront Developer Guide "HEAD", "GET" ] } }, "SmoothStreaming": false, "DefaultTTL": 86400, "MaxTTL": 31536000, "Compress": false, "LambdaFunctionAssociations": { "Quantity": 0 }, "FieldLevelEncryptionId": "" }, "CacheBehaviors": { "Quantity": 0 }, "CustomErrorResponses": { "Quantity": 0 }, "Comment": "", "Logging": { "Enabled": false, "IncludeCookies": false, "Bucket": "", "Prefix": "" }, "PriceClass": "PriceClass_All", "Enabled": true, "ViewerCertificate": { "CloudFrontDefaultCertificate": true, "MinimumProtocolVersion": "TLSv1", "CertificateSource": "cloudfront" }, "Restrictions": { "GeoRestriction": { "RestrictionType": "none", "Quantity": 0 } }, "WebACLId": "", "HttpVersion": "http2", "IsIPV6Enabled": true } Actions 1098 Amazon CloudFront } } Developer Guide • For API details, see GetDistribution in AWS CLI Command Reference. PowerShell Tools for PowerShell Example 1: Retrieves the information for a specific distribution. Get-CFDistribution -Id EXAMPLE0000ID • For API details, see GetDistribution in AWS Tools for PowerShell Cmdlet Reference. For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions. Use GetDistributionConfig with an AWS SDK or CLI The following code examples show how to use GetDistributionConfig. CLI AWS CLI To get a CloudFront distribution configuration The following example gets metadata about the CloudFront distribution with the ID EDFDVBD6EXAMPLE, including its ETag. The distribution ID is returned in the create- distribution and list-distributions commands. aws cloudfront get-distribution-config \ --id EDFDVBD6EXAMPLE Output: { "ETag": "E2QWRUHEXAMPLE", "DistributionConfig": { |
AmazonCloudFront_DevGuide-380 | AmazonCloudFront_DevGuide.pdf | 380 | Reference. For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions. Use GetDistributionConfig with an AWS SDK or CLI The following code examples show how to use GetDistributionConfig. CLI AWS CLI To get a CloudFront distribution configuration The following example gets metadata about the CloudFront distribution with the ID EDFDVBD6EXAMPLE, including its ETag. The distribution ID is returned in the create- distribution and list-distributions commands. aws cloudfront get-distribution-config \ --id EDFDVBD6EXAMPLE Output: { "ETag": "E2QWRUHEXAMPLE", "DistributionConfig": { Actions 1099 Amazon CloudFront Developer Guide "CallerReference": "cli-example", "Aliases": { "Quantity": 0 }, "DefaultRootObject": "index.html", "Origins": { "Quantity": 1, "Items": [ { "Id": "amzn-s3-demo-bucket.s3.amazonaws.com-cli-example", "DomainName": "amzn-s3-demo-bucket.s3.amazonaws.com", "OriginPath": "", "CustomHeaders": { "Quantity": 0 }, "S3OriginConfig": { "OriginAccessIdentity": "" } } ] }, "OriginGroups": { "Quantity": 0 }, "DefaultCacheBehavior": { "TargetOriginId": "amzn-s3-demo-bucket.s3.amazonaws.com-cli-example", "ForwardedValues": { "QueryString": false, "Cookies": { "Forward": "none" }, "Headers": { "Quantity": 0 }, "QueryStringCacheKeys": { "Quantity": 0 } }, "TrustedSigners": { "Enabled": false, "Quantity": 0 }, "ViewerProtocolPolicy": "allow-all", "MinTTL": 0, Actions 1100 Amazon CloudFront Developer Guide "AllowedMethods": { "Quantity": 2, "Items": [ "HEAD", "GET" ], "CachedMethods": { "Quantity": 2, "Items": [ "HEAD", "GET" ] } }, "SmoothStreaming": false, "DefaultTTL": 86400, "MaxTTL": 31536000, "Compress": false, "LambdaFunctionAssociations": { "Quantity": 0 }, "FieldLevelEncryptionId": "" }, "CacheBehaviors": { "Quantity": 0 }, "CustomErrorResponses": { "Quantity": 0 }, "Comment": "", "Logging": { "Enabled": false, "IncludeCookies": false, "Bucket": "", "Prefix": "" }, "PriceClass": "PriceClass_All", "Enabled": true, "ViewerCertificate": { "CloudFrontDefaultCertificate": true, "MinimumProtocolVersion": "TLSv1", "CertificateSource": "cloudfront" }, "Restrictions": { Actions 1101 Amazon CloudFront Developer Guide "GeoRestriction": { "RestrictionType": "none", "Quantity": 0 } }, "WebACLId": "", "HttpVersion": "http2", "IsIPV6Enabled": true } } • For API details, see GetDistributionConfig in AWS CLI Command Reference. PowerShell Tools for PowerShell Example 1: Retrieves the configuration for a specific distribution. Get-CFDistributionConfig -Id EXAMPLE0000ID • For API details, see GetDistributionConfig in AWS Tools for PowerShell Cmdlet Reference. Python SDK for Python (Boto3) Note There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository. 
class CloudFrontWrapper:
    """Encapsulates Amazon CloudFront operations."""

    def __init__(self, cloudfront_client):
        """
        :param cloudfront_client: A Boto3 CloudFront client
        """
        self.cloudfront_client = cloudfront_client

    def update_distribution(self):
        distribution_id = input(
            "This script updates the comment for a CloudFront distribution.\n"
            "Enter a CloudFront distribution ID: "
        )
        distribution_config_response = self.cloudfront_client.get_distribution_config(
            Id=distribution_id
        )
        distribution_config = distribution_config_response["DistributionConfig"]
        distribution_etag = distribution_config_response["ETag"]
        distribution_config["Comment"] = input(
            f"\nThe current comment for distribution {distribution_id} is "
            f"'{distribution_config['Comment']}'.\n"
            f"Enter a new comment: "
        )
        self.cloudfront_client.update_distribution(
            DistributionConfig=distribution_config,
            Id=distribution_id,
            IfMatch=distribution_etag,
        )
        print("Done!")

• For API details, see GetDistributionConfig in AWS SDK for Python (Boto3) API Reference.

For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.

Use ListCloudFrontOriginAccessIdentities with a CLI

The following code examples show how to use ListCloudFrontOriginAccessIdentities.

CLI
AWS CLI

To list CloudFront origin access identities

The following example gets a list of the CloudFront origin access identities (OAIs) in your AWS account:

aws cloudfront list-cloud-front-origin-access-identities

Output:

{
  "CloudFrontOriginAccessIdentityList": {
    "Items": [
      {
        "Id": "E74FTE3AEXAMPLE",
        "S3CanonicalUserId": "cd13868f797c227fbea2830611a26fe0a21ba1b826ab4bed9b7771c9aEXAMPLE",
        "Comment": "Example OAI"
      },
      {
        "Id": "EH1HDMBEXAMPLE",
        "S3CanonicalUserId": "1489f6f2e6faacaae7ff64c4c3e6956c24f78788abfc1718c3527c263bf7a17EXAMPLE",
        "Comment": "Test OAI"
      },
      {
        "Id": "E2X2C9TEXAMPLE",
        "S3CanonicalUserId": "cbfeebb915a64749f9be546a45b3fcfd3a31c779673c13c4dd460911ae402c2EXAMPLE",
        "Comment": "Example OAI #2"
      }
    ]
  }
}

• For API details, see ListCloudFrontOriginAccessIdentities in AWS CLI Command Reference.

PowerShell
Tools for PowerShell

Example 1: This example returns a list of Amazon CloudFront origin access identities. Because the -MaxItem parameter specifies a value of 2, the results include two identities.

Get-CFCloudFrontOriginAccessIdentityList -MaxItem 2

Output:

IsTruncated : True
Items       : {E326XXXXXXXXXT, E1YWXXXXXXX9B}
Marker      :
MaxItems    : 2
NextMarker  : E1YXXXXXXXXX9B
Quantity    : 2

• For API details, see ListCloudFrontOriginAccessIdentities in AWS Tools for PowerShell Cmdlet Reference.

For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.
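If you are scripting this in Python rather than PowerShell, the same ListCloudFrontOriginAccessIdentities operation is exposed through a Boto3 paginator, which handles the Marker/NextMarker bookkeeping shown in the PowerShell output above. A minimal sketch, assuming only a configured default credential chain:

import boto3

cloudfront = boto3.client("cloudfront")

# The paginator walks every page of results transparently.
paginator = cloudfront.get_paginator("list_cloud_front_origin_access_identities")
for page in paginator.paginate():
    # Items is omitted on empty pages, so default to an empty list.
    for oai in page["CloudFrontOriginAccessIdentityList"].get("Items", []):
        print(oai["Id"], oai["Comment"])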
Use ListDistributions with an AWS SDK or CLI

The following code examples show how to use ListDistributions.

CLI
AWS CLI

To list CloudFront distributions

The following example gets a list of the CloudFront distributions in your AWS account.

aws cloudfront list-distributions

Output:

{
  "DistributionList": {
    "Items": [
      {
        "Id": "E23YS8OEXAMPLE",
        "ARN": "arn:aws:cloudfront::123456789012:distribution/E23YS8OEXAMPLE",
        "Status": "Deployed",
        "LastModifiedTime": "2024-08-05T18:23:40.375000+00:00",
        "DomainName": "abcdefgh12ijk.cloudfront.net",
        "Aliases": {"Quantity": 0},
        "Origins": {
          "Quantity": 1,
          "Items": [
            {
              "Id": "amzn-s3-demo-bucket.s3.us-east-1.amazonaws.com",
              "DomainName": "amzn-s3-demo-bucket.s3.us-east-1.amazonaws.com",
              "OriginPath": "",
              "CustomHeaders": {"Quantity": 0},
              "S3OriginConfig": {"OriginAccessIdentity": ""},
              "ConnectionAttempts": 3,
              "ConnectionTimeout": 10,
              "OriginShield": {"Enabled": false},
              "OriginAccessControlId": "EIAP8PEXAMPLE"
            }
          ]
        },
        "OriginGroups": {"Quantity": 0},
        "DefaultCacheBehavior": {
          "TargetOriginId": "amzn-s3-demo-bucket.s3.us-east-1.amazonaws.com",
          "TrustedSigners": {"Enabled": false, "Quantity": 0},
          "TrustedKeyGroups": {"Enabled": false, "Quantity": 0},
          "ViewerProtocolPolicy": "allow-all",
          "AllowedMethods": {
            "Quantity": 2,
            "Items": ["HEAD", "GET"],
            "CachedMethods": {"Quantity": 2, "Items": ["HEAD", "GET"]}
          },
          "SmoothStreaming": false,
          "Compress": true,
          "LambdaFunctionAssociations": {"Quantity": 0},
          "FunctionAssociations": {"Quantity": 0},
          "FieldLevelEncryptionId": "",
          "CachePolicyId": "658327ea-f89d-4fab-a63d-7e886EXAMPLE"
        },
        "CacheBehaviors": {"Quantity": 0},
        "CustomErrorResponses": {"Quantity": 0},
        "Comment": "",
        "PriceClass": "PriceClass_All",
        "Enabled": true,
        "ViewerCertificate": {
          "CloudFrontDefaultCertificate": true,
          "SSLSupportMethod": "vip",
          "MinimumProtocolVersion": "TLSv1",
          "CertificateSource": "cloudfront"
        },
        "Restrictions": {"GeoRestriction": {"RestrictionType": "none", "Quantity": 0}},
        "WebACLId": "",
        "HttpVersion": "HTTP2",
        "IsIPV6Enabled": true,
        "Staging": false
      }
    ]
  }
}

• For API details, see ListDistributions in AWS CLI Command Reference.

PowerShell
Tools for PowerShell

Example 1: Returns distributions.

Get-CFDistributionList

• For API details, see ListDistributions in AWS Tools for PowerShell Cmdlet Reference.

Python
SDK for Python (Boto3)

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
class CloudFrontWrapper:
    """Encapsulates Amazon CloudFront operations."""

    def __init__(self, cloudfront_client):
        """
        :param cloudfront_client: A Boto3 CloudFront client
        """
        self.cloudfront_client = cloudfront_client

    def list_distributions(self):
        print("CloudFront distributions:\n")
        distributions = self.cloudfront_client.list_distributions()
        if distributions["DistributionList"]["Quantity"] > 0:
            for distribution in distributions["DistributionList"]["Items"]:
                print(f"Domain: {distribution['DomainName']}")
                print(f"Distribution Id: {distribution['Id']}")
                print(
                    f"Certificate Source: "
                    f"{distribution['ViewerCertificate']['CertificateSource']}"
                )
                if distribution["ViewerCertificate"]["CertificateSource"] == "acm":
                    print(
                        f"Certificate: {distribution['ViewerCertificate']['Certificate']}"
                    )
                print("")
        else:
            print("No CloudFront distributions detected.")

• For API details, see ListDistributions in AWS SDK for Python (Boto3) API Reference.

For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.
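Note that the list_distributions call in the wrapper above reads only a single page of results. In accounts with many distributions, a Boto3 paginator is the usual way to walk the full list. A minimal sketch, assuming only standard Boto3 behavior:

import boto3

cloudfront = boto3.client("cloudfront")

# ListDistributions is paginated; the paginator follows NextMarker automatically.
paginator = cloudfront.get_paginator("list_distributions")
for page in paginator.paginate():
    for dist in page["DistributionList"].get("Items", []):
        print(dist["Id"], dist["DomainName"], dist["Status"])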
"ViewerCertificate": { "CloudFrontDefaultCertificate": true, "MinimumProtocolVersion": "TLSv1", "CertificateSource": "cloudfront" }, "Restrictions": { Actions 1112 Amazon CloudFront Developer Guide "GeoRestriction": { "RestrictionType": "none", "Quantity": 0 } }, "WebACLId": "", "HttpVersion": "http1.1", "IsIPV6Enabled": true } } } Example 2: To update a CloudFront distribution The following example disables the CloudFront distribution with the ID EMLARXS9EXAMPLE by providing the distribution configuration in a JSON file named dist-config- disable.json. To update a distribution, you must use the --if-match option to provide the distribution's ETag. To get the ETag, use the get-distribution or get-distribution-config command. Note that the Enabled field is set to false in the JSON file. After you use the following example to disable a distribution, you can use the delete- distribution command to delete it. aws cloudfront update-distribution \ --id EMLARXS9EXAMPLE \ --if-match E2QWRUHEXAMPLE \ --distribution-config file://dist-config-disable.json Contents of dist-config-disable.json: { "CallerReference": "cli-1574382155-496510", "Aliases": { "Quantity": 0 }, "DefaultRootObject": |
AmazonCloudFront_DevGuide-382 | AmazonCloudFront_DevGuide.pdf | 382 | CloudFront distribution with the ID EMLARXS9EXAMPLE by providing the distribution configuration in a JSON file named dist-config- disable.json. To update a distribution, you must use the --if-match option to provide the distribution's ETag. To get the ETag, use the get-distribution or get-distribution-config command. Note that the Enabled field is set to false in the JSON file. After you use the following example to disable a distribution, you can use the delete- distribution command to delete it. aws cloudfront update-distribution \ --id EMLARXS9EXAMPLE \ --if-match E2QWRUHEXAMPLE \ --distribution-config file://dist-config-disable.json Contents of dist-config-disable.json: { "CallerReference": "cli-1574382155-496510", "Aliases": { "Quantity": 0 }, "DefaultRootObject": "index.html", "Origins": { "Quantity": 1, "Items": [ { "Id": "amzn-s3-demo-bucket.s3.amazonaws.com-1574382155-273939", "DomainName": "amzn-s3-demo-bucket.s3.amazonaws.com", Actions 1113 Amazon CloudFront Developer Guide "OriginPath": "", "CustomHeaders": { "Quantity": 0 }, "S3OriginConfig": { "OriginAccessIdentity": "" } } ] }, "OriginGroups": { "Quantity": 0 }, "DefaultCacheBehavior": { "TargetOriginId": "amzn-s3-demo- bucket.s3.amazonaws.com-1574382155-273939", "ForwardedValues": { "QueryString": false, "Cookies": { "Forward": "none" }, "Headers": { "Quantity": 0 }, "QueryStringCacheKeys": { "Quantity": 0 } }, "TrustedSigners": { "Enabled": false, "Quantity": 0 }, "ViewerProtocolPolicy": "allow-all", "MinTTL": 0, "AllowedMethods": { "Quantity": 2, "Items": [ "HEAD", "GET" ], "CachedMethods": { "Quantity": 2, "Items": [ "HEAD", Actions 1114 Amazon CloudFront Developer Guide "GET" ] } }, "SmoothStreaming": false, "DefaultTTL": 86400, "MaxTTL": 31536000, "Compress": false, "LambdaFunctionAssociations": { "Quantity": 0 }, "FieldLevelEncryptionId": "" }, "CacheBehaviors": { "Quantity": 0 }, "CustomErrorResponses": { "Quantity": 0 }, "Comment": "", "Logging": { "Enabled": false, "IncludeCookies": false, "Bucket": "", "Prefix": "" }, "PriceClass": "PriceClass_All", "Enabled": false, "ViewerCertificate": { "CloudFrontDefaultCertificate": true, "MinimumProtocolVersion": "TLSv1", "CertificateSource": "cloudfront" }, "Restrictions": { "GeoRestriction": { "RestrictionType": "none", "Quantity": 0 } }, "WebACLId": "", "HttpVersion": "http2", "IsIPV6Enabled": true } Actions 1115 Amazon CloudFront Output: { Developer Guide "ETag": "E9LHASXEXAMPLE", "Distribution": { "Id": "EMLARXS9EXAMPLE", "ARN": "arn:aws:cloudfront::123456789012:distribution/EMLARXS9EXAMPLE", "Status": "InProgress", "LastModifiedTime": "2019-12-06T18:32:35.553Z", "InProgressInvalidationBatches": 0, "DomainName": "d111111abcdef8.cloudfront.net", "ActiveTrustedSigners": { "Enabled": false, "Quantity": 0 }, "DistributionConfig": { "CallerReference": "cli-1574382155-496510", "Aliases": { "Quantity": 0 }, "DefaultRootObject": "index.html", "Origins": { "Quantity": 1, "Items": [ { "Id": "amzn-s3-demo- bucket.s3.amazonaws.com-1574382155-273939", "DomainName": "amzn-s3-demo-bucket.s3.amazonaws.com", "OriginPath": "", "CustomHeaders": { "Quantity": 0 }, "S3OriginConfig": { "OriginAccessIdentity": "" } } ] }, "OriginGroups": { "Quantity": 0 }, "DefaultCacheBehavior": { "TargetOriginId": "amzn-s3-demo- bucket.s3.amazonaws.com-1574382155-273939", Actions 1116 Amazon CloudFront Developer Guide "ForwardedValues": { "QueryString": false, "Cookies": { "Forward": "none" }, "Headers": { "Quantity": 0 }, 
"QueryStringCacheKeys": { "Quantity": 0 } }, "TrustedSigners": { "Enabled": false, "Quantity": 0 }, "ViewerProtocolPolicy": "allow-all", "MinTTL": 0, "AllowedMethods": { "Quantity": 2, "Items": [ "HEAD", "GET" ], "CachedMethods": { "Quantity": 2, "Items": [ "HEAD", "GET" ] } }, "SmoothStreaming": false, "DefaultTTL": 86400, "MaxTTL": 31536000, "Compress": false, "LambdaFunctionAssociations": { "Quantity": 0 }, "FieldLevelEncryptionId": "" }, "CacheBehaviors": { "Quantity": 0 }, Actions 1117 Amazon CloudFront Developer Guide "CustomErrorResponses": { "Quantity": 0 }, "Comment": "", "Logging": { "Enabled": false, "IncludeCookies": false, "Bucket": "", "Prefix": "" }, "PriceClass": "PriceClass_All", "Enabled": false, "ViewerCertificate": { "CloudFrontDefaultCertificate": true, "MinimumProtocolVersion": "TLSv1", "CertificateSource": "cloudfront" }, "Restrictions": { "GeoRestriction": { "RestrictionType": "none", "Quantity": 0 } }, "WebACLId": "", "HttpVersion": "http2", "IsIPV6Enabled": true } } } • For API details, see UpdateDistribution in AWS CLI Command Reference. Java SDK for Java 2.x Note There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository. Actions 1118 Amazon CloudFront Developer Guide import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.cloudfront.CloudFrontClient; import software.amazon.awssdk.services.cloudfront.model.GetDistributionRequest; import software.amazon.awssdk.services.cloudfront.model.GetDistributionResponse; import software.amazon.awssdk.services.cloudfront.model.Distribution; import software.amazon.awssdk.services.cloudfront.model.DistributionConfig; import software.amazon.awssdk.services.cloudfront.model.UpdateDistributionRequest; import software.amazon.awssdk.services.cloudfront.model.CloudFrontException; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get- started.html */ public class ModifyDistribution { public static void main(String[] args) { final String usage = """ Usage: <id>\s Where: id - the id value of the distribution.\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String id = args[0]; CloudFrontClient cloudFrontClient = CloudFrontClient.builder() .region(Region.AWS_GLOBAL) .build(); modDistribution(cloudFrontClient, id); cloudFrontClient.close(); } Actions 1119 Amazon CloudFront Developer Guide public static void modDistribution(CloudFrontClient cloudFrontClient, String idVal) { try { // Get the Distribution to modify. 
Java
SDK for Java 2.x

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.cloudfront.CloudFrontClient;
import software.amazon.awssdk.services.cloudfront.model.GetDistributionRequest;
import software.amazon.awssdk.services.cloudfront.model.GetDistributionResponse;
import software.amazon.awssdk.services.cloudfront.model.Distribution;
import software.amazon.awssdk.services.cloudfront.model.DistributionConfig;
import software.amazon.awssdk.services.cloudfront.model.UpdateDistributionRequest;
import software.amazon.awssdk.services.cloudfront.model.CloudFrontException;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class ModifyDistribution {
    public static void main(String[] args) {
        final String usage = """
                Usage:
                    <id>

                Where:
                    id - the id value of the distribution.
                """;

        if (args.length != 1) {
            System.out.println(usage);
            System.exit(1);
        }

        String id = args[0];
        CloudFrontClient cloudFrontClient = CloudFrontClient.builder()
                .region(Region.AWS_GLOBAL)
                .build();

        modDistribution(cloudFrontClient, id);
        cloudFrontClient.close();
    }

    public static void modDistribution(CloudFrontClient cloudFrontClient, String idVal) {
        try {
            // Get the Distribution to modify.
            GetDistributionRequest disRequest = GetDistributionRequest.builder()
                    .id(idVal)
                    .build();

            GetDistributionResponse response = cloudFrontClient.getDistribution(disRequest);
            Distribution disObject = response.distribution();
            DistributionConfig config = disObject.distributionConfig();

            // Create a new DistributionConfig object and add new values to comment and aliases
            DistributionConfig config1 = DistributionConfig.builder()
                    .aliases(config.aliases()) // You can pass in new values here
                    .comment("New Comment")
                    .cacheBehaviors(config.cacheBehaviors())
                    .priceClass(config.priceClass())
                    .defaultCacheBehavior(config.defaultCacheBehavior())
                    .enabled(config.enabled())
                    .callerReference(config.callerReference())
                    .logging(config.logging())
                    .originGroups(config.originGroups())
                    .origins(config.origins())
                    .restrictions(config.restrictions())
                    .defaultRootObject(config.defaultRootObject())
                    .webACLId(config.webACLId())
                    .httpVersion(config.httpVersion())
                    .viewerCertificate(config.viewerCertificate())
                    .customErrorResponses(config.customErrorResponses())
                    .build();

            UpdateDistributionRequest updateDistributionRequest = UpdateDistributionRequest.builder()
                    .distributionConfig(config1)
                    .id(disObject.id())
                    .ifMatch(response.eTag())
                    .build();

            cloudFrontClient.updateDistribution(updateDistributionRequest);

        } catch (CloudFrontException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
    }
}

• For API details, see UpdateDistribution in AWS SDK for Java 2.x API Reference.

Python
SDK for Python (Boto3)

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
class CloudFrontWrapper:
    """Encapsulates Amazon CloudFront operations."""

    def __init__(self, cloudfront_client):
        """
        :param cloudfront_client: A Boto3 CloudFront client
        """
        self.cloudfront_client = cloudfront_client

    def update_distribution(self):
        distribution_id = input(
            "This script updates the comment for a CloudFront distribution.\n"
            "Enter a CloudFront distribution ID: "
        )
        distribution_config_response = self.cloudfront_client.get_distribution_config(
            Id=distribution_id
        )
        distribution_config = distribution_config_response["DistributionConfig"]
        distribution_etag = distribution_config_response["ETag"]
        distribution_config["Comment"] = input(
            f"\nThe current comment for distribution {distribution_id} is "
            f"'{distribution_config['Comment']}'.\n"
            f"Enter a new comment: "
        )
        self.cloudfront_client.update_distribution(
            DistributionConfig=distribution_config,
            Id=distribution_id,
            IfMatch=distribution_etag,
        )
        print("Done!")

• For API details, see UpdateDistribution in AWS SDK for Python (Boto3) API Reference.

For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.

Scenarios for CloudFront using AWS SDKs

The following code examples show you how to implement common scenarios in CloudFront with AWS SDKs. These scenarios show you how to accomplish specific tasks by calling multiple functions within CloudFront or combined with other AWS services. Each scenario includes a link to the complete source code, where you can find instructions on how to set up and run the code.

Scenarios target an intermediate level of experience to help you understand service actions in context.

Examples
• Delete CloudFront signing resources using AWS SDK
• Create signed URLs and cookies using an AWS SDK

Delete CloudFront signing resources using AWS SDK

The following code example shows how to delete resources that are used to gain access to restricted content in an Amazon Simple Storage Service (Amazon S3) bucket.

Java
SDK for Java 2.x

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.awssdk.services.cloudfront.CloudFrontClient;
import software.amazon.awssdk.services.cloudfront.model.DeleteKeyGroupResponse;
import software.amazon.awssdk.services.cloudfront.model.DeleteOriginAccessControlResponse;
import software.amazon.awssdk.services.cloudfront.model.DeletePublicKeyResponse;
import software.amazon.awssdk.services.cloudfront.model.GetKeyGroupResponse;
import software.amazon.awssdk.services.cloudfront.model.GetOriginAccessControlResponse;
import software.amazon.awssdk.services.cloudfront.model.GetPublicKeyResponse;

public class DeleteSigningResources {
    private static final Logger logger = LoggerFactory.getLogger(DeleteSigningResources.class);

    public static void deleteOriginAccessControl(final CloudFrontClient cloudFrontClient,
            final String originAccessControlId) {
        GetOriginAccessControlResponse getResponse = cloudFrontClient
                .getOriginAccessControl(b -> b.id(originAccessControlId));
        DeleteOriginAccessControlResponse deleteResponse = cloudFrontClient.deleteOriginAccessControl(builder -> builder
                .id(originAccessControlId)
                .ifMatch(getResponse.eTag()));
        if (deleteResponse.sdkHttpResponse().isSuccessful()) {
            logger.info("Successfully deleted Origin Access Control [{}]", originAccessControlId);
        }
    }

    public static void deleteKeyGroup(final CloudFrontClient cloudFrontClient, final String keyGroupId) {
        GetKeyGroupResponse getResponse = cloudFrontClient.getKeyGroup(b -> b.id(keyGroupId));
        DeleteKeyGroupResponse deleteResponse = cloudFrontClient.deleteKeyGroup(builder -> builder
                .id(keyGroupId)
                .ifMatch(getResponse.eTag()));
        if (deleteResponse.sdkHttpResponse().isSuccessful()) {
            logger.info("Successfully deleted Key Group [{}]", keyGroupId);
        }
    }

    public static void deletePublicKey(final CloudFrontClient cloudFrontClient, final String publicKeyId) {
        GetPublicKeyResponse getResponse = cloudFrontClient.getPublicKey(b -> b.id(publicKeyId));
        DeletePublicKeyResponse deleteResponse = cloudFrontClient.deletePublicKey(builder -> builder
                .id(publicKeyId)
                .ifMatch(getResponse.eTag()));
        if (deleteResponse.sdkHttpResponse().isSuccessful()) {
            logger.info("Successfully deleted Public Key [{}]", publicKeyId);
        }
    }
}

• For API details, see the following topics in AWS SDK for Java 2.x API Reference.
  • DeleteKeyGroup
  • DeleteOriginAccessControl
  • DeletePublicKey

For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.
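The get-then-delete-with-ETag pattern shown in Java translates directly to the other SDKs. A hedged Boto3 sketch follows; the helper names are mine rather than from the AWS samples, but each pair of calls mirrors the corresponding Java method.

import boto3

cloudfront = boto3.client("cloudfront")

def delete_public_key(public_key_id):
    # Every delete call needs the resource's current ETag as IfMatch.
    etag = cloudfront.get_public_key(Id=public_key_id)["ETag"]
    cloudfront.delete_public_key(Id=public_key_id, IfMatch=etag)

def delete_key_group(key_group_id):
    etag = cloudfront.get_key_group(Id=key_group_id)["ETag"]
    cloudfront.delete_key_group(Id=key_group_id, IfMatch=etag)

def delete_origin_access_control(oac_id):
    etag = cloudfront.get_origin_access_control(Id=oac_id)["ETag"]
    cloudfront.delete_origin_access_control(Id=oac_id, IfMatch=etag)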
Create signed URLs and cookies using an AWS SDK

The following code example shows how to create signed URLs and cookies that allow access to restricted resources.

Java
SDK for Java 2.x

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the CannedSignerRequest class to sign URLs or cookies with a canned policy.

import software.amazon.awssdk.services.cloudfront.model.CannedSignerRequest;
import java.net.URL;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class CreateCannedPolicyRequest {
    public static CannedSignerRequest createRequestForCannedPolicy(String distributionDomainName,
            String fileNameToUpload, String privateKeyFullPath, String publicKeyId) throws Exception {
        String protocol = "https";
        String resourcePath = "/" + fileNameToUpload;

        String cloudFrontUrl = new URL(protocol, distributionDomainName, resourcePath).toString();
        Instant expirationDate = Instant.now().plus(7, ChronoUnit.DAYS);
        Path path = Paths.get(privateKeyFullPath);

        return CannedSignerRequest.builder()
                .resourceUrl(cloudFrontUrl)
                .privateKey(path)
                .keyPairId(publicKeyId)
                .expirationDate(expirationDate)
                .build();
    }
}

Use the CustomSignerRequest class to sign URLs or cookies with a custom policy. The activeDate and ipRange are optional methods.

import software.amazon.awssdk.services.cloudfront.model.CustomSignerRequest;
import java.net.URL;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class CreateCustomPolicyRequest {
    public static CustomSignerRequest createRequestForCustomPolicy(String distributionDomainName,
            String fileNameToUpload, String privateKeyFullPath, String publicKeyId) throws Exception {
        String protocol = "https";
        String resourcePath = "/" + fileNameToUpload;

        String cloudFrontUrl = new URL(protocol, distributionDomainName, resourcePath).toString();
        Instant expireDate = Instant.now().plus(7, ChronoUnit.DAYS);
        // URL will be accessible tomorrow using the signed URL.
        Instant activeDate = Instant.now().plus(1, ChronoUnit.DAYS);
        Path path = Paths.get(privateKeyFullPath);

        return CustomSignerRequest.builder()
                .resourceUrl(cloudFrontUrl)
                // .resourceUrlPattern("https://*.example.com/*") // Optional.
                .privateKey(path)
                .keyPairId(publicKeyId)
                .expirationDate(expireDate)
                .activeDate(activeDate) // Optional.
                // .ipRange("192.168.0.1/24") // Optional.
                .build();
    }
}
The following example demonstrates the use of the CloudFrontUtilities class to produce signed cookies and URLs. View this code example on GitHub.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.awssdk.services.cloudfront.CloudFrontUtilities;
import software.amazon.awssdk.services.cloudfront.cookie.CookiesForCannedPolicy;
import software.amazon.awssdk.services.cloudfront.cookie.CookiesForCustomPolicy;
import software.amazon.awssdk.services.cloudfront.model.CannedSignerRequest;
import software.amazon.awssdk.services.cloudfront.model.CustomSignerRequest;
import software.amazon.awssdk.services.cloudfront.url.SignedUrl;

public class SigningUtilities {
    private static final Logger logger = LoggerFactory.getLogger(SigningUtilities.class);
    private static final CloudFrontUtilities cloudFrontUtilities = CloudFrontUtilities.create();

    public static SignedUrl signUrlForCannedPolicy(CannedSignerRequest cannedSignerRequest) {
        SignedUrl signedUrl = cloudFrontUtilities.getSignedUrlWithCannedPolicy(cannedSignerRequest);
        logger.info("Signed URL: [{}]", signedUrl.url());
        return signedUrl;
    }

    public static SignedUrl signUrlForCustomPolicy(CustomSignerRequest customSignerRequest) {
        SignedUrl signedUrl = cloudFrontUtilities.getSignedUrlWithCustomPolicy(customSignerRequest);
        logger.info("Signed URL: [{}]", signedUrl.url());
        return signedUrl;
    }

    public static CookiesForCannedPolicy getCookiesForCannedPolicy(CannedSignerRequest cannedSignerRequest) {
        CookiesForCannedPolicy cookiesForCannedPolicy = cloudFrontUtilities
                .getCookiesForCannedPolicy(cannedSignerRequest);
        logger.info("Cookie EXPIRES header [{}]", cookiesForCannedPolicy.expiresHeaderValue());
        logger.info("Cookie KEYPAIR header [{}]", cookiesForCannedPolicy.keyPairIdHeaderValue());
        logger.info("Cookie SIGNATURE header [{}]", cookiesForCannedPolicy.signatureHeaderValue());
        return cookiesForCannedPolicy;
    }

    public static CookiesForCustomPolicy getCookiesForCustomPolicy(CustomSignerRequest customSignerRequest) {
        CookiesForCustomPolicy cookiesForCustomPolicy = cloudFrontUtilities
                .getCookiesForCustomPolicy(customSignerRequest);
        logger.info("Cookie POLICY header [{}]", cookiesForCustomPolicy.policyHeaderValue());
        logger.info("Cookie KEYPAIR header [{}]", cookiesForCustomPolicy.keyPairIdHeaderValue());
        logger.info("Cookie SIGNATURE header [{}]", cookiesForCustomPolicy.signatureHeaderValue());
        return cookiesForCustomPolicy;
    }
}

• For API details, see CloudFrontUtilities in AWS SDK for Java 2.x API Reference.

For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.
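In Python, the equivalent of CloudFrontUtilities is botocore's CloudFrontSigner. The following sketch signs a URL with a canned policy; the key pair ID, key file path, and distribution domain are placeholders, and it assumes the third-party cryptography package is installed for the RSA signing callback.

from datetime import datetime, timedelta, timezone

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = "K2JCJMDEHXQW5F"        # placeholder public key ID
PRIVATE_KEY_FILE = "private_key.pem"  # placeholder path

def rsa_signer(message):
    # CloudFront signed URLs use an RSA SHA-1 signature over the policy document.
    with open(PRIVATE_KEY_FILE, "rb") as key_file:
        private_key = serialization.load_pem_private_key(key_file.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/index.html",
    date_less_than=datetime.now(timezone.utc) + timedelta(days=7),
)
print(signed_url)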
CloudFront Functions examples for CloudFront

The following code examples show how to use CloudFront with AWS SDKs.

Examples
• Add HTTP security headers to a CloudFront Functions viewer response event
• Add a CORS header to a CloudFront Functions viewer response event
• Add a cache control header to a CloudFront Functions viewer response event
• Add a true client IP header to a CloudFront Functions viewer request event
• Add an origin header to a CloudFront Functions viewer request event
• Add index.html to request URLs without a file name in a CloudFront Functions viewer request event
• Normalize query string parameters in a CloudFront Functions viewer request
• Redirect to a new URL in a CloudFront Functions viewer request event
• Rewrite a request URI based on KeyValueStore configuration for a CloudFront Functions viewer request event
• Route requests to an origin closer to the viewer in a CloudFront Functions viewer request event
• Use key-value pairs in a CloudFront Functions viewer request
• Validate a simple token in a CloudFront Functions viewer request

Add HTTP security headers to a CloudFront Functions viewer response event

The following code example shows how to add HTTP security headers to a CloudFront Functions viewer response event.

JavaScript
JavaScript runtime 2.0 for CloudFront Functions

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the CloudFront Functions examples repository.

async function handler(event) {
    var response = event.response;
    var headers = response.headers;

    // Set HTTP security headers
    // Since JavaScript doesn't allow for hyphens in variable names, we use the dict["key"] notation
    headers['strict-transport-security'] = { value: 'max-age=63072000; includeSubdomains; preload'};
    headers['content-security-policy'] = { value: "default-src 'none'; img-src 'self'; script-src 'self'; style-src 'self'; object-src 'none'; frame-ancestors 'none'"};
    headers['x-content-type-options'] = { value: 'nosniff'};
    headers['x-frame-options'] = {value: 'DENY'};
    headers['x-xss-protection'] = {value: '1; mode=block'};
    headers['referrer-policy'] = {value: 'same-origin'};

    // Return the response to viewers
    return response;
}

For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.
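Before associating a function like the one above with a distribution, you can exercise it against a sample event. The following is a hedged Boto3 sketch using the TestFunction API; the function name is a placeholder and the event object is a minimal viewer-response event (real events carry more fields), so verify the shapes against the current documentation.

import json

import boto3

cloudfront = boto3.client("cloudfront")
function_name = "add-security-headers"  # placeholder

# TestFunction requires the function's current ETag.
etag = cloudfront.describe_function(Name=function_name, Stage="DEVELOPMENT")["ETag"]

event = {
    "version": "1.0",
    "context": {"eventType": "viewer-response"},
    "viewer": {"ip": "198.51.100.1"},
    "request": {"method": "GET", "uri": "/index.html", "headers": {}},
    "response": {"statusCode": 200, "headers": {}},
}

result = cloudfront.test_function(
    Name=function_name,
    IfMatch=etag,
    Stage="DEVELOPMENT",
    EventObject=json.dumps(event).encode(),
)
print(result["TestResult"]["FunctionOutput"])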
Add a CORS header to a CloudFront Functions viewer response event

The following code example shows how to add a CORS header to a CloudFront Functions viewer response event.

JavaScript
JavaScript runtime 2.0 for CloudFront Functions

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the CloudFront Functions examples repository.

async function handler(event) {
    var request = event.request;
    var response = event.response;

    // If Access-Control-Allow-Origin CORS header is missing, add it.
    // Since JavaScript doesn't allow for hyphens in variable names, we use the dict["key"] notation.
    if (!response.headers['access-control-allow-origin'] && request.headers['origin']) {
        response.headers['access-control-allow-origin'] = {value: request.headers['origin'].value};
        console.log("Access-Control-Allow-Origin was missing, adding it now.");
    }

    return response;
}

For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.

Add a cache control header to a CloudFront Functions viewer response event

The following code example shows how to add a cache control header to a CloudFront Functions viewer response event.

JavaScript
JavaScript runtime 2.0 for CloudFront Functions

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the CloudFront Functions examples repository.

async function handler(event) {
    var response = event.response;
    var headers = response.headers;

    if (response.statusCode >= 200 && response.statusCode < 400) {
        // Set the cache-control header
        headers['cache-control'] = {value: 'public, max-age=63072000'};
    }

    // Return response to viewers
    return response;
}

For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.

Add a true client IP header to a CloudFront Functions viewer request event

The following code example shows how to add a true client IP header to a CloudFront Functions viewer request event.

JavaScript
JavaScript runtime 2.0 for CloudFront Functions

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the CloudFront Functions examples repository.

async function handler(event) {
    var request = event.request;
    var clientIP = event.viewer.ip;

    // Add the true-client-ip header to the incoming request
    request.headers['true-client-ip'] = {value: clientIP};

    return request;
}

For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.

Add an origin header to a CloudFront Functions viewer request event

The following code example shows how to add an origin header to a CloudFront Functions viewer request event.

JavaScript
JavaScript runtime 2.0 for CloudFront Functions

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the CloudFront Functions examples repository.
async function handler(event) {
    var request = event.request;
    var headers = request.headers;
    var host = request.headers.host.value;

    // If origin header is missing, set it equal to the host header.
    if (!headers.origin)
        headers.origin = {value:`https://${host}`};

    return request;
}

For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.

Add index.html to request URLs without a file name in a CloudFront Functions viewer request event

The following code example shows how to add index.html to request URLs without a file name in a CloudFront Functions viewer request event.

JavaScript
JavaScript runtime 2.0 for CloudFront Functions

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the CloudFront Functions examples repository.

async function handler(event) {
    var request = event.request;
    var uri = request.uri;

    // Check whether the URI is missing a file name.
    if (uri.endsWith('/')) {
        request.uri += 'index.html';
    }
    // Check whether the URI is missing a file extension.
    else if (!uri.includes('.')) {
        request.uri += '/index.html';
    }

    return request;
}

For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.
Normalize query string parameters in a CloudFront Functions viewer request

The following code example shows how to normalize query string parameters in a CloudFront Functions viewer request.

JavaScript
JavaScript runtime 2.0 for CloudFront Functions

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the CloudFront Functions examples repository.

function handler(event) {
    var qs = [];
    for (var key in event.request.querystring) {
        if (event.request.querystring[key].multiValue) {
            event.request.querystring[key].multiValue.forEach((mv) => {qs.push(key + "=" + mv.value)});
        } else {
            qs.push(key + "=" + event.request.querystring[key].value);
        }
    }
    event.request.querystring = qs.sort().join('&');
    return event.request;
}

For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.

Redirect to a new URL in a CloudFront Functions viewer request event

The following code example shows how to redirect to a new URL in a CloudFront Functions viewer request event.

JavaScript
JavaScript runtime 2.0 for CloudFront Functions

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the CloudFront Functions examples repository.

async function handler(event) {
    var request = event.request;
    var headers = request.headers;
    var host = request.headers.host.value;
    var country = 'DE'; // Choose a country code
    var newurl = `https://${host}/de/index.html`; // Change the redirect URL to your choice

    if (headers['cloudfront-viewer-country']) {
        var countryCode = headers['cloudfront-viewer-country'].value;
        if (countryCode === country) {
            var response = {
                statusCode: 302,
                statusDescription: 'Found',
                headers: {
                    "location": { "value": newurl }
                }
            };
            return response;
        }
    }
    return request;
}

For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.

Rewrite a request URI based on KeyValueStore configuration for a CloudFront Functions viewer request event

The following code example shows how to rewrite a request URI based on KeyValueStore configuration for a CloudFront Functions viewer request event.

JavaScript
JavaScript runtime 2.0 for CloudFront Functions

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the CloudFront Functions examples repository.
import cf from 'cloudfront';

// (Optional) Replace KVS_ID with actual KVS ID
const kvsId = "KVS_ID";
// enable stickiness by setting a cookie from origin or using another edge function
const stickinessCookieName = "appversion";
// set to true to enable console logging
const loggingEnabled = false;

// function rewrites the request uri based on configuration in KVS
// example config in KVS in key:value format
// "latest": {"a_weightage": .8, "a_url": "v1", "b_url": "v2"}
// given above key and value in KVS the request uri will be rewritten
// for example http(s)://domain/latest/something/else will be rewritten as
// http(s)://domain/v1/something/else or http(s)://domain/v2/something/else depending on weightage
// if no configuration is found, then the request is returned as is
async function handler(event) {
    // NOTE: This example function is for a viewer request event trigger.
    // Choose viewer request for event trigger when you associate this function with a distribution.
    const request = event.request;
    const pathSegments = request.uri.split('/');
    const key = pathSegments[1];

    // if empty path segment or if there is valid stickiness cookie
    // then skip call to KVS and let the request continue.
    if (!key || hasValidSticknessCookie(request.cookies[stickinessCookieName], key)) {
        return event.request;
    }
    try {
        // get the prefix replacement from KVS
        const replacement = await getPathPrefixByWeightage(key);
        if (!replacement) {
            return event.request;
        }
        // Replace the first path with the replacement
        pathSegments[1] = replacement;
        log(`using prefix ${pathSegments[1]}`);
        const newUri = pathSegments.join('/');
        log(`${request.uri} -> ${newUri}`);
        request.uri = newUri;
        return request;
    } catch (err) {
        // No change to the path if the key is not found or any other error
        log(`request uri: ${request.uri}, error: ${err}`);
    }
    // no change to path - return request
    return event.request;
}

// function to get the prefix from KVS
async function getPathPrefixByWeightage(key) {
    const kvsHandle = cf.kvs(kvsId);
    // get the weightage config from KVS
    const kvsResponse = await kvsHandle.get(key);
    const weightageConfig = JSON.parse(kvsResponse);
    // no configuration - return null
    if (!weightageConfig || !isFinite(weightageConfig.a_weightage)) {
        return null;
    }
    // return the url based on weightage
    // return null if no url is configured
    if (Math.random() <= weightageConfig.a_weightage) {
        return weightageConfig.a_url ? weightageConfig.a_url : null;
    } else {
        return weightageConfig.b_url ? weightageConfig.b_url : null;
    }
}

// function to check if the stickiness cookie is valid
function hasValidSticknessCookie(stickinessCookie, pathSegment) {
    // if the value exists and it matches pathSegment
    return (stickinessCookie && stickinessCookie.value === pathSegment);
}

function log(message) {
    if (loggingEnabled) {
        console.log(message);
    }
}

For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.
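To try this example, the weightage configuration must exist in the key value store first. The following is a hedged Boto3 sketch for seeding it; the operation and parameter names are as I understand the cloudfront-keyvaluestore data-plane client (a separate client from "cloudfront"), the ARN is a placeholder, and note that this endpoint signs with SigV4A, which may require the optional awscrt package. Verify against the current Boto3 documentation.

import json

import boto3

# Data-plane client for KeyValueStore; distinct from the regular "cloudfront" client.
kvs = boto3.client("cloudfront-keyvaluestore")
kvs_arn = "arn:aws:cloudfront::123456789012:key-value-store/KVS_ID"  # placeholder

# Writes use optimistic concurrency keyed on the store's current ETag.
etag = kvs.describe_key_value_store(KvsARN=kvs_arn)["ETag"]

value = {"a_weightage": 0.8, "a_url": "v1", "b_url": "v2"}
kvs.put_key(KvsARN=kvs_arn, Key="latest", Value=json.dumps(value), IfMatch=etag)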
Route requests to an origin closer to the viewer in a CloudFront Functions viewer request event

The following code example shows how to route requests to an origin closer to the viewer in a CloudFront Functions viewer request event.

JavaScript
JavaScript runtime 2.0 for CloudFront Functions

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the CloudFront Functions examples repository.

import cf from 'cloudfront';

function handler(event) {
    const request = event.request;
    const headers = request.headers;
    const country = headers['cloudfront-viewer-country'] && headers['cloudfront-viewer-country'].value;

    // List of Regions with S3 buckets containing content
    const countryToRegion = {
        'DE': 'eu-central-1',
        'IE': 'eu-west-1',
        'GB': 'eu-west-2',
        'FR': 'eu-west-3',
        'JP': 'ap-northeast-1',
        'IN': 'ap-south-1'
    };

    const DEFAULT_REGION = 'us-east-1';
    const selectedRegion = (country && countryToRegion[country]) || DEFAULT_REGION;
    const domainName = `cloudfront-functions-demo-bucket-in-${selectedRegion}.s3.${selectedRegion}.amazonaws.com`;

    cf.updateRequestOrigin({
        "domainName": domainName,
        "originAccessControlConfig": {
            "enabled": true,
            "region": selectedRegion,
            "signingBehavior": "always",
            "signingProtocol": "sigv4",
            "originType": "s3"
        },
    });

    return request;
}

For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.

Use key-value pairs in a CloudFront Functions viewer request

The following code example shows how to use key-value pairs in a CloudFront Functions viewer request.

JavaScript
JavaScript runtime 2.0 for CloudFront Functions

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the CloudFront Functions examples repository.
import cf from 'cloudfront';

// This fails if there is no key value store associated with the function
const kvsHandle = cf.kvs();

// Remember to associate the KVS with your function before referencing KVS in your code.
// https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/kvs-with-functions-associate.html
async function handler(event) {
    const request = event.request;
    // Use the first segment of the pathname as key
    // For example http(s)://domain/<key>/something/else
    const pathSegments = request.uri.split('/');
    const key = pathSegments[1];
    try {
        // Replace the first path of the pathname with the value of the key
        // For example http(s)://domain/<value>/something/else
        pathSegments[1] = await kvsHandle.get(key);
        const newUri = pathSegments.join('/');
        console.log(`${request.uri} -> ${newUri}`);
        request.uri = newUri;
    } catch (err) {
        // No change to the pathname if the key is not found
        console.log(`${request.uri} | ${err}`);
    }
    return request;
}

For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.

Validate a simple token in a CloudFront Functions viewer request

The following code example shows how to validate a simple token in a CloudFront Functions viewer request.

JavaScript
JavaScript runtime 2.0 for CloudFront Functions

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the CloudFront Functions examples repository.
import crypto from 'crypto';
import cf from 'cloudfront';

// Response when JWT is not valid.
const response401 = {
    statusCode: 401,
    statusDescription: 'Unauthorized'
};

// Remember to associate the KVS with your function before referencing it in your code.
// https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/kvs-with-functions-associate.html
const kvsKey = 'jwt.secret';

// set to true to enable console logging
const loggingEnabled = false;

function jwt_decode(token, key, noVerify, algorithm) {
    // check token
    if (!token) {
        throw new Error('No token supplied');
    }
    // check segments
    const segments = token.split('.');
    if (segments.length !== 3) {
        throw new Error('Not enough or too many segments');
    }

    // All segment should be base64
    const headerSeg = segments[0];
    const payloadSeg = segments[1];
    const signatureSeg = segments[2];

    // base64 decode and parse JSON
    const payload = JSON.parse(_base64urlDecode(payloadSeg));

    if (!noVerify) {
        const signingMethod = 'sha256';
        const signingType = 'hmac';

        // Verify signature. `sign` will return base64 string.
        const signingInput = [headerSeg, payloadSeg].join('.');
        if (!_verify(signingInput, key, signingMethod, signingType, signatureSeg)) {
            throw new Error('Signature verification failed');
        }

        // Support for nbf and exp claims.
        // According to the RFC, they should be in seconds.
        if (payload.nbf && Date.now() < payload.nbf*1000) {
            throw new Error('Token not yet active');
        }
        if (payload.exp && Date.now() > payload.exp*1000) {
            throw new Error('Token expired');
        }
    }
    return payload;
}

// Function to ensure a constant time comparison to prevent
// timing side channels.
function _constantTimeEquals(a, b) {
    if (a.length != b.length) {
        return false;
    }
    let xor = 0;
    for (let i = 0; i < a.length; i++) {
        xor |= (a.charCodeAt(i) ^ b.charCodeAt(i));
    }
    return 0 === xor;
}

function _verify(input, key, method, type, signature) {
    if (type === "hmac") {
        return _constantTimeEquals(signature, _sign(input, key, method));
    } else {
        throw new Error('Algorithm type not recognized');
    }
}

function _sign(input, key, method) {
    return crypto.createHmac(method, key).update(input).digest('base64url');
}

function _base64urlDecode(str) {
    return Buffer.from(str, 'base64url');
}

async function handler(event) {
    let request = event.request;

    // Secret key used to verify JWT token.
    // Update with your own key.
    const secret_key = await getSecret();
    if (!secret_key) {
        return response401;
    }

    // If no JWT token, then generate HTTP redirect 401 response.
    if (!request.querystring.jwt) {
        log("Error: No JWT in the querystring");
        return response401;
    }

    const jwtToken = request.querystring.jwt.value;
    try {
        jwt_decode(jwtToken, secret_key);
    } catch(e) {
        log(e);
        return response401;
    }

    // Remove the JWT from the query string if valid and return.
    delete request.querystring.jwt;
    log("Valid JWT token");
    return request;
}

// get secret from key value store
async function getSecret() {
    // initialize cloudfront kv store and get the key value
    try {
        const kvsHandle = cf.kvs();
        return await kvsHandle.get(kvsKey);
    } catch (err) {
        log(`Error reading value for key: ${kvsKey}, error: ${err}`);
        return null;
    }
}

function log(message) {
    if (loggingEnabled) {
        console.log(message);
    }
}
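To exercise this function you need a token signed with the same secret that is stored under the jwt.secret key. The following stand-alone Python sketch produces a compatible HS256 token (unpadded base64url segments, with nbf and exp in seconds, matching the verifier above). It is a testing aid of my own, not part of the AWS sample.

import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # Base64url without padding, matching the function's decoder.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(secret: str, lifetime_seconds: int = 300) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    payload = {"nbf": now, "exp": now + lifetime_seconds}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    signature = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

# Append the token to a request as ?jwt=<token>; the secret must match the
# value stored under jwt.secret in the function's key value store.
print(make_token("my-test-secret"))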
For a complete list of AWS SDK developer guides and code examples, see Using CloudFront with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.

Document history

The following table describes the important changes made to CloudFront documentation. For notification of updates, you can subscribe to the RSS feed.

• Added CloudFront Functions support for CloudFront SaaS Manager (May 2, 2025): Added helper functions and the endpoint field for the context object.
• Updates to standard logging (v2) (May 1, 2025): Added the {distributionid} partition variable to support sending access logs to AWS Glue.
• Updates to CloudFront managed policies (April 28, 2025): Added ACM permissions to the CloudFrontReadOnlyAccess and CloudFrontFullAccess managed policies.
• Added support for multi-tenant distribution and distribution tenants (April 28, 2025): You can create a multi-tenant distribution to set common distribution settings based on your origin type. Then, you can reuse the multi-tenant distribution to create multiple distribution tenants that share those settings. You can then customize specific distribution tenants as you add additional websites or applications.
• Updates for Lambda@Edge functions (April 7, 2025): Lambda@Edge functions now support advanced logging controls and customizing the CloudWatch log group name.
• Anycast static IPs (April 4, 2025): You can use Anycast static IPs to enable routing of apex domains directly to your CloudFront distributions.
• Added additional helper methods for origin modification (April 2, 2025): Added the selectRequestOriginById() and createRequestOriginGroup() CloudFront Functions helper methods.
AmazonCloudFront_DevGuide-389 | AmazonCloudFront_DevGuide.pdf | 389 | Helper method for origin modification (November 21, 2024): Added new CloudFront Functions helper method for origin modification.
VPC origins (November 20, 2024): Use CloudFront VPC origins to restrict access to an Application Load Balancer, Network Load Balancer, or EC2 instance origin.
Updates to managed policy (November 20, 2024): Updated managed policy CloudFrontFullAccess.
Anycast static IPs (November 20, 2024): You can request Anycast static IPs from CloudFront to use with your distributions.
Added support for standard logging (November 20, 2024): CloudFront supports standard logging (v2) and sending your logs to Amazon CloudWatch Logs, Amazon Data Firehose, and Amazon Simple Storage Service (Amazon S3).
Added support for gRPC (November 20, 2024): CloudFront now supports gRPC requests for your distribution.
Added new managed policy for VPC origins (November 20, 2024): Added new managed policy AWSCloudFrontVPCOriginServiceRolePolicy.
Lambda@Edge supports newer runtime version (November 13, 2024): Lambda@Edge now supports Lambda functions with the Python 3.13 runtime.
Evaluate with AWS Config Rules (September 20, 2024): Evaluate your CloudFront configurations with AWS Config Rules.
Added more troubleshooting content (August 26, 2024): Added more troubleshooting content for HTTP 4xx and 5xx error response status codes.
Added new managed cache policies (May 24, 2024): Added new managed cache policies UseOriginCacheControlHeaders and UseOriginCacheControlHeaders-QueryStrings.
Added origin access control support (April 11, 2024): You can now create an origin access control (OAC) for AWS Elemental MediaPackage V2 and AWS Lambda function URL.
Real-time log fields for CMCD (April 9, 2024): Added 18 common media client data (CMCD) fields for real-time logging.
Getting started with a basic CloudFront distribution (March 18, 2024): Updated tutorial for a basic distribution that uses an Amazon S3 origin with origin access control (OAC).
Code examples for CloudFront using AWS SDKs (February 16, 2024): Added code examples that show how to use CloudFront with an AWS software development kit (SDK). The examples are divided into code excerpts that show you how to call individual service functions and examples that show you how to accomplish a specific task by calling multiple functions within the same service.
AWS managed policy update (December 19, 2023): The CloudFrontReadOnlyAccess and CloudFrontFullAccess IAM policies now support KeyValueStore operations.
JavaScript runtime 2.0 (November 21, 2023): Added JavaScript runtime 2.0 features for CloudFront Functions.
CloudFront KeyValueStore (November 21, 2023): Amazon CloudFront now supports CloudFront KeyValueStore. This feature is a secure, global, low-latency key value datastore that allows read access from within CloudFront Functions. You can enable advanced customizable logic at CloudFront edge locations.
Lambda@Edge supports newer runtime version (November 15, 2023): Lambda@Edge now supports Lambda functions with the Node.js 20 runtime.
Security dashboard (November 8, 2023): CloudFront creates a security dashboard when you create a distribution. Enable AWS WAF, manage geo restrictions, and view high-level data for requests, bots, and logs.
Sorting query strings in functions (October 3, 2023): CloudFront now supports query string sorting using CloudFront Functions.
AWS WAF security recommendations (September 26, 2023): Amazon CloudFront now displays AWS WAF security recommendations on the CloudFront console.
Support for serving stale (expired) cache content (May 15, 2023): CloudFront supports the Stale-While-Revalidate and Stale-If-Error cache control directives.
Enable AWS WAF protections with one click (May 10, 2023): A streamlined method for adding AWS WAF security protections to CloudFront distributions.
Enable ACLs for new S3 buckets used for standard logs (April 11, 2023): Added note and links to address the default ACL setting for new S3 buckets.
Create an origin using Amazon S3 Object Lambda (March 31, 2023): You can use an Amazon S3 Object Lambda Access Point alias as an origin for your distribution.
Customize HTTP status and body using CloudFront Functions (March 29, 2023): You can use CloudFront Functions to update the viewer response status code and replace or remove the response body.
Added CORS headers wildcard options for ports (March 20, 2023): You can now include wildcard configurations for ports in CORS access-control headers.
Added new link for the AWS Security Hub User Guide (March 9, 2023): Updated language and added link to the reorganized Amazon CloudFront controls in the AWS Security Hub User Guide. |
AmazonCloudFront_DevGuide-390 | AmazonCloudFront_DevGuide.pdf | 390 | CloudFront now supports block lists ("all except") in origin request policies (February 22, 2023): Use block lists in origin request policies to include all query strings, HTTP headers, or cookies, except for the ones specified, in requests that CloudFront sends to the origin.
CloudFront adds a new managed origin request policy to forward all viewer headers except the Host header (February 22, 2023): Use CloudFront's new managed origin request policy to include all headers from the viewer request, except for the Host header, in requests that CloudFront sends to the origin.
Updated restrictions on Lambda@Edge (February 16, 2023): Lambda@Edge supports Lambda runtime management configurations set to Auto.
Updated the IAM guidance for CloudFront (February 15, 2023): Updated guide to align with the IAM best practices. For more information, see Security best practices in IAM.
Enhanced security with origin access control (February 9, 2023): You can now secure MediaStore origins by permitting access to only the designated CloudFront distributions.
New headers for determining viewer's header structure (January 13, 2023): You can now add header order and header count to help identify the viewer based on the headers that it sends.
Lambda@Edge supports newer runtime version (January 12, 2023): Lambda@Edge now supports Lambda functions with the Node.js 18 runtime.
Remove response headers using a response headers policy (January 3, 2023): You can now use a CloudFront response headers policy to remove headers that CloudFront received in the response from the origin. The specified headers are not included in the response that CloudFront sends to viewers.
Continuous deployment for safely testing configuration changes (November 18, 2022): You can now deploy changes to your CDN configuration by testing with a subset of production traffic.
Release of CloudFront-Viewer-JA3-Fingerprint header (November 16, 2022): You can now use the JA3 fingerprint to help determine whether the request comes from a known client.
Added CORS headers wildcard options (November 11, 2022): You can now use various wildcard configurations in some CORS access-control headers.
Additional metrics for CloudFront distributions (October 3, 2022): Support for MonitoringSubscription in the CloudFront API and AWS CloudFormation.
Enhanced security with origin access control (August 24, 2022): You can now secure Amazon S3 origins by permitting access to only the designated CloudFront distributions.
HTTP/3 support for CloudFront distributions (August 15, 2022): You can now choose HTTP/3 for your CloudFront distribution.
Add handshake details to CloudFront-Viewer-TLS header (June 27, 2022): You can now view information about the SSL/TLS handshake used.
New metric in Server-Timing header (June 13, 2022): Added the new cdn-downstream-fbl metric to Server-Timing headers.
New header to get information about TLS version and cipher (May 23, 2022): You can now use the CloudFront-Viewer-TLS header to get information about the version of TLS (or SSL) and the cipher that was used for the connection between the viewer and CloudFront.
New FunctionThrottles metric for CloudFront Functions (May 4, 2022): With Amazon CloudWatch, you can now monitor the number of times that a CloudFront Function was throttled in a given time period.
CloudFront supports Lambda function URLs (April 6, 2022): If you build a serverless web application by using Lambda functions with function URLs, you can now add CloudFront for an array of benefits.
Server-Timing header in HTTP responses (March 30, 2022): You can now enable the Server-Timing header in HTTP responses sent from CloudFront to view metrics that can help you gain insights about the behavior and performance of CloudFront.
Use AWS-managed prefix list to limit inbound traffic (February 7, 2022): You can now limit the inbound HTTP and HTTPS traffic to your origins from only the IP addresses that belong to CloudFront's origin-facing servers.
New feature (November 2, 2021): CloudFront adds support for response headers policies, which allow you to specify the HTTP headers that CloudFront adds to HTTP responses that it sends to viewers (web browsers or other clients). You can specify the desired headers (and their values) without making any changes to the origin or writing any code. For more information, see Adding or removing HTTP headers in CloudFront responses. |
AmazonCloudFront_DevGuide-391 | AmazonCloudFront_DevGuide.pdf | 391 | New CloudFront-Viewer-Address request header (October 25, 2021): CloudFront adds support for a new header, CloudFront-Viewer-Address, that contains the IP address of the viewer that sent the HTTP request to CloudFront. For more information, see Adding CloudFront request headers.
Lambda@Edge supports new runtime version (September 22, 2021): Lambda@Edge now supports Lambda functions with the Python 3.9 runtime. For more information, see Supported runtimes.
AWS managed policy update (September 8, 2021): CloudFront updated the CloudFrontReadOnlyAccess policy. For more information, see CloudFront updates to AWS managed policies.
New feature (July 14, 2021): CloudFront now supports ECDSA certificates for viewer-facing HTTPS connections. For more information, see Supported protocols and ciphers between viewers and CloudFront and Requirements for using SSL/TLS certificates with CloudFront.
New feature (July 7, 2021): CloudFront now supports more ways to move an alternate domain name from one distribution to another, without contacting Support. For more information, see Move an alternate domain name to a different distribution.
New security policy (June 23, 2021): CloudFront now supports a new security policy, TLSv1.2_2021, with a smaller set of supported ciphers. For more information, see Supported protocols and ciphers between viewers and CloudFront.
New feature (May 3, 2021): Amazon CloudFront now supports CloudFront Functions, a native feature of CloudFront that enables you to write lightweight functions in JavaScript for high-scale, latency-sensitive CDN customizations. For more information, see Customizing at the edge with CloudFront Functions.
Lambda@Edge supports newer runtime versions (April 29, 2021): Lambda@Edge now supports Lambda functions with the Node.js 14 runtime. For more information, see Supported runtimes.
Remove documentation for RTMP distributions (February 10, 2021): Amazon CloudFront deprecated real-time messaging protocol (RTMP) distributions on December 31, 2020. Documentation for RTMP distributions is now removed from the Amazon CloudFront Developer Guide.
New pricing option (February 5, 2021): Amazon CloudFront introduces the CloudFront security savings bundle, a simple way to save up to 30% on the CloudFront charges on your AWS bill. For more information, see the Savings Bundle FAQs.
New tutorial (December 18, 2020): The Amazon CloudFront Developer Guide now includes a tutorial for using Amazon CloudFront to restrict access to an Application Load Balancer in Elastic Load Balancing. For more information, see Restricting access to Application Load Balancers.
New option for public key management (October 22, 2020): CloudFront now supports public key management for signed URLs and signed cookies through the CloudFront console and API, without requiring access to the AWS account root user. For more information, see Specifying the signers that can create signed URLs and signed cookies.
New feature – Origin Shield (October 20, 2020): CloudFront now supports CloudFront Origin Shield, an additional layer in the CloudFront caching infrastructure that helps to minimize your origin's load, improve its availability, and reduce its operating costs. For more information, see Using Amazon CloudFront Origin Shield.
New compression format (September 14, 2020): CloudFront now supports the Brotli compression format when you configure CloudFront to compress objects at CloudFront edge locations. You can also configure CloudFront to cache Brotli objects using a normalized Accept-Encoding header. For more information, see Serving compressed files and Compression support.
New TLS protocol (September 3, 2020): CloudFront now supports the TLS 1.3 protocol for HTTPS connections between viewers and CloudFront distributions. TLS 1.3 is enabled by default in all CloudFront security policies. For more information, see Supported protocols and ciphers between viewers and CloudFront.
New real-time logs (August 31, 2020): CloudFront now supports configurable real-time logs. With real-time logs, you can get information about requests made to a distribution in real time. You can use real-time logs to monitor, analyze, and take action based on content delivery performance. For more information, see Real-time logs.
API support for additional metrics (August 28, 2020): CloudFront now supports enabling eight additional real-time metrics with the CloudFront API. For more information, see Turning on additional metrics.
New CloudFront HTTP headers (July 23, 2020): CloudFront added additional HTTP headers for determining information about the viewer such as device type, geographic location, and more. For more information, see Adding CloudFront request headers. |
AmazonCloudFront_DevGuide-392 | AmazonCloudFront_DevGuide.pdf | 392 | New feature (July 22, 2020): CloudFront now supports cache policies and origin request policies, which give you more granular control over the cache key and origin requests for your CloudFront distributions. For more information, see Control the cache key and Control origin requests.
New security policy (July 8, 2020): CloudFront now supports a new security policy, TLSv1.2_2019, with a smaller set of supported ciphers. For more information, see Supported protocols and ciphers between viewers and CloudFront.
New settings to control origin timeouts and attempts (June 5, 2020): CloudFront added new settings that control origin timeouts and attempts. For more information, see Controlling origin timeouts and attempts.
New documentation for getting started with CloudFront by creating a secure static website (June 2, 2020): Get started with CloudFront by creating a secure static website using Amazon S3, CloudFront, Lambda@Edge, and more, all deployed with AWS CloudFormation. For more information, see Getting started with a secure static website.
Lambda@Edge supports newer runtime versions (February 27, 2020): Lambda@Edge now supports Lambda functions with the Node.js 12 and Python 3.8 runtimes. For more information, see Supported runtimes.
New real-time metrics in CloudWatch (December 19, 2019): Amazon CloudFront now offers eight additional real-time metrics in Amazon CloudWatch. For more information, see Turning on additional CloudFront distribution metrics.
New fields in access logs (December 12, 2019): CloudFront adds seven new fields to access logs. For more information, see Standard log file fields.
AWS WordPress plugin (October 30, 2019): You can use the AWS WordPress plugin to provide visitors to your WordPress website an accelerated viewing experience using CloudFront. (Update: as of September 30, 2022, the AWS for WordPress plugin is deprecated.)
Tag-based and resource-level IAM permissions policies (August 8, 2019): CloudFront now supports two additional ways of specifying IAM permission policies: tag-based and resource-level policy permissions. For more information, see Managing Access to Resources.
Support for Python programming language (August 1, 2019): You can now use the Python programming language to develop functions in Lambda@Edge, in addition to Node.js. For example functions that cover a variety of scenarios, see Lambda@Edge Example Functions.
Updated monitoring graphs (June 20, 2019): Content updates to describe new ways for you to monitor Lambda functions associated with your CloudFront distributions directly from the CloudFront console to more easily track and debug errors. For more information, see Monitoring CloudFront.
Consolidated security content (May 24, 2019): A new Security chapter consolidates information about CloudFront security features and their implementation, covering data protection, IAM, logging, compliance, and more. For more information, see Security.
Domain validation is now required (April 9, 2019): CloudFront now requires that you use an SSL certificate to verify that you have permission to use an alternate domain name with a distribution. For more information, see Using Alternate Domain Names and HTTPS.
Updated PDF filename (January 7, 2019): The new filename for the Amazon CloudFront Developer Guide is AmazonCloudFront_DevGuide. The previous name was cf-dg.
New features (November 20, 2018): CloudFront now supports WebSocket, a TCP-based protocol that is useful when you need long-lived connections between clients and servers. You can also now set up CloudFront with origin failover for scenarios that require high availability. For more information, see Using WebSocket with CloudFront Distributions and Optimizing High Availability with CloudFront Origin Failover.
New feature (October 8, 2018): CloudFront now supports detailed error logging for HTTP requests that run Lambda functions. You can store the logs in CloudWatch and use them to help troubleshoot HTTP 5xx errors when your function returns an invalid response. For more information, see CloudWatch Metrics and CloudWatch Logs for Lambda Functions.
New feature (August 14, 2018): You can now opt to have Lambda@Edge expose the body in a request for writable HTTP methods (POST, PUT, DELETE, and so on), so that you can access it in your Lambda function. You can choose read-only access, or you can specify that you'll replace the body. For more information, see Accessing the Request Body by Choosing the Include Body Option. |
AmazonCloudFront_DevGuide-393 | AmazonCloudFront_DevGuide.pdf | 393 | New feature (July 25, 2018): CloudFront now supports serving content compressed by using brotli or other compression algorithms, in addition to or instead of gzip. For more information, see Serving Compressed Files.
Reorganization (June 28, 2018): The Amazon CloudFront Developer Guide has been reorganized to simplify finding related content, and to improve scannability and navigation.
New Feature (March 20, 2018): Lambda@Edge now enables you to further customize the delivery of content stored in an Amazon S3 bucket, by allowing you to access additional headers, including custom headers, within origin-facing events. For more information, see these examples showing personalization of content based on viewer location and viewer device type.
New Feature (March 15, 2018): You can now use Amazon CloudFront to negotiate HTTPS connections to origins using the Elliptic Curve Digital Signature Algorithm (ECDSA). ECDSA uses smaller keys that are faster, yet just as secure as the older RSA algorithm. For more information, see Supported SSL/TLS Protocols and Ciphers for Communication Between CloudFront and Your Origin and About RSA and ECDSA Ciphers.
New Feature (December 21, 2017): Lambda@Edge enables you to customize error responses from your origin, by allowing you to execute Lambda functions in response to HTTP errors that Amazon CloudFront receives from your origin. For more information, see these examples showing redirects to another location and response generation with a 200 status code (OK).
New Feature (December 14, 2017): A new CloudFront capability, field-level encryption, helps you to further enhance the security of sensitive data, like credit card numbers or personally identifiable information (PII) like social security numbers. For more information, see Using field-level encryption to help protect sensitive data.
Doc history archived (December 1, 2017): Older doc history was archived. |
AmazonKeyspaces-001 | AmazonKeyspaces.pdf | 1 | Amazon Keyspaces (for Apache Cassandra): Developer Guide

Copyright © 2025 Amazon Web Services, Inc. and/or its affiliates. All rights reserved.

Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon.

Table of Contents

What is Amazon Keyspaces?
  How it works
    High-level architecture
    Cassandra data model
    Accessing Amazon Keyspaces
  Use cases
  What is CQL?
Compare Amazon Keyspaces with Cassandra
  Functional differences with Apache Cassandra
    Apache Cassandra APIs, operations, and data types
    Asynchronous creation and deletion of keyspaces and tables
    Authentication and authorization
    Batch
    Cluster configuration
    Connections
    IN keyword
    FROZEN collections
    Lightweight transactions
    Load balancing
    Pagination
    Partitioners
    Prepared statements
    Range delete
    System tables
    Timestamps
    User-defined types (UDTs)
  Supported Cassandra APIs, operations, functions, and data types
    Cassandra API support
    Cassandra control plane API support
    Cassandra data plane API support
    Cassandra function support
    Cassandra data type support
  Supported Cassandra consistency levels
    Write consistency levels
    Read consistency levels
    Unsupported consistency levels
Migrating to Amazon Keyspaces
  Migrating from Cassandra
    Compatibility
    Estimate pricing
    Migration strategy
    Online migration
    Offline migration
    Hybrid migration
  Migration tools
    Loading data using cqlsh
    Loading data using DSBulk
Accessing Amazon Keyspaces
  Setting up AWS Identity and Access Management
    Sign up for an AWS account
    Create a user with administrative access
  Setting up Amazon Keyspaces
  Using the console
  Using AWS CloudShell
    Obtaining IAM permissions for AWS CloudShell
    Interacting with Amazon Keyspaces using AWS CloudShell
  Create programmatic access credentials
    Create service-specific credentials
    Create IAM credentials for AWS authentication
  Service endpoints
    Ports and protocols
    Global endpoints
    AWS GovCloud (US) Region FIPS endpoints
    China Regions endpoints
  Using cqlsh
    Using the cqlsh-expansion
    How to manually configure cqlsh connections for TLS
  Using the AWS CLI
    Downloading and Configuring the AWS CLI
    Using the AWS CLI with Amazon Keyspaces
  Using the API
  Using a Cassandra client driver
    Using a Cassandra Java client driver
    Using a Cassandra Python client driver
    Using a Cassandra Node.js client driver
    Using a Cassandra .NET Core client driver
    Using a Cassandra Go client driver
    Using a Cassandra Perl client driver
  Configure cross-account access
    Configure cross-account access in a shared VPC
    Configure cross-account access without a shared VPC
Getting started
  Prerequisites
  Create a keyspace
  Check keyspace creation status
  Create a table
  Check table creation status
  CRUD operations
    Create
    Read
    Update
    Delete
  Delete a table
  Delete a keyspace
Tutorials and solutions
  Connecting with VPC endpoints
    Prerequisites
    Step 1: Launch an Amazon EC2 instance
    Step 2: Configure your Amazon EC2 instance
    Step 3: Create a VPC endpoint for Amazon Keyspaces
    Step 4: Configure permissions for the VPC endpoint connection
    Step 5: Configure monitoring
    Step 6: (Optional) Best practices for connections
    Step 7: (Optional) Clean up
  Integrating with Apache Spark
    Prerequisites
    Step 1: Configure Amazon Keyspaces
    Step 2: Configure the Apache Cassandra Spark Connector
    Step 3: Create the app config file
    Step 4: Prepare the source data and the target table
    Step 5: Write and read Amazon Keyspaces data |
AmazonKeyspaces-002 | AmazonKeyspaces.pdf | 2 |     Troubleshooting
  Connecting from Amazon EKS
    Prerequisites
    Step 1: Configure the Amazon EKS cluster
    Step 2: Configure the application
    Step 3: Create application image
    Step 4: Deploy the application to Amazon EKS
    Step 5: (Optional) Cleanup
  Exporting data to Amazon S3
    Prerequisites
    Step 1: Create the Amazon S3 bucket, download tools, and configure the environment
    Step 2: Configure the AWS Glue job
    Step 3: Run the export AWS Glue job from the AWS CLI
    Step 4: (Optional) Schedule the export job
    Step 5: (Optional) Cleanup
Managing serverless resources
  Estimate row size
    Estimate the encoded size of columns
    Estimate the encoded size of data values based on data type
    Consider the impact of Amazon Keyspaces features on row size
    Choose the right formula to calculate the encoded size of a row
    Row size calculation example
  Estimate capacity consumption
    Estimate the capacity consumption of range queries
    Estimate the read capacity consumption of limit queries
    Estimate the read capacity consumption of table scans
    Estimate capacity consumption of LWT
    Estimate capacity consumption of static columns
    Estimate capacity for a multi-Region table
    Estimate capacity consumption with CloudWatch
  Configure read/write capacity modes
    Configure on-demand capacity mode
    Configure provisioned throughput capacity mode
    View the capacity mode of a table
    Change capacity mode
    Pre-warm a new table for on-demand capacity
    Pre-warm an existing table for on-demand capacity
  Manage throughput capacity with auto scaling
    How Amazon Keyspaces automatic scaling works
    How auto scaling works for multi-Region tables
    Usage notes
    Configure and update auto scaling policies
  Use burst capacity
Working with Amazon Keyspaces features
  System keyspaces
    system
    system_schema
    system_schema_mcs
    system_multiregion_info
  User-defined types (UDTs)
    Configure permissions
    Create a UDT
    View UDTs
    Delete a UDT
  Working with CQL queries
    Use IN SELECT
    Order results
    Paginate results
  Working with partitioners
    Change the partitioner
  Client-side timestamps
    Integration with AWS services
    Create table with client-side timestamps
    Configure client-side timestamps
    Use client-side timestamps in queries
  Multi-Region replication
    Benefits
    Capacity modes and pricing
    How it works
    Usage notes
    Configure multi-Region replication
  Backup and restore with point-in-time recovery
    How it works
    Use point-in-time recovery
  Expire data with Time to Live
    Integration with AWS services
    Create table with default TTL value
    Update table default TTL value
    Create table with custom TTL
    Update table custom TTL
    Use INSERT to set custom TTL for new rows
    Use UPDATE to set custom TTL for rows and columns
  Working with AWS SDKs
  Working with tags
    Tagging restrictions
    Tag keyspaces and tables
    Create cost allocation reports
  Create AWS CloudFormation resources
    Amazon Keyspaces and AWS CloudFormation templates
    Learn more about AWS CloudFormation
  NoSQL Workbench
    Download
    Getting started
    Visualize a data model
    Create a data model
    Edit a data model
    Commit a data model
    Sample data models
    Release history
Code examples
  Basics
    Hello Amazon Keyspaces
    Learn the basics
    Actions
Libraries and tools
  Libraries and examples |
    Amazon Keyspaces (for Apache Cassandra) developer toolkit
    Amazon Keyspaces (for Apache Cassandra) examples
    AWS Signature Version 4 (SigV4) authentication plugins
  Highlighted sample and developer tool repos
    Amazon Keyspaces Protocol Buffers
    AWS CloudFormation template to create Amazon CloudWatch dashboard for Amazon Keyspaces (for Apache Cassandra) metrics
    Using Amazon Keyspaces (for Apache Cassandra) with AWS Lambda
    Using Amazon Keyspaces (for Apache Cassandra) with Spring
    Using Amazon Keyspaces (for Apache Cassandra) with Scala
    Using Amazon Keyspaces (for Apache Cassandra) with AWS Glue
    Amazon Keyspaces (for Apache Cassandra) Cassandra query language (CQL) to AWS CloudFormation converter
    Amazon Keyspaces (for Apache Cassandra) helpers for Apache Cassandra driver for Java
    Amazon Keyspaces (for Apache Cassandra) snappy compression demo
    Amazon Keyspaces (for Apache Cassandra) and Amazon S3 codec demo
Best practices
  NoSQL design
    NoSQL vs. RDBMS
    Two key concepts
    General approach
  Connections
    How they work
    How to configure connections
    How to configure retry policies
    VPC endpoint connections
    How to monitor connections
    How to handle connection errors
  Data modeling
    Partition key design
  Cost optimization
    Evaluate your costs at the table level
    Evaluate your table's capacity mode
    Evaluate your table's Application Auto Scaling settings
    Identify your unused resources
    Evaluate your table usage patterns
    Evaluate your provisioned capacity for right-sized provisioning
Troubleshooting
  General errors
  Connection errors
    Errors connecting to an Amazon Keyspaces endpoint
  Capacity management errors
    Serverless capacity errors
  Data definition language errors
Monitoring Amazon Keyspaces
  Monitoring with CloudWatch
    Using metrics
    Metrics and dimensions
    Creating alarms
  Logging with CloudTrail
    Configuring log file entries in CloudTrail
    DDL information in CloudTrail
    DML information in CloudTrail
    Understanding log file entries
Security
  Data protection
    Encryption at rest
    Encryption in transit
    Internetwork traffic privacy
  AWS Identity and Access Management
    Audience
    Authenticating with identities
    Managing access using policies
    How Amazon Keyspaces works with IAM
    Identity-based policy examples
    AWS managed policies
    Troubleshooting
    Using service-linked roles
  Compliance validation
  Resilience
  Infrastructure security
    Using interface VPC endpoints
  Configuration and vulnerability analysis for Amazon Keyspaces
  Security best practices
    Preventative security best practices
    Detective security best practices
CQL language reference
  Language elements
    Identifiers
    Constants
    Terms
    Data types
    JSON encoding of Amazon Keyspaces data types
  DDL statements
    Keyspaces
    Tables
    Types
  DML statements
    SELECT
    INSERT
    UPDATE
    DELETE
  Built-in functions
    Scalar functions
Quotas
  Amazon Keyspaces service quotas
  Increasing or decreasing throughput (for provisioned tables)
    Increasing provisioned throughput
    Decreasing provisioned throughput
  Amazon Keyspaces encryption at rest
  Quotas and default values for user-defined types (UDTs) in Amazon Keyspaces
    Amazon Keyspaces UDT quotas and default values
Document history

What is Amazon Keyspaces (for Apache Cassandra)?

Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra–compatible database service. With Amazon Keyspaces, you don’t have to provision, patch, or manage servers, and you don’t have to install, maintain, or operate software. Amazon Keyspaces is serverless, so you pay for only the resources that you use, and the service automatically scales
tables up and down in response to application traffic. You can build applications that serve thousands of requests per second with virtually unlimited throughput and storage.

Note
Apache Cassandra is an open-source, wide-column datastore that is designed to handle large amounts of data. For more information, see Apache Cassandra.

Amazon Keyspaces makes it easy to migrate, run, and scale Cassandra workloads in the AWS Cloud. With just a few clicks on the AWS Management Console or a few lines of code, you can create keyspaces and tables in Amazon Keyspaces, without deploying any infrastructure or installing software. With Amazon Keyspaces, you can run your existing Cassandra workloads on AWS using the same Cassandra application code and developer tools that you use today.

For a list of available AWS Regions and endpoints, see Service endpoints for Amazon Keyspaces.

We recommend that you start by reading the following sections:

Topics
• Amazon Keyspaces: How it works
• Amazon Keyspaces use cases
• What is Cassandra Query Language (CQL)?

Amazon Keyspaces: How it works

Amazon Keyspaces removes the administrative overhead of managing Cassandra. To understand why, it's helpful to begin with Cassandra architecture and then compare it to Amazon Keyspaces.

Topics
• High-level architecture: Apache Cassandra vs. Amazon Keyspaces
• Cassandra data model
• Accessing Amazon Keyspaces from an application

High-level architecture: Apache Cassandra vs. Amazon Keyspaces

Traditional Apache Cassandra is deployed in a cluster made up of one or more nodes. You are responsible for managing each node and adding and removing nodes as your cluster scales. A client program accesses Cassandra by connecting to one of the nodes and issuing Cassandra Query Language (CQL) statements. CQL is similar to SQL, the popular language used in relational databases. Even though Cassandra is not a relational database, CQL provides a familiar interface for querying and manipulating data in Cassandra.

The following diagram shows a simple Apache Cassandra cluster, consisting of four nodes. A production Cassandra deployment might consist of hundreds of nodes, running on hundreds of physical computers across one or more physical data centers.
This can cause an operational burden for application developers who need to provision, patch, and manage servers in addition to installing, maintaining, and operating software. With Amazon Keyspaces (for Apache Cassandra), you don’t need to provision, patch, or manage servers, so you can focus on building better applications.

Amazon Keyspaces offers two throughput capacity modes for reads and writes: on-demand and provisioned. You can choose your table’s throughput capacity mode to optimize the price of reads and writes based on the predictability and variability of your workload.

With on-demand mode, you pay for only the reads and writes that your application actually performs. You do not need to specify your table’s throughput capacity in advance. Amazon Keyspaces accommodates your application traffic almost instantly as it ramps up or down, making it a good option for applications with unpredictable traffic.

Provisioned capacity mode helps you optimize the price of throughput if you have predictable application traffic and can forecast your table’s capacity requirements in advance. With provisioned capacity mode, you specify the number of reads and writes per second that you expect your application to perform. You can increase and decrease the provisioned capacity for your table automatically by enabling automatic scaling.

You can change the capacity mode of your table once per day as you learn more about your workload’s traffic patterns, or if you expect to have a large burst in traffic, such as from a major event that you anticipate will drive a lot of table traffic. For more information about read and write capacity provisioning, see the section called “Configure read/write capacity modes”.

Amazon Keyspaces (for Apache Cassandra) stores three copies of your data in multiple Availability Zones for durability and high availability. In addition, you benefit from a data center and network architecture that is built to meet the requirements of the most security-sensitive organizations. Encryption at rest is automatically enabled when you create a new Amazon Keyspaces table, and all client connections require Transport Layer Security (TLS). Additional AWS security features include
monitoring, AWS Identity and Access Management, and virtual private cloud (VPC) endpoints. For an overview of all available security features, see Security.

The following diagram shows the architecture of Amazon Keyspaces.

A client program accesses Amazon Keyspaces by connecting to a predetermined endpoint (hostname and port number) and issuing CQL statements. For a list of available endpoints, see the section called “Service endpoints”.

Cassandra data model

How you model your data for your business case is critical to achieving optimal performance from Amazon Keyspaces. A poor data model can significantly degrade performance. Even though CQL looks similar to SQL, the backends of Cassandra and relational databases are very different and must be approached differently. The following are some of the more significant issues to consider:

Storage
You can visualize your Cassandra data in tables, with each row representing a record and each column a field within that record.

Table design: Query first
There are no JOINs in CQL. Therefore, you should design your tables around the shape of your data and how you need to access it for your business use cases. This might result in denormalization with duplicated data. You should design each of your tables specifically for a particular access pattern.

Partitions
Your data is stored in partitions on disk. The number of partitions your data is stored in, and how it is distributed across the partitions, is determined by your partition key. How you define your partition key can have a significant impact on the performance of your queries. For best practices, see the section called “Partition key design”.

Primary key
In Cassandra, data is stored as a key-value pair. Every Cassandra table must have a primary key, which is the unique key to each row in the table. The primary key is the composite of a required partition key and optional clustering columns. The data that comprises the primary key must be unique across all records in a table.

• Partition key – The partition key portion of the primary key is required and determines which partition of your cluster the data is stored in. The partition key can be a single column, or it can be a compound value composed of two or more columns. You would use a compound partition key if a single-column partition key would result in a single partition, or a very few partitions, holding most of the data and thus bearing the majority of the disk I/O operations.

• Clustering column – The optional clustering column portion of your primary key determines how the data is clustered and sorted within each partition.
If you include clustering columns in your primary key, you can use one or more columns. When there are multiple clustering columns, the sort order is determined by the order in which the columns are listed, from left to right.

For more information about NoSQL design and Amazon Keyspaces, see the section called “NoSQL design”. For more information about Amazon Keyspaces and data modeling, see the section called “Data modeling”.

Accessing Amazon Keyspaces from an application

Amazon Keyspaces (for Apache Cassandra) implements the Apache Cassandra Query Language (CQL) API, so you can use CQL and Cassandra drivers that you already use. Updating your application is as easy as updating your Cassandra driver or cqlsh configuration to point to the Amazon Keyspaces service endpoint. For more information about the required credentials, see the section called “Create IAM credentials for AWS authentication”.

Note
To help you get started, you can find end-to-end code samples of connecting to Amazon Keyspaces by using various Cassandra client drivers in the Amazon Keyspaces code example repository on GitHub.

Consider the following Python program, which connects to a Cassandra cluster and queries a table.

from cassandra.cluster import Cluster

#TLS/SSL configuration goes here
ksp = 'MyKeyspace'
tbl = 'WeatherData'

cluster = Cluster(['NNN.NNN.NNN.NNN'], port=NNNN)
session = cluster.connect(ksp)
session.execute('USE ' + ksp)

rows = session.execute('SELECT * FROM ' + tbl)
for row in rows:
    print(row)
To run the same program against Amazon Keyspaces, you need to:

• Add the cluster endpoint and port: For example, the host can be replaced with a service endpoint, such as cassandra.us-east-2.amazonaws.com, and the port number with 9142.

• Add the TLS/SSL configuration: For more information on adding the TLS/SSL configuration to connect to Amazon Keyspaces by using a Cassandra Python client driver, see Using a Cassandra Python client driver to access Amazon Keyspaces programmatically. A minimal sketch that combines both changes appears at the end of this topic.

Amazon Keyspaces use cases

The following are just some of the ways in which you can use Amazon Keyspaces:

• Build applications that require low latency – Process data at high speeds for applications that require single-digit-millisecond latency, such as industrial equipment maintenance, trade monitoring, fleet management, and route optimization.

• Build applications using open-source technologies – Build applications on AWS using open-source Cassandra APIs and drivers that are available for a wide range of programming languages, such as Java, Python, Ruby, Microsoft .NET, Node.js, PHP, C++, Perl, and Go. For code examples, see Libraries and tools.

• Move your Cassandra workloads to the cloud – Managing Cassandra tables yourself is time-consuming and expensive. With Amazon Keyspaces, you can set up, secure, and scale Cassandra tables in the AWS Cloud without managing infrastructure. For more information, see Managing serverless resources.

What is Cassandra Query Language (CQL)?

Cassandra Query Language (CQL) is the primary language for communicating with Apache Cassandra. Amazon Keyspaces (for Apache Cassandra) is compatible with the CQL 3.x API (backward-compatible with version 2.x).

In CQL, data is stored in tables, columns, and rows. In this sense, CQL is similar to Structured Query Language (SQL). These are the key concepts in CQL:

• CQL elements – The fundamental elements of CQL are identifiers, constants, terms, and data types.
• Data Definition Language (DDL) – DDL statements are used to manage data structures like keyspaces and tables, which are AWS resources in Amazon Keyspaces. DDL statements are control plane operations in AWS.
• Data Manipulation Language (DML) – DML statements are used to manage data within tables. DML statements are used for selecting, inserting, updating, and deleting data. These are data plane operations in AWS.
• Built-in functions – Amazon Keyspaces supports a variety of built-in scalar functions that you can use in CQL statements.

For more information about CQL, see CQL language reference for Amazon Keyspaces (for Apache Cassandra). For functional differences with Apache Cassandra, see the section called “Functional differences with Apache Cassandra”.
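Returning to the two changes listed earlier in this topic (the service endpoint and the TLS configuration), the following is a minimal sketch of the Python program adapted for Amazon Keyspaces. The certificate path, credentials, keyspace, and table names are placeholders; the sketch assumes service-specific credentials and the Starfield root certificate referenced in the TLS configuration topic.

from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
import ssl

# TLS is required by Amazon Keyspaces. The certificate path is a
# placeholder for the root certificate that you download separately.
ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ssl_context.load_verify_locations('path_to/sf-class2-root.crt')

# Placeholder service-specific credentials.
auth_provider = PlainTextAuthProvider(username='SERVICE_USER',
                                      password='SERVICE_PASSWORD')

cluster = Cluster(['cassandra.us-east-2.amazonaws.com'], port=9142,
                  ssl_context=ssl_context, auth_provider=auth_provider)
session = cluster.connect()

rows = session.execute('SELECT * FROM "MyKeyspace"."WeatherData"')
for row in rows:
    print(row)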
To run CQL queries, you can do one of the following:

• Use the CQL editor in the AWS Management Console.
• Use AWS CloudShell and the cqlsh-expansion.
• Use a cqlsh client.
• Use an Apache 2.0 licensed Cassandra client driver.

In addition to CQL, you can perform Data Definition Language (DDL) operations in Amazon Keyspaces using the AWS SDKs and the AWS Command Line Interface. For more information about using these methods to access Amazon Keyspaces, see Accessing Amazon Keyspaces (for Apache Cassandra).

How does Amazon Keyspaces (for Apache Cassandra) compare to Apache Cassandra?

To establish a connection to Amazon Keyspaces, you can use either a public AWS service endpoint or a private endpoint using interface VPC endpoints (AWS PrivateLink) in the Amazon Virtual Private Cloud. Depending on the endpoint used, Amazon Keyspaces can appear to the client in one of the following ways.

AWS service endpoint connection
This is a connection established over any public endpoint. In this case, Amazon Keyspaces appears as a nine-node Apache Cassandra 3.11.2 cluster to the client.

Interface VPC endpoint connection
This is a private connection established using an interface VPC endpoint. In this case, Amazon Keyspaces appears as a three-node Apache Cassandra 3.11.2 cluster to the client.

Independent of the connection type and the number of nodes that are visible to the client, Amazon Keyspaces provides virtually limitless throughput and storage. To do this, Amazon Keyspaces maps the nodes to load balancers that route your queries to one of the many underlying storage partitions. For more
information about connections, see the section called “How they work”.

Amazon Keyspaces stores data in partitions. A partition is an allocation of storage for a table, backed by solid state drives (SSDs). Amazon Keyspaces automatically replicates your data across multiple Availability Zones within an AWS Region for durability and high availability. As your throughput or storage needs grow, Amazon Keyspaces handles the partition management for you and automatically provisions the required additional partitions.

Amazon Keyspaces supports all commonly used Cassandra data-plane operations, such as creating keyspaces and tables, reading data, and writing data. Amazon Keyspaces is serverless, so you don’t have to provision, patch, or manage servers. You also don’t have to install, maintain, or operate software. As a result, in Amazon Keyspaces you don't need to use the Cassandra control plane API operations to manage cluster and node settings. Amazon Keyspaces automatically configures settings such as replication factor and consistency level to provide you with high availability, durability, and single-digit-millisecond performance.

For even more resiliency and low-latency local reads, Amazon Keyspaces offers multi-Region replication.

Topics
• Functional differences: Amazon Keyspaces vs. Apache Cassandra
• Supported Cassandra APIs, operations, functions, and data types
• Supported Apache Cassandra read and write consistency levels and associated costs

Functional differences: Amazon Keyspaces vs. Apache Cassandra

The following are the functional differences between Amazon Keyspaces and Apache Cassandra.

Topics
• Apache Cassandra APIs, operations, and data types
• Asynchronous creation and deletion of keyspaces and tables
• Authentication and authorization
• Batch
• Cluster configuration
• Connections
• IN keyword
• FROZEN collections
• Lightweight transactions
• Load balancing
• Pagination
• Partitioners
• Prepared statements
• Range delete
• System tables
• Timestamps
• User-defined types (UDTs)

Apache Cassandra APIs, operations, and data types

Amazon Keyspaces supports all commonly used Cassandra data-plane operations, such as creating keyspaces and tables, reading data, and writing data. To see what is currently supported, see Supported Cassandra APIs, operations, functions, and data types.

Asynchronous creation and deletion of keyspaces and tables

Amazon Keyspaces performs data definition language (DDL) operations, such as creating and deleting keyspaces, tables, and types, asynchronously. To learn how to monitor the creation status of resources, see the section called “Check keyspace creation status” and the section called “Check table creation status”.
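Because these DDL operations are asynchronous, an application that creates a table and writes to it immediately can fail. The following is a minimal sketch of polling for table creation status, assuming a connected session as in the earlier Python example and the system_schema_mcs keyspace that Amazon Keyspaces uses to expose resource status; the keyspace and table names are illustrative.

import time

def wait_for_table_active(session, keyspace, table, delay_seconds=5):
    # Poll the status column until the new table becomes ACTIVE.
    query = ("SELECT status FROM system_schema_mcs.tables "
             "WHERE keyspace_name = %s AND table_name = %s")
    while True:
        rows = list(session.execute(query, (keyspace, table)))
        if rows and rows[0].status == 'ACTIVE':
            return
        time.sleep(delay_seconds)

# Example: wait_for_table_active(session, 'my_keyspace', 'my_table')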
For a list of DDL statements in the CQL language reference, see the section called “DDL statements”.

Authentication and authorization

Amazon Keyspaces (for Apache Cassandra) uses AWS Identity and Access Management (IAM) for user authentication and authorization, and supports authorization policies equivalent to those of Apache Cassandra. As such, Amazon Keyspaces does not support Apache Cassandra's security configuration commands.

Batch

Amazon Keyspaces supports unlogged batch commands with up to 30 commands in the batch. Only unconditional INSERT, UPDATE, or DELETE commands are permitted in a batch. Logged batches are not supported.

Cluster configuration

Amazon Keyspaces is serverless, so there are no clusters, hosts, or Java virtual machines (JVMs) to configure. Cassandra’s settings for compaction, compression, caching, garbage collection, and bloom filtering are not applicable to Amazon Keyspaces and are ignored if specified.

Connections

You can use existing Cassandra drivers to communicate with Amazon Keyspaces, but you need to configure the drivers differently. Amazon Keyspaces supports up to 3,000 CQL queries per TCP connection per second, but there is no limit on the number of connections a driver can establish.

Most open-source Cassandra drivers establish a connection pool to Cassandra and load balance queries over that pool of connections. Amazon Keyspaces exposes 9 peer IP addresses to drivers, and the default behavior of most drivers is to establish a single connection to each peer IP address. Therefore, the maximum CQL query throughput of a driver using the default settings is 27,000 CQL queries per second. To increase this number, we recommend increasing the number of connections per IP address that your driver maintains in its connection pool. For example, setting the maximum connections per IP address to 2 doubles the maximum throughput of your driver to 54,000
CQL queries per second.

As a best practice, we recommend configuring drivers to target 500 CQL queries per second per connection to allow for overhead and to improve distribution. In this scenario, planning for 18,000 CQL queries per second requires 36 connections. Configuring the driver for 4 connections across each of the 9 endpoints provides those 36 connections, each performing 500 requests per second. For more information about best practices for connections, see the section called “Connections”.

When connecting with VPC endpoints, there might be fewer endpoints available. This means that you have to increase the number of connections in the driver configuration. For more information about best practices for VPC connections, see the section called “VPC endpoint connections”.

IN keyword

Amazon Keyspaces supports the IN keyword in the SELECT statement. IN is not supported with UPDATE and DELETE. When using the IN keyword in the SELECT statement, the results of the query are returned in the order in which the keys are presented in the SELECT statement. In Cassandra, the results are ordered lexicographically.

When using ORDER BY, full re-ordering with disabled pagination is not supported, and results are ordered within a page. Slice queries are not supported with the IN keyword. TOKENS are not supported with the IN keyword.

Amazon Keyspaces processes queries with the IN keyword by creating subqueries. Each subquery counts as a query towards the 3,000 CQL queries per TCP connection per second limit. For more information, see the section called “Use IN SELECT”.

FROZEN collections

The FROZEN keyword in Cassandra serializes multiple components of a collection data type into a single immutable value that is treated like a BLOB. INSERT and UPDATE statements overwrite the entire collection.

Amazon Keyspaces supports up to 8 levels of nesting for frozen collections by default. For more information, see the section called “Amazon Keyspaces service quotas”.

Amazon Keyspaces doesn't support inequality comparisons that use the entire frozen collection in a conditional UPDATE or SELECT statement. The behavior for collections and frozen collections is the same in Amazon Keyspaces.

When you're using frozen collections with client-side timestamps, in the case where the timestamp of a write operation is the same as the timestamp of an existing column that isn't expired or tombstoned, Amazon Keyspaces doesn't perform comparisons. Instead, it lets the server determine the latest writer, and the latest writer wins.

For more information about frozen collections, see the section called “Collection types”.
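To make the overwrite behavior concrete, the following is a minimal sketch that stores a frozen map and then replaces it as a whole. It assumes a connected session as in the earlier Python example; the keyspace, table, and column names are illustrative.

# Create a table with a frozen collection column.
session.execute("""
    CREATE TABLE IF NOT EXISTS my_keyspace.devices (
        device_id text PRIMARY KEY,
        sensors frozen<map<text, int>>
    )
""")

# INSERT and UPDATE replace the frozen value as a whole; individual
# map entries cannot be updated in place.
session.execute(
    "INSERT INTO my_keyspace.devices (device_id, sensors) VALUES (%s, %s)",
    ('d1', {'temp': 20, 'humidity': 40}))
session.execute(
    "UPDATE my_keyspace.devices SET sensors = %s WHERE device_id = %s",
    ({'temp': 21}, 'd1'))
# The entire map was overwritten; the 'humidity' entry is now gone.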
Lightweight transactions

Amazon Keyspaces (for Apache Cassandra) fully supports compare-and-set functionality on INSERT, UPDATE, and DELETE commands, which are known as lightweight transactions (LWTs) in Apache Cassandra. As a serverless offering, Amazon Keyspaces (for Apache Cassandra) provides consistent performance at any scale, including for lightweight transactions. With Amazon Keyspaces, there is no performance penalty for using lightweight transactions.

Load balancing

The system.peers table entries correspond to Amazon Keyspaces load balancers. For best results, we recommend using a round robin load-balancing policy and tuning the number of connections per IP to suit your application's needs.

Pagination

Amazon Keyspaces paginates results based on the number of rows that it reads to process a request, not the number of rows returned in the result set. As a result, some pages might contain fewer rows than you specify in PAGE SIZE for filtered queries. In addition, Amazon Keyspaces paginates results automatically after reading 1 MB of data to provide customers with consistent, single-digit-millisecond read performance. For more information, see the section called “Paginate results”.

In tables with static columns, both Apache Cassandra and Amazon Keyspaces establish the partition's static column value at the start of each page in a multi-page query. When a table has large data rows, the Amazon Keyspaces pagination behavior makes it more likely that a range read operation returns more pages in Amazon Keyspaces than in Apache Cassandra. Consequently, there is a higher likelihood in Amazon Keyspaces that concurrent updates to the static column result in different static column values across different pages of the range read result set.

Partitioners

The default partitioner in Amazon Keyspaces is the
Cassandra-compatible Murmur3Partitioner. In addition, you have the choice of using either the Amazon Keyspaces DefaultPartitioner or the Cassandra-compatible RandomPartitioner.

With Amazon Keyspaces, you can safely change the partitioner for your account without having to reload your Amazon Keyspaces data. After the configuration change has completed, which takes approximately 10 minutes, clients see the new partitioner setting automatically the next time they connect. For more information, see the section called “Working with partitioners”.

Prepared statements

Amazon Keyspaces supports the use of prepared statements for data manipulation language (DML) operations, such as reading and writing data. Amazon Keyspaces does not currently support the use of prepared statements for data definition language (DDL) operations, such as creating tables and keyspaces. DDL operations must be run outside of prepared statements.

Range delete

Amazon Keyspaces supports deleting rows in a range. A range is a contiguous set of rows within a partition. You specify a range in a DELETE operation by using a WHERE clause. You can specify the range to be an entire partition. Furthermore, you can specify a range to be a subset of contiguous rows within a partition by using relational operators (for example, '>', '<'), or by including the partition key and omitting one or more clustering columns. With Amazon Keyspaces, you can delete up to 1,000 rows within a range in a single operation. Range deletes are not isolated; individual row deletions are visible to other operations while a range delete is in process.

System tables

Amazon Keyspaces populates the system tables that are required by Apache 2.0 open-source Cassandra drivers. The system tables that are visible to a client contain information that's unique to the authenticated user. The system tables are fully controlled by Amazon Keyspaces and are read-only. For more information, see the section called “System keyspaces”.

Read-only access to system tables is required, and you can control it with IAM access policies. For more information, see the section called “Managing access using policies”. You must define tag-based access control policies for system tables differently depending on whether you use the AWS SDK or Cassandra Query Language (CQL) API calls through Cassandra drivers and developer tools. To learn more about tag-based access control for system tables, see the section called “Amazon Keyspaces resource access based on tags”.

If you access Amazon Keyspaces using Amazon VPC endpoints, you see entries in the system.peers table for each Amazon VPC endpoint that Amazon Keyspaces has permissions to see.
As a result, your Cassandra driver might issue a warning message about the control node itself in the system.peers table. You can safely ignore this warning.

Timestamps

In Amazon Keyspaces, cell-level timestamps that are compatible with the default timestamps in Apache Cassandra are an opt-in feature. The USING TIMESTAMP clause and the WRITETIME function are only available when client-side timestamps are turned on for a table. To learn more about client-side timestamps in Amazon Keyspaces, see the section called “Client-side timestamps”.

User-defined types (UDTs)

The inequality operator is not supported for UDTs in Amazon Keyspaces. To learn how to work with UDTs in Amazon Keyspaces, see the section called “User-defined types (UDTs)”. To review how many UDTs are supported per keyspace, supported levels of nesting, and other default values and quotas related to UDTs, see the section called “Quotas and default values for user-defined types (UDTs) in Amazon Keyspaces”.

Supported Cassandra APIs, operations, functions, and data types

Amazon Keyspaces (for Apache Cassandra) is compatible with the Cassandra Query Language (CQL) 3.11 API (backward-compatible with version 2.x). Amazon Keyspaces supports all commonly used Cassandra data-plane operations, such as creating keyspaces and tables, reading data, and writing data. The following sections list the supported functionality.

Topics
• Cassandra API support
• Cassandra control plane API support
• Cassandra data plane API support
• Cassandra function support
• Cassandra data type support

Cassandra API support

API operation               Supported
CREATE KEYSPACE             Yes
ALTER KEYSPACE              Yes
DROP KEYSPACE               Yes
CREATE TABLE                Yes
ALTER TABLE                 Yes
DROP TABLE                  Yes
CREATE INDEX                No
DROP INDEX                  No
UNLOGGED BATCH              Yes
LOGGED BATCH                No
SELECT                      Yes
INSERT                      Yes
DELETE                      Yes
UPDATE                      Yes
USE                         Yes
CREATE TYPE                 Yes
ALTER TYPE                  No
DROP TYPE                   Yes
CREATE TRIGGER              No
DROP TRIGGER                No
CREATE FUNCTION             No
DROP FUNCTION               No
CREATE AGGREGATE            No
DROP AGGREGATE              No
CREATE MATERIALIZED VIEW    No
ALTER MATERIALIZED VIEW     No
DROP MATERIALIZED VIEW      No
TRUNCATE                    No

Cassandra control plane API support

Because Amazon Keyspaces is managed, the Cassandra control plane API operations to manage cluster and node settings are not required. As a result, the following Cassandra features are not applicable.

Feature                   Reason
Durable writes toggle     All writes are durable
Read repair settings      Not applicable
GC grace seconds          Not applicable
Bloom filter settings     Not applicable
Compaction settings       Not applicable
Compression settings      Not applicable
Caching settings          Not applicable
Security settings         Replaced by IAM

Cassandra data plane API support

Feature                                          Supported
JSON support for SELECT and INSERT statements    Yes
Static columns                                   Yes
Time to Live (TTL)                               Yes

Cassandra function support

For more information about the supported functions, see the section called “Built-in functions”.

Function                        Supported
Aggregate functions             No
Blob conversion                 Yes
Cast                            Yes
Datetime functions              Yes
Timeconversion functions        Yes
TimeUuid functions              Yes
Token                           Yes
User defined functions (UDF)    No
Uuid                            Yes

Cassandra data type support

Data type                    Supported
ascii                        Yes
bigint                       Yes
blob                         Yes
boolean                      Yes
counter                      Yes
date                         Yes
decimal                      Yes
double                       Yes
float                        Yes
frozen                       Yes
inet                         Yes
int                          Yes
list                         Yes
map                          Yes
set                          Yes
smallint                     Yes
text                         Yes
time                         Yes
timestamp                    Yes
timeuuid                     Yes
tinyint                      Yes
tuple                        Yes
user-defined types (UDTs)    Yes
uuid                         Yes
varchar                      Yes
varint                       Yes

Supported Apache Cassandra read and write consistency levels and associated costs

The topics in this section describe which Apache Cassandra consistency levels are supported for read and write operations in Amazon Keyspaces (for Apache Cassandra).

Topics
• Write consistency levels
• Read consistency levels
• Unsupported consistency levels

Write consistency levels

Amazon Keyspaces replicates all write operations three times across multiple Availability Zones for durability and high availability. Writes are durably stored before they are acknowledged using the LOCAL_QUORUM consistency level.
For each 1 KB write, you are billed 1 write capacity unit (WCU) for tables using provisioned capacity mode, or 1 write request unit (WRU) for tables using on-demand mode.

You can use cqlsh to set the consistency for all queries in the current session to LOCAL_QUORUM using the following code.

CONSISTENCY LOCAL_QUORUM;

To configure the consistency level programmatically, you can set the consistency with the appropriate Cassandra client drivers. For example, the 4.x version Java drivers allow you to set the consistency level in the app config file as shown below.

basic.request.consistency = LOCAL_QUORUM

If you're using a 3.x version Java Cassandra driver, you can specify the consistency level for the session by adding .withQueryOptions(new QueryOptions().setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM)) as shown in the following code example.

Session session = Cluster.builder()
    .addContactPoint(endPoint)
    .withPort(portNumber)
    .withAuthProvider(new SigV4AuthProvider("us-east-2"))
    .withSSL()
    .withQueryOptions(new QueryOptions().setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM))
    .build()
    .connect();

To configure the consistency level for specific write operations, you can define the consistency when you call QueryBuilder.insertInto with a setConsistencyLevel argument when you're using the Java driver.

Read consistency levels

Amazon Keyspaces supports three read consistency levels: ONE, LOCAL_ONE, and LOCAL_QUORUM. During a LOCAL_QUORUM read, Amazon Keyspaces returns a response reflecting the most recent updates from all prior successful write operations. Using the consistency level ONE or LOCAL_ONE can improve the performance and availability of your read requests, but the response might not reflect the results of a recently completed write.

For each 4 KB read using ONE or LOCAL_ONE consistency, you are billed 0.5 read capacity units (RCUs) for tables using provisioned capacity mode, or 0.5 read request units (RRUs) for tables using on-demand mode. For each 4 KB read using LOCAL_QUORUM consistency, you are billed 1 read capacity unit (RCU) for tables using provisioned capacity mode, or 1 read request unit (RRU) for tables using on-demand mode.

Billing based on read consistency and read capacity throughput mode per table for each 4 KB of reads

Consistency level    Provisioned    On-demand
ONE                  0.5 RCUs       0.5 RRUs
LOCAL_ONE            0.5 RCUs       0.5 RRUs
LOCAL_QUORUM         1 RCU          1 RRU

To specify a different consistency for read operations, call QueryBuilder.select with a setConsistencyLevel argument when you're using the Java driver.
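If you're using the Python driver rather than the Java driver, a comparable sketch sets the consistency level per statement. It assumes a connected session as in the earlier example; the keyspace and table names are illustrative.

from cassandra import ConsistencyLevel
from cassandra.query import SimpleStatement

# LOCAL_QUORUM read: reflects the most recent successful writes,
# billed at 1 RCU/RRU per 4 KB.
stmt = SimpleStatement(
    'SELECT * FROM "MyKeyspace"."WeatherData"',
    consistency_level=ConsistencyLevel.LOCAL_QUORUM)
rows = session.execute(stmt)

# LOCAL_ONE read: might not reflect a recently completed write,
# billed at 0.5 RCU/RRU per 4 KB.
stmt = SimpleStatement(
    'SELECT * FROM "MyKeyspace"."WeatherData"',
    consistency_level=ConsistencyLevel.LOCAL_ONE)
rows = session.execute(stmt)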
Unsupported consistency levels

The following consistency levels are not supported by Amazon Keyspaces and result in exceptions.

Apache Cassandra    Amazon Keyspaces
EACH_QUORUM         Not supported
QUORUM              Not supported
ALL                 Not supported
TWO                 Not supported
THREE               Not supported
ANY                 Not supported
SERIAL              Not supported
LOCAL_SERIAL        Not supported

Migrating to Amazon Keyspaces (for Apache Cassandra)

Migrating to Amazon Keyspaces (for Apache Cassandra) presents a range of compelling benefits for businesses and organizations. Here are some key advantages that make Amazon Keyspaces an attractive choice for migration.

• Scalability – Amazon Keyspaces is designed to handle massive workloads and scale seamlessly to accommodate growing data volumes and traffic. With traditional Cassandra, scaling is not performed on demand and requires planning for future peaks. With Amazon Keyspaces, you can easily scale your tables up or down based on demand, ensuring that your applications can handle sudden spikes in traffic without compromising performance.

• Performance – Amazon Keyspaces offers low-latency data access, enabling applications to retrieve and process data with exceptional speed. Its distributed architecture ensures that read and write operations are distributed across multiple nodes, delivering consistent, single-digit-millisecond response times even at high request rates.

• Fully managed – Amazon Keyspaces is a fully managed service provided by AWS. This means that AWS handles the operational aspects of database management, including provisioning, configuration, patching, backups, and scaling. This allows you to focus more on developing your applications and less on database administration tasks.

• Serverless architecture – Amazon Keyspaces is serverless. You pay only for capacity consumed, with no upfront capacity provisioning required. You don't have servers to manage or instances to choose. This pay-per-request model offers cost efficiency and minimal operational overhead, as you pay only for the resources you consume without the need to provision and monitor capacity.

• NoSQL flexibility with schema – Amazon Keyspaces follows a NoSQL data model, providing flexibility in schema design.
With Amazon Keyspaces, you can store structured, semi-structured, and unstructured data, making it well-suited for handling diverse and evolving data types. Additionally, Amazon Keyspaces performs schema validation on write, allowing for a centralized evolution of the data model. This flexibility enables faster development cycles and easier adaptation to changing business requirements.
• High availability and durability – Amazon Keyspaces replicates data across multiple Availability Zones within an AWS Region, ensuring high availability and data durability. It automatically handles replication, failover, and recovery, minimizing the risk of data loss or service disruptions. Amazon Keyspaces provides an availability SLA of up to 99.999%. For even more resiliency and low-latency local reads, Amazon Keyspaces offers multi-Region replication.
• Security and compliance – Amazon Keyspaces integrates with AWS Identity and Access Management for fine-grained access control. It provides encryption at rest and in transit, helping to improve the security of your data. Amazon Keyspaces has been assessed by third-party auditors for security and compliance with specific programs, including HIPAA, PCI DSS, and SOC, enabling you to meet regulatory requirements. For more information, see the section called “Compliance validation”.
• Integration with AWS Ecosystem – As part of the AWS ecosystem, Amazon Keyspaces seamlessly integrates with other AWS services, for example AWS CloudFormation, Amazon CloudWatch, and AWS CloudTrail. This integration enables you to build serverless architectures, leverage infrastructure as code, and create real-time data-driven applications. For more information, see Monitoring Amazon Keyspaces.

Topics
• Create a migration plan for migrating from Apache Cassandra to Amazon Keyspaces
• How to select the right tool for bulk uploading or migrating data to Amazon Keyspaces

Create a migration plan for migrating from Apache Cassandra to Amazon Keyspaces
For a successful migration from Apache Cassandra to Amazon Keyspaces, we recommend a review of the applicable migration concepts and best practices as well as a comparison of the available options. This topic outlines how the migration process works by introducing several key concepts and the tools and techniques available to you. You can evaluate the different migration strategies to select the one that best meets your requirements.

Topics
• Functional compatibility
• Estimate Amazon Keyspaces pricing
• Choose a migration strategy
• Online migration to Amazon Keyspaces: strategies and best practices
• Offline migration process: Apache Cassandra to Amazon Keyspaces
• Using a hybrid migration solution: Apache Cassandra to Amazon Keyspaces

Functional compatibility

Consider the functional differences between Apache Cassandra and Amazon Keyspaces carefully before the migration. Amazon Keyspaces supports all commonly used Cassandra data-plane operations, such as creating keyspaces and tables, reading data, and writing data. However, there are some Cassandra APIs that Amazon Keyspaces doesn't support. For more information about supported APIs, see the section called “Supported Cassandra APIs, operations, functions, and data types”. For an overview of all functional differences between Amazon Keyspaces and Apache Cassandra, see the section called “Functional differences with Apache Cassandra”.

To compare the Cassandra APIs and schema that you're using with supported functionality in Amazon Keyspaces, you can run a compatibility script available in the Amazon Keyspaces toolkit on GitHub.

How to use the compatibility script

1. Download the compatibility Python script from GitHub and move it to a location that has access to your existing Apache Cassandra cluster.
2. The compatibility script uses similar parameters as CQLSH. For --host and --port, enter the IP address and the port you use to connect and run queries to one of the Cassandra nodes in your cluster. If your Cassandra cluster uses authentication, you also need to provide --username and --password. To run the compatibility script, you can use the following command.

python toolkit-compat-tool.py --host <hostname or IP> -u "username" -p "password" --port <native transport port>

Estimate Amazon Keyspaces pricing

This section provides an overview of the information you need to gather from your Apache Cassandra tables to calculate the estimated cost for Amazon Keyspaces. Each one of your tables requires different data types, needs to support different CQL queries, and maintains distinctive read/write traffic.
Thinking of your requirements based on tables aligns with Amazon Keyspaces table-level resource isolation and read/write throughput capacity modes. With Amazon Keyspaces, you can define read/write capacity and automatic scaling policies for tables independently. Understanding table requirements helps you prioritize tables for migration based on functionality, cost, and migration effort.

Collect the following Cassandra table metrics before a migration. This information helps to estimate the cost of your workload on Amazon Keyspaces.

• Table name – The name of the fully qualified keyspace and table name.
• Description – A description of the table, for example how it's used, or what type of data is stored in it.
• Average reads per second – The average number of coordinator-level reads against the table over a large time interval.
• Average writes per second – The average number of coordinator-level writes against the table over a large time interval.
• Average row size in bytes – The average row size in bytes.
• Storage size in GBs – The raw storage size for a table.
• Read consistency breakdown – The percentage of reads that use eventual consistency (LOCAL_ONE or ONE) vs. strong consistency (LOCAL_QUORUM).

This table shows an example of the information about your tables that you need to pull together when planning a migration.
Table name           Description                           Average reads  Average writes  Average row    Storage size  Read consistency
                                                           per second     per second      size in bytes  in GBs        breakdown
mykeyspace.mytable   Used to store shopping cart history   10,000         5,000           2,200          2,000         100% LOCAL_ONE
mykeyspace.mytable2  Used to store latest profile          20,000         1,000           850            1,000         25% LOCAL_QUORUM,
                     information                                                                                       75% LOCAL_ONE

How to collect table metrics

This section provides step-by-step instructions on how to collect the necessary table metrics from your existing Cassandra cluster. These metrics include row size, table size, and read/write requests per second (RPS). They allow you to assess throughput capacity requirements for an Amazon Keyspaces table and estimate pricing.

How to collect table metrics on the Cassandra source table

1. Determine row size

Row size is important for determining the read capacity and write capacity utilization in Amazon Keyspaces. The following diagram shows the typical data distribution over a Cassandra token range.

You can use a row size sampler script available on GitHub to collect row size metrics for each table in your Cassandra cluster. The script exports table data from Apache Cassandra by using cqlsh and awk to calculate the min, max, average, and standard deviation of row size over a configurable sample set of table data. The row size sampler passes the arguments to cqlsh, so the same parameters can be used to connect and read from your Cassandra cluster. The following statement is an example of this.

./row-size-sampler.sh 10.22.33.44 9142 \
  -u "username" -p "password" --ssl

For more information on how row size is calculated in Amazon Keyspaces, see the section called “Estimate row size”.

2. Determine table size

With Amazon Keyspaces, you don't need to provision storage in advance. Amazon Keyspaces monitors the billable size of your tables continuously to determine your storage charges. Storage is billed per GB-month. Amazon Keyspaces table size is based on the raw size (uncompressed) of a single replica. To monitor the table size in Amazon Keyspaces, you can use the metric BillableTableSizeInBytes, which is displayed for each table in the AWS Management Console.

To estimate the billable size of your Amazon Keyspaces table, you can use either one of these two methods:

• Use the average row size and multiply by the number of rows.

You can estimate the size of the Amazon Keyspaces table by multiplying the average row size by the number of rows from your Cassandra source table. Use the row size sample script from the previous section to capture the average row size. To capture the row count, you can use tools like dsbulk count to determine the total number of rows in your source table.

• Use nodetool to gather table metadata.

Nodetool is an administrative tool provided in the Apache Cassandra distribution that provides insight into the state of the Cassandra process and returns table metadata. You can use nodetool to sample metadata about table size and with that extrapolate the table size in Amazon Keyspaces.
The command to use is nodetool tablestats. Tablestats returns the table's size and compression ratio. The table's size is stored as the tablelivespace for the table, and you can divide it by the compression ratio. Then multiply this size value by the number of nodes. Finally, divide by the replication factor (typically three). This is the complete formula for the calculation that you can use to assess table size.

((tablelivespace / compression ratio) * (total number of nodes)) / (replication factor)

Let's assume that your Cassandra cluster has 12 nodes. Running the nodetool tablestats command returns a tablelivespace of 200 GB and a compression ratio of 0.5. The keyspace has a replication factor of three. This is how the calculation looks for this example.

(200 GB / 0.5) * (12 nodes) / (replication factor of 3)
= 4,800 GB / 3
= 1,600 GB is the table size estimate for Amazon Keyspaces
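If you want to script this estimate, the formula translates directly into code. The following is a minimal sketch; the method name and inputs are illustrative and are not part of any AWS tooling.

// Estimate the Amazon Keyspaces table size from nodetool tablestats output.
// tableLiveSpaceGb and compressionRatio come from nodetool tablestats;
// nodeCount and replicationFactor come from your cluster topology.
static double estimateTableSizeGb(double tableLiveSpaceGb, double compressionRatio,
                                  int nodeCount, int replicationFactor) {
    return (tableLiveSpaceGb / compressionRatio) * nodeCount / replicationFactor;
}

// The example from this section: (200 GB / 0.5) * 12 nodes / RF 3 = 1,600 GB.
double estimate = estimateTableSizeGb(200.0, 0.5, 12, 3);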
3. Capture the number of reads and writes

To determine the capacity and scaling requirements for your Amazon Keyspaces tables, capture the read and write request rate of your Cassandra tables before the migration.

Amazon Keyspaces is serverless and you only pay for what you use. In general, the price of read/write throughput in Amazon Keyspaces is based on the number and size of the requests. There are two capacity modes in Amazon Keyspaces:

• On-demand – This is a flexible billing option capable of serving thousands of requests per second without the need for capacity planning. It offers pay-per-request pricing for read and write requests so that you pay only for what you use.
• Provisioned – If you choose provisioned throughput capacity mode, you specify the number of reads and writes per second that are required for your application. This helps you manage your Amazon Keyspaces usage to stay at or below a defined request rate to maintain predictability. Provisioned mode offers auto scaling to automatically adjust your provisioned rate to scale up or scale down to improve operational efficiency.

For more information about serverless resource management, see Managing serverless resources.

Because you provision read and write throughput capacity in Amazon Keyspaces separately, you need to measure the request rate for reads and writes in your existing tables independently. To gather the most accurate utilization metrics from your existing Cassandra cluster, capture the average requests per second (RPS) for coordinator-level read and write operations over an extended period of time for a table that is aggregated over all nodes in a single data center. Capturing the average RPS over a period of at least several weeks captures peaks and valleys in your traffic patterns, as shown in the following diagram.

You have two options to determine the read and write request rate of your Cassandra table.

• Use existing Cassandra monitoring

You can use the metrics shown in the following table to observe read and write requests. Note that the metric names can change based on the monitoring tool that you're using.

Dimension    Cassandra JMX metric
Writes       org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency#Count
Reads        org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency#Count

• Use nodetool

Use nodetool tablestats and nodetool info to capture average read and write operations from the table. tablestats returns the total read and write count from the time the node has been initiated. nodetool info provides the up-time for a node in seconds. To receive the per second average of reads and writes, divide the read and write count by the node up-time in seconds. Then, for reads you divide by the consistency level and for writes you divide by the replication factor. These calculations are expressed in the following formulas.

Formula for average reads per second:

((number of reads * number of nodes in cluster) / read consistency quorum (2)) / uptime

Formula for average writes per second:

((number of writes * number of nodes in cluster) / replication factor of 3) / uptime
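The following minimal sketch expresses these two formulas in code; the method names are illustrative, and the sample values match the worked example that follows.

// Average coordinator-level reads per second, derived from nodetool counts.
static double avgReadsPerSecond(long reads, int nodeCount,
                                int readConsistencyQuorum, long uptimeSeconds) {
    return ((double) reads * nodeCount / readConsistencyQuorum) / uptimeSeconds;
}

// Average coordinator-level writes per second.
static double avgWritesPerSecond(long writes, int nodeCount,
                                 int replicationFactor, long uptimeSeconds) {
    return ((double) writes * nodeCount / replicationFactor) / uptimeSeconds;
}

// Values from the example below: 12 nodes, 4 weeks (2,419,200 seconds) of up-time.
double reads = avgReadsPerSecond(2_000_000_000L, 12, 2, 2_419_200L);   // ~4,960
double writes = avgWritesPerSecond(1_000_000_000L, 12, 3, 2_419_200L); // ~1,653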
Let's assume we have a 12 node cluster that has been up for 4 weeks. nodetool info returns 2,419,200 seconds of up-time, and nodetool tablestats returns 1 billion writes and 2 billion reads. This example would result in the following calculation.

((2 billion reads * 12 nodes in cluster) / read consistency quorum (2)) / 2,419,200 seconds
= 12 billion reads / 2,419,200 seconds
= 4,960 read requests per second

((1 billion writes * 12 nodes in cluster) / replication factor of 3) / 2,419,200 seconds
= 4 billion writes / 2,419,200 seconds
= 1,653 write requests per second

4. Determine the capacity utilization of the table

To estimate the average capacity utilization, start with the average request rates and the average row size of your Cassandra source table.

Amazon Keyspaces uses read capacity units (RCUs) and write capacity units (WCUs) to measure provisioned throughput capacity for reads and writes for tables.
For this estimate, we use these units to calculate the read and write capacity needs of the new Amazon Keyspaces table after migration. Later in this topic we'll discuss how the choice between provisioned and on-demand capacity mode affects billing. But for the estimate of capacity utilization in this example, we assume that the table is in provisioned mode.

• Reads – One RCU represents one LOCAL_QUORUM read request, or two LOCAL_ONE read requests, for a row up to 4 KB in size. If you need to read a row that is larger than 4 KB, the read operation uses additional RCUs. The total number of RCUs required depends on the row size, and whether you want to use LOCAL_QUORUM or LOCAL_ONE read consistency. For example, reading an 8 KB row requires 2 RCUs using LOCAL_QUORUM read consistency, and 1 RCU if you choose LOCAL_ONE read consistency.
• Writes – One WCU represents one write for a row up to 1 KB in size. All writes use LOCAL_QUORUM consistency, and there is no additional charge for using lightweight transactions (LWTs). The total number of WCUs required depends on the row size. If you need to write a row that is larger than 1 KB, the write operation uses additional WCUs. For example, if your row size is 2 KB, you require 2 WCUs to perform one write request.

The following formulas can be used to estimate the required RCUs and WCUs.

• Read capacity in RCUs can be determined by multiplying reads per second by the number of rows read per read, multiplied by the average row size divided by 4 KB and rounded up to the nearest whole number.
• Write capacity in WCUs can be determined by multiplying the number of requests by the average row size divided by 1 KB and rounded up to the nearest whole number.

This is expressed in the following formulas.

Read requests per second * ROUNDUP((Average Row Size) / 4096 per unit) = RCUs per second
Write requests per second * ROUNDUP(Average Row Size / 1024 per unit) = WCUs per second

For example, if you're performing 4,960 read requests with a row size of 2.5 KB on your Cassandra table, you need 4,960 RCUs in Amazon Keyspaces. If you're currently performing 1,653 write requests per second with a row size of 2.5 KB on your Cassandra table, you need 4,959 WCUs per second in Amazon Keyspaces. This example is expressed in the following formulas.

4,960 read requests per second * ROUNDUP(2.5 KB / 4 KB per unit)
= 4,960 read requests per second * 1 RCU
= 4,960 RCUs

1,653 write requests per second * ROUNDUP(2.5 KB / 1 KB per unit)
= 1,653 requests per second * 3 WCUs
= 4,959 WCUs

Using eventual consistency allows you to save up to half of the throughput capacity on each read request. Each eventually consistent read can consume up to 8 KB. You can calculate eventually consistent reads by multiplying the previous calculation by 0.5 as shown in the following formula.

4,960 read requests per second * ROUNDUP(2.5 KB / 4 KB per unit) * 0.5
= 2,480 read requests per second * 1 RCU
= 2,480 RCUs
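A minimal sketch of these capacity formulas in code follows; the method names are illustrative.

// RCUs per second. Rows larger than 4 KB consume additional RCUs, and
// eventually consistent (LOCAL_ONE) reads cost half as much.
static long estimateRcus(long readsPerSecond, double avgRowSizeBytes,
                         boolean eventuallyConsistent) {
    long unitsPerRead = (long) Math.ceil(avgRowSizeBytes / 4096.0);
    double factor = eventuallyConsistent ? 0.5 : 1.0;
    return (long) Math.ceil(readsPerSecond * unitsPerRead * factor);
}

// WCUs per second; rows larger than 1 KB consume additional WCUs.
static long estimateWcus(long writesPerSecond, double avgRowSizeBytes) {
    long unitsPerWrite = (long) Math.ceil(avgRowSizeBytes / 1024.0);
    return writesPerSecond * unitsPerWrite;
}

// Values from the example above, with 2.5 KB (2,560 byte) rows.
long rcus = estimateRcus(4_960L, 2_560.0, false);   // 4,960 RCUs
long wcus = estimateWcus(1_653L, 2_560.0);          // 4,959 WCUs
long ecRcus = estimateRcus(4_960L, 2_560.0, true);  // 2,480 RCUs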
5. Calculate the monthly pricing estimate for Amazon Keyspaces

To estimate the monthly billing for the table based on read/write capacity throughput, you can calculate the pricing for on-demand and for provisioned mode using different formulas and compare the options for your table.

Provisioned mode – Read and write capacity consumption is billed at an hourly rate based on the capacity units per second. First, divide that rate by 0.7 to represent the default auto scaling target utilization of 70%. Then multiply by 30 calendar days, 24 hours per day, and the regional rate pricing. This calculation is summarized in the following formulas.

(read capacity per second / 0.7) * 24 hours * 30 days * regional rate
(write capacity per second / 0.7) * 24 hours * 30 days * regional rate

On-demand mode – Read and write capacity are billed at a per-request rate. First, multiply the request rate by 30 calendar days and 24 hours per day. Then divide by one million request units. Finally, multiply by the regional rate.
This calculation is summarized in the following formulas.

((read capacity per second * 30 * 24 * 60 * 60) / 1 Million read request units) * regional rate
((write capacity per second * 30 * 24 * 60 * 60) / 1 Million write request units) * regional rate
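The following sketch turns both billing formulas into code so you can compare the two modes side by side; the regional rates are placeholders that you would replace with the pricing for your Region.

// Monthly cost in provisioned mode, assuming the default auto scaling
// target utilization of 70%.
static double monthlyProvisionedCost(double capacityUnitsPerSecond,
                                     double regionalRatePerUnitHour) {
    return (capacityUnitsPerSecond / 0.7) * 24 * 30 * regionalRatePerUnitHour;
}

// Monthly cost in on-demand mode, billed per million request units.
static double monthlyOnDemandCost(double requestsPerSecond,
                                  double regionalRatePerMillionRequests) {
    return (requestsPerSecond * 30 * 24 * 60 * 60) / 1_000_000.0
            * regionalRatePerMillionRequests;
}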
Choose a migration strategy

You can choose between the following migration strategies when migrating from Apache Cassandra to Amazon Keyspaces:

• Online – This is a live migration using dual writes to start writing new data to Amazon Keyspaces and the Cassandra cluster simultaneously. This migration type is recommended for applications that require zero downtime during migration and read after write consistency. For more information about how to plan and implement an online migration strategy, see the section called “Online migration”.
• Offline – This migration technique involves copying a data set from Cassandra to Amazon Keyspaces during a downtime window. Offline migration can simplify the migration process, because it doesn't require changes to your application or conflict resolution between historical data and new writes. For more information about how to plan an offline migration, see the section called “Offline migration”.
• Hybrid – This migration technique allows for changes to be replicated to Amazon Keyspaces in near real time, but without read after write consistency. For more information about how to plan a hybrid migration, see the section called “Hybrid migration”.

After reviewing the migration techniques and best practices discussed in this topic, you can place the available options in a decision tree to design a migration strategy based on your requirements and available resources.

Online migration to Amazon Keyspaces: strategies and best practices

If you need to maintain application availability during a migration from Apache Cassandra to Amazon Keyspaces, you can prepare a custom online migration strategy by implementing the key components discussed in this topic. By following these best practices for online migrations, you can ensure that application availability and read-after-write consistency are maintained during the entire migration process, minimizing the impact on your users.

When designing an online migration strategy from Apache Cassandra to Amazon Keyspaces, you need to consider the following key steps.

1. Writing new data
• Application dual-writes: You can implement dual writes in your application using existing Cassandra client libraries and drivers. Designate one database as the leader and the other as the follower. Write failures to the follower database are recorded in a dead letter queue (DLQ) for analysis.
• Messaging tier dual-writes: Alternatively, you can configure your existing messaging platform to send writes to both Cassandra and Amazon Keyspaces using an additional consumer. This creates eventually consistent views across both databases.

2. Migrating historical data
• Copy historical data: You can migrate historical data from Cassandra to Amazon Keyspaces using AWS Glue or custom extract, transform, and load (ETL) scripts. Handle conflict resolution between dual writes and bulk loads using techniques like lightweight transactions or timestamps.
• Use Time-To-Live (TTL): For shorter data retention periods, you can use TTL in both Cassandra and Amazon Keyspaces to avoid uploading unnecessary historical data. As old data expires in Cassandra and new data is written via dual-writes, Amazon Keyspaces eventually catches up.

3. Validating data
• Dual reads: Implement dual reads from both the Cassandra (primary) and Amazon Keyspaces (secondary) databases, comparing results asynchronously. Differences are logged or sent to a DLQ.
• Sample reads: Use Lambda functions to periodically sample and compare data across both systems, logging any discrepancies to a DLQ.

4. Migrating the application
• Blue-green strategy: Switch your application to treat Amazon Keyspaces as the primary and Cassandra as the secondary data store in a single step. Monitor performance and roll back if issues arise.
• Canary deployment: Gradually roll out the migration to a subset of users first, incrementally increasing traffic to Amazon Keyspaces as the primary until fully migrated.

5. Decommissioning Cassandra
Once your application is fully migrated to Amazon Keyspaces and data consistency is validated, you can plan to decommission your Cassandra cluster based on data retention policies.
By planning an online migration strategy with these components, you can transition smoothly to the fully managed Amazon Keyspaces service with minimal downtime or disruption. The following sections go into each component in more detail.

Topics
• Writing new data during an online migration
• Uploading historical data during an online migration
• Validating data consistency during an online migration
• Migrating the application during an online migration
• Decommissioning Cassandra after an online migration

Writing new data during an online migration

The first step in an online migration plan is to ensure that any new data written by the application is stored in both databases, your existing Cassandra cluster and Amazon Keyspaces. The goal is to provide a consistent view across the two data stores. You can do this by applying all new writes to both databases. To implement dual writes, consider one of the following two options.

• Application dual writes – You can implement dual writes with minimal changes to your application code by leveraging the existing Cassandra client libraries and drivers. You can either implement dual writes in your existing application, or create a new layer in the architecture to handle dual writes. For more information and a customer case study that shows how dual writes were implemented in an existing application, see Cassandra migration case study.

When implementing dual writes, you can designate one database as the leader and the other database as the follower. This allows you to keep writing to your original source, or leader database, without letting write failures to the follower, or destination database, disrupt the critical path of your application. Instead of retrying failed writes to the follower, you can use Amazon Simple Queue Service to record failed writes in a dead letter queue (DLQ). The DLQ lets you analyze the failed writes to the follower and determine why processing did not succeed in the destination database. (A minimal code sketch of this pattern follows this list.)

For a more sophisticated dual write implementation, you can follow AWS best practices for designing a sequence of local transactions using the saga pattern. A saga pattern ensures that if a transaction fails, the saga runs compensating transactions to revert the database changes made by the previous transactions. When using dual writes for an online migration, you can configure the dual writes following the saga pattern so that each write is a local transaction to ensure atomic operations across heterogeneous databases. For more information about designing distributed applications using recommended design patterns for the AWS Cloud, see Cloud design patterns, architectures, and implementations.

• Messaging tier dual writes – Instead of implementing dual writes at the application layer, you can use your existing messaging tier to perform dual writes to Cassandra and Amazon Keyspaces. To do this, you can configure an additional consumer for your messaging platform to send writes to both data stores. This approach provides a simple, low-code strategy that uses the messaging tier to create two views across both databases that are eventually consistent.
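The following is a minimal sketch of the application-level dual-write pattern with an SQS dead letter queue, assuming 3.x driver sessions for both clusters. The class, queue URL, and variable names are hypothetical.

import com.datastax.driver.core.Session;
import com.datastax.driver.core.Statement;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

// Leader/follower dual writes: the leader write stays on the critical path,
// while a follower failure is recorded in an SQS dead letter queue instead
// of failing the request.
public class DualWriter {
    private final Session leader;    // existing Cassandra cluster
    private final Session follower;  // Amazon Keyspaces
    private final SqsClient sqs;
    private final String dlqUrl;     // hypothetical DLQ URL

    public DualWriter(Session leader, Session follower,
                      SqsClient sqs, String dlqUrl) {
        this.leader = leader;
        this.follower = follower;
        this.sqs = sqs;
        this.dlqUrl = dlqUrl;
    }

    public void write(Statement statement) {
        leader.execute(statement); // a leader failure still fails the request
        try {
            follower.execute(statement);
        } catch (RuntimeException e) {
            // Record the failed follower write for later analysis and replay.
            sqs.sendMessage(SendMessageRequest.builder()
                    .queueUrl(dlqUrl)
                    .messageBody(statement.toString())
                    .build());
        }
    }
}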
Uploading historical data during an online migration

After implementing dual writes to ensure that new data is written to both data stores in real time, the next step in the migration plan is to evaluate how much historical data you need to copy or bulk upload from Cassandra to Amazon Keyspaces. This ensures that both new data and historical data are going to be available in the new Amazon Keyspaces database before you migrate the application.

Depending on your data retention requirements, for example how much historical data you need to preserve based on your organization's policies, you can consider one of the following two options.

• Bulk upload of historical data – The migration of historical data from your existing Cassandra deployment to Amazon Keyspaces can be achieved through various techniques, for example using AWS Glue or custom scripts to extract, transform, and load (ETL) the data. For more information about using AWS Glue to upload historical data, see the section called “Offline migration”.
When planning the bulk upload of historical data, you need to consider how to resolve conflicts that can occur when new writes are trying to update the same data that is in the process of being uploaded. The bulk upload is expected to be eventually consistent, which means the data is going to reach all nodes eventually. If an update of the same data occurs at the same time due to a new write, you want to ensure that it's not going to be overwritten by the historical data upload. To ensure that you preserve the latest updates to your data even during the bulk import, you must add conflict resolution either into the bulk upload scripts or into the application logic for dual writes. For example, you can use lightweight transactions (LWT), described in the section called “Lightweight transactions”, for compare-and-set operations. To do this, you can add an additional field to your data model that represents the time of modification or state. Additionally, Amazon Keyspaces supports the Cassandra WRITETIME timestamp function. You can use Amazon Keyspaces client-side timestamps to preserve source database timestamps and implement last-writer-wins conflict resolution. For more information, see the section called “Client-side timestamps”.

• Using Time-to-Live (TTL) – For data retention periods shorter than 30, 60, or 90 days, you can use TTL in Cassandra and Amazon Keyspaces during migration to avoid uploading unnecessary historical data to Amazon Keyspaces. TTL allows you to set a time period after which the data is automatically removed from the database. During the migration phase, instead of copying historical data to Amazon Keyspaces, you can configure the TTL settings to let the historical data expire automatically in the old system (Cassandra) while only applying the new writes to Amazon Keyspaces using the dual-write method. Over time, with old data continually expiring in the Cassandra cluster and new data written using the dual-write method, Amazon Keyspaces automatically catches up to contain the same data as Cassandra.

This approach can significantly reduce the amount of data to be migrated, resulting in a more efficient and streamlined migration process. You can consider this approach when dealing with large datasets with varying data retention requirements. For more information about TTL, see the section called “Expire data with Time to Live”.

Consider the following example of a migration from Cassandra to Amazon Keyspaces using TTL data expiration. In this example, we set TTL for both databases to 60 days and show how the migration process progresses over a period of 90 days. Both databases receive the same newly written data during this period using the dual writes method. We're going to look at three different phases of the migration, each phase 30 days long. How the migration process works for each phase is shown in the following images.

1. After the first 30 days, the Cassandra cluster and Amazon Keyspaces have been receiving new writes. The Cassandra cluster also contains historical data that has not yet reached 60 days of retention, which makes up 50% of the data in the cluster. Data that is older than 60 days is being automatically deleted in the Cassandra cluster using TTL. At this point, Amazon Keyspaces contains 50% of the data stored in the Cassandra cluster, which is made up of the new writes minus the historical data.
2. After 60 days, both the Cassandra cluster and Amazon Keyspaces contain the same data written in the last 60 days.
3. Within 90 days, both Cassandra and Amazon Keyspaces contain the same data and are expiring data at the same rate.

This example illustrates how to avoid the step of uploading historical data by using TTL with an expiration date set to 60 days.

Validating data consistency during an online migration

The next step in the online migration process is data validation. Dual writes are adding new data to your Amazon Keyspaces database, and you have completed the migration of historical data either using bulk upload or data expiration with TTL. Now you can use the validation phase to confirm that both data stores in fact contain the same data and return the same read results.
You can choose from one of the following two options to validate that both your databases contain identical data.

• Dual reads – To validate that both the source and the destination database contain the same set of newly written and historical data, you can implement dual reads. To do so, you read from both your primary Cassandra and your secondary Amazon Keyspaces database similarly to the dual writes method and compare the results asynchronously. The results from the primary database are returned to the client, and the results from the secondary database are used to validate against the primary result set. Differences found can be logged or sent to a dead letter queue (DLQ) for later reconciliation. (A minimal code sketch of this pattern follows this list.) In the following diagram, the application is performing a synchronous read from Cassandra, which is the primary data store, and an asynchronous read from Amazon Keyspaces, which is the secondary data store.
• Sample reads – An alternative solution that doesn't require application code changes is to use AWS Lambda functions to periodically and randomly sample data from both the source Cassandra cluster and the destination Amazon Keyspaces database. These Lambda functions can be configured to run at regular intervals. The Lambda function retrieves a random subset of data from both the source and destination systems, and then performs a comparison of the sampled data. Any discrepancies or mismatches between the two datasets can be recorded and sent to a dedicated dead letter queue (DLQ) for later reconciliation. This process is illustrated in the following diagram.
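Here is a minimal sketch of the dual-read comparison, again with hypothetical names; the primary result is returned immediately while the secondary read and comparison happen off the request path.

import java.util.List;
import java.util.concurrent.CompletableFuture;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.Statement;

// Return the primary (Cassandra) result to the caller and compare it
// against the secondary (Amazon Keyspaces) result asynchronously.
static List<Row> dualRead(Session primary, Session secondary, Statement query) {
    List<Row> primaryRows = primary.execute(query).all();
    CompletableFuture.runAsync(() -> {
        List<Row> secondaryRows = secondary.execute(query).all();
        // Crude comparison for illustration; production code would diff
        // rows by primary key and send mismatches to a DLQ.
        if (!primaryRows.toString().equals(secondaryRows.toString())) {
            reportMismatch(query, primaryRows, secondaryRows);
        }
    });
    return primaryRows;
}

// Hypothetical helper: log the difference or send it to a DLQ.
static void reportMismatch(Statement query, List<Row> expected, List<Row> actual) {
    System.err.println("Mismatch for query: " + query);
}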
You only proceed to the decommissioning phase of the migration if Amazon Keyspaces is meeting all your needs as the primary data store. • Canary strategy – In this approach, you gradually roll out the migration to a subset of your users or traffic. Initially, a small percentage of your application's traffic, for example 5% of all traffic is routed to the version using Amazon Keyspaces as the primary data store, while the rest of the traffic continues to use Cassandra as the primary data store. This allows you to thoroughly test the migrated version with real-world traffic and monitor its performance, stability, and investigate potential issues. If you don't detect any issues, you can incrementally increase the percentage of traffic routed to Amazon Keyspaces until it becomes the primary data store for all users and traffic. This staged roll out minimizes the risk of widespread service disruptions and allows for a more controlled migration process. If any critical issues arise during the canary deployment, you can quickly roll back to the previous version using Cassandra as the primary data store for the affected traffic segment. You only proceed to the decommissioning phase of the migration after you have validated that Amazon Keyspaces processes 100% of your users and traffic as expected. The following diagram illustrates the individual steps of the canary strategy. Online migration 45 Amazon Keyspaces (for Apache Cassandra) Developer Guide Decommissioning Cassandra after an online migration After the application migration is complete with your application is fully running on Amazon Keyspaces and you have validated data consistency |
If any critical issues arise during the canary deployment, you can quickly roll back to the previous version using Cassandra as the primary data store for the affected traffic segment. You only proceed to the decommissioning phase of the migration after you have validated that Amazon Keyspaces processes 100% of your users and traffic as expected. The following diagram illustrates the individual steps of the canary strategy.

Decommissioning Cassandra after an online migration

After the application migration is complete, your application is fully running on Amazon Keyspaces, and you have validated data consistency over a period of time, you can plan to decommission your Cassandra cluster. During this phase, you can evaluate if the data remaining in your Cassandra cluster needs to be archived or can be deleted. This depends on your organization's policies for data handling and retention.

By following this strategy and considering the recommended best practices described in this topic when planning your online migration from Cassandra to Amazon Keyspaces, you can ensure a seamless transition to Amazon Keyspaces while maintaining read-after-write consistency and availability of your application. Migrating from Apache Cassandra to Amazon Keyspaces can provide numerous benefits, including reduced operational overhead, automatic scaling, improved security, and a framework that helps you to reach your compliance goals. By planning an online migration strategy with dual writes, historical data upload, data validation, and a gradual rollout, you can ensure a smooth transition with minimal disruption to your application and its users. Implementing the online migration strategy discussed in this topic allows you to validate the migration results, identify and address any issues, and ultimately decommission your existing Cassandra deployment in favor of the fully managed Amazon Keyspaces service.

Offline migration process: Apache Cassandra to Amazon Keyspaces

Offline migrations are suitable when you can afford downtime to perform the migration. It's common among enterprises to have maintenance windows for patching, large releases, or downtime for hardware upgrades or major upgrades. Offline migration can use this window to copy data and switch over the application traffic from Apache Cassandra to Amazon Keyspaces. Offline migration reduces modifications to the application because it doesn't require communication to both Cassandra and Amazon Keyspaces simultaneously. Additionally, with the data flow paused, the exact state can be copied without maintaining mutations.

In this example, we use Amazon Simple Storage Service (Amazon S3) as a staging area for data during the offline migration to minimize downtime. You can automatically import the data you stored in Parquet format in Amazon S3 into an Amazon Keyspaces table using the Spark Cassandra connector and AWS Glue. The following section shows a high-level overview of the process. You can find code examples for this process on GitHub.

The offline migration process from Apache Cassandra to Amazon Keyspaces using Amazon S3 and AWS Glue requires the following AWS Glue jobs.

1. An ETL job that extracts and transforms CQL data and stores it in an Amazon S3 bucket.
2. A second job that imports the data from the bucket to Amazon Keyspaces.
3. A third job to import incremental data.

How to perform an offline migration to Amazon Keyspaces from Cassandra running on Amazon EC2 in an Amazon Virtual Private Cloud

1. First, you use AWS Glue to export table data from Cassandra in Parquet format and save it to an Amazon S3 bucket. You need to run an AWS Glue job using an AWS Glue connector to a VPC where the Amazon EC2 instance running Cassandra resides. Then, using the Amazon S3 private endpoint, you can save data to the Amazon S3 bucket. The following diagram illustrates these steps.
2. Shuffle the data in the Amazon S3 bucket to improve data randomization. Evenly imported data allows for more distributed traffic in the target table. This step is required when exporting data from Cassandra with large partitions (partitions with more than 1,000 rows) to avoid hot key patterns when inserting the data into Amazon Keyspaces. Hot key issues cause WriteThrottleEvents in Amazon Keyspaces and result in increased load time.
3. Use another AWS Glue job to import data from the Amazon S3 bucket into Amazon Keyspaces. The shuffled data in the Amazon S3 bucket is stored in Parquet format.

For more information about the offline migration process, see the workshop Amazon Keyspaces with AWS Glue.

Using a hybrid migration solution: Apache Cassandra to Amazon Keyspaces

The following migration solution can be considered a hybrid between online and offline migration. With this hybrid approach, data is written to the destination database in near real time without providing read after write consistency. This means that newly written data won't be immediately available and delays are to be expected. If you need read after write consistency, see the section called “Online migration”.

For a near real time migration from Apache Cassandra to Amazon Keyspaces, you can choose between two available methods.

• CQLReplicator – (Recommended) CQLReplicator is an open source utility available on GitHub that helps you to migrate data from Apache Cassandra to Amazon Keyspaces in near real time. To determine the writes and updates to propagate to the destination database, CQLReplicator scans the Apache Cassandra token range and uses an AWS Glue job to remove duplicate events and apply writes and updates directly to Amazon Keyspaces.
• Change data capture (CDC) – If you are familiar with Cassandra CDC, the Apache Cassandra built-in CDC feature that allows capturing changes by copying the commit log to a separate CDC directory is another option for implementing a hybrid migration. You can do this by replicating the data changes to Amazon Keyspaces, making CDC an alternative option for data migration scenarios.

If you don't need read after write consistency, you can use either CQLReplicator or a CDC pipeline to migrate data from Apache Cassandra to Amazon Keyspaces based on your preferences and familiarity with the tools and AWS services used in each solution. Using these methods to migrate data in near real time can be considered a hybrid approach to migration that offers an alternative to online migration. This strategy is considered a hybrid approach because, in addition to the options outlined in this topic, you have to implement some steps of the online migration process, for example the historical data copy and the application migration strategies discussed in the online migration topic.

The following sections go over the hybrid migration options in more detail.

Topics
• Migrate data using CQLReplicator
• Migrate data using change data capture (CDC)

Migrate data using CQLReplicator

With CQLReplicator, you can read data from Apache Cassandra in near real time by intelligently scanning the Cassandra token ring using CQL queries. CQLReplicator doesn't use Cassandra CDC and instead implements a caching strategy to reduce the performance penalties of full scans. To reduce the number of writes to the destination, CQLReplicator automatically removes duplicate replication events. With CQLReplicator, you can tune the replication of changes from the source database to the destination database, allowing for a near real time migration of data from Apache Cassandra to Amazon Keyspaces. The following diagram shows the typical architecture of a CQLReplicator job using AWS Glue.
1. To allow access to Apache Cassandra running in a private VPC, configure an AWS Glue connection with the connection type Network.
2. To remove duplicates and enable key caching with the CQLReplicator job, configure Amazon Simple Storage Service (Amazon S3).
3. The CQLReplicator job streams verified source database changes directly to Amazon Keyspaces.

For more information about the migration process using CQLReplicator, see the post Migrate Cassandra workloads to Amazon Keyspaces using CQLReplicator on the AWS Database blog and the AWS prescriptive guidance Migrate Apache Cassandra workloads to Amazon Keyspaces by using AWS Glue.

Migrate data using change data capture (CDC)

If you're already familiar with configuring a change data capture (CDC) pipeline with Debezium, you can use this option to migrate data to Amazon Keyspaces as an alternative to using CQLReplicator. Debezium is an open-source, distributed platform for CDC, designed to monitor a database and capture row-level changes reliably.
The Debezium connector for Apache Cassandra uploads changes to Amazon Managed Streaming for Apache Kafka (Amazon MSK) so that they can be consumed and processed by downstream consumers, which in turn write the data to Amazon Keyspaces. For more information, see Guidance for continuous data migration from Apache Cassandra to Amazon Keyspaces. To address any potential data consistency issues, you can implement a process with Amazon MSK where a consumer compares the keys or partitions in Cassandra with those in Amazon Keyspaces.

To implement this solution successfully, we recommend considering the following.

• How to parse the CDC commit log, for example how to remove duplicate events.
• How to maintain the CDC directory, for example how to delete old logs.
• How to handle partial failures in Apache Cassandra, for example if a write only succeeds in one out of three replicas.
• How to handle resource allocation, for example increasing the size of the instance to account for the additional CPU, memory, disk, and I/O requirements of the CDC process that occurs on a node.

This pattern treats changes from Cassandra as a "hint" that a key may have changed from its previous state. To determine if there are changes to propagate to the destination database, you must first read from the source Cassandra cluster using a LOCAL_QUORUM operation to receive the latest records and then write them to Amazon Keyspaces. In the case of range deletes or range updates, you may need to perform a comparison against the entire partition to determine which write or update events need to be written to your destination database. In cases where writes are not idempotent, you also need to compare your writes with what is already in the destination database before writing to Amazon Keyspaces. The following diagram shows the typical architecture of a CDC pipeline using Debezium and Amazon MSK.

How to select the right tool for bulk uploading or migrating data to Amazon Keyspaces

In this section, you can review the different tools that you can use to bulk upload or migrate data to Amazon Keyspaces, and learn how to select the correct tool based on your needs. In addition, this section provides an overview and use cases of the available step-by-step tutorials that demonstrate how to import data into Amazon Keyspaces. To review the available strategies to migrate workloads from Apache Cassandra to Amazon Keyspaces, see the section called “Migrating from Cassandra”.

Migration tools
• For large migrations, consider using an extract, transform, and load (ETL) tool. You can use AWS Glue to quickly and effectively perform data transformation migrations. For more information, see the section called “Offline migration”.
• CQLReplicator – CQLReplicator is an open source utility available on GitHub that helps you to migrate data from Apache Cassandra to Amazon Keyspaces in near real time. For more information, see the section called “CQLReplicator”.
• To learn more about how to use Amazon Managed Streaming for Apache Kafka to implement an online migration process with dual-writes, see Guidance for continuous data migration from Apache Cassandra to Amazon Keyspaces.
• To learn how to use the Apache Cassandra Spark connector to write data to Amazon Keyspaces, see the section called “Integrating with Apache Spark”.
• Get started quickly with loading data into Amazon Keyspaces by using the cqlsh COPY FROM command. cqlsh is included with Apache Cassandra and is best suited for loading small datasets or test data. For step-by-step instructions, see the section called “Loading data using cqlsh”.
• You can also use the DataStax Bulk Loader for Apache Cassandra to load data into Amazon Keyspaces using the dsbulk command. DSBulk provides more robust import capabilities than cqlsh and is available from the GitHub repository. For step-by-step instructions, see the section called “Loading data using DSBulk”.

General considerations for data uploads to Amazon Keyspaces

• Break the data upload down into smaller components.

Consider the following units of migration and their potential footprint in terms of raw data size.
Topics
• Tutorial: Loading data into Amazon Keyspaces using cqlsh
• Tutorial: Loading data into Amazon Keyspaces using DSBulk

Tutorial: Loading data into Amazon Keyspaces using cqlsh

This tutorial guides you through the process of migrating data from Apache Cassandra to Amazon Keyspaces using the cqlsh COPY FROM command. The cqlsh COPY FROM command is useful to quickly and easily upload small datasets to Amazon Keyspaces for academic or test purposes. For more information about how to migrate production workloads, see the section called “Offline migration”.

In this tutorial, you'll complete the following steps:

Prerequisites – Set up an AWS account with credentials, create a JKS trust store file for the certificate, and configure cqlsh to connect to Amazon Keyspaces.
1. Create source CSV and target table – Prepare a CSV file as the source data and create the target keyspace and table in Amazon Keyspaces.
2. Prepare the data – Randomize the data in the CSV file and analyze it to determine the average and maximum row sizes.
3. Set throughput capacity – Calculate the required write capacity units (WCUs) based on the data size and desired load time, and configure the table's provisioned capacity.
4. Configure cqlsh parameters – Determine optimal values for cqlsh COPY FROM parameters like INGESTRATE, NUMPROCESSES, MAXBATCHSIZE, and CHUNKSIZE to distribute the workload evenly.
5. Run the cqlsh COPY FROM command – Run the cqlsh COPY FROM command to upload the data from the CSV file to the Amazon Keyspaces table, and monitor the progress.
Troubleshooting – Resolve common issues like invalid requests, parser errors, capacity errors, and cqlsh errors during the data upload process.

Topics
• Prerequisites: Steps to complete before you can upload data using cqlsh COPY FROM
• Step 1: Create the source CSV file and a target table for the data upload
• Step 2: Prepare the source data for a successful data upload
• Step 3: Set throughput capacity for the table
• Step 4: Configure cqlsh COPY FROM settings
• Step 5: Run the cqlsh COPY FROM command to upload data from the CSV file to the target table
• Troubleshooting

Prerequisites: Steps to complete before you can upload data using cqlsh COPY FROM

You must complete the following tasks before you can start this tutorial.
1. If you have not already done so, sign up for an AWS account by following the steps at the section called “Setting up AWS Identity and Access Management”.
2. Create service-specific credentials by following the steps at the section called “Create service-specific credentials”.
3. Set up the Cassandra Query Language shell (cqlsh) connection and confirm that you can connect to Amazon Keyspaces by following the steps at the section called “Using cqlsh”.

Step 1: Create the source CSV file and a target table for the data upload

For this tutorial, we use a comma-separated values (CSV) file with the name keyspaces_sample_table.csv as the source file for the data migration. The provided sample file contains a few rows of data for a table with the name book_awards.

1. Create the source file. You can choose one of the following options:
• Download the sample CSV file (keyspaces_sample_table.csv) contained in the following archive file samplemigration.zip. Unzip the archive and take note of the path to keyspaces_sample_table.csv.
• To populate a CSV file with your own data stored in an Apache Cassandra database, you can populate the source CSV file by using the cqlsh COPY TO statement as shown in the following example.

cqlsh localhost 9042 -u "username" -p "password" --execute "COPY mykeyspace.mytable TO 'keyspaces_sample_table.csv' WITH HEADER=true"

Make sure the CSV file you create meets the following requirements:
• The first row contains the column names.
• The column names in the source CSV file match the column names in the target table.
• The data is delimited with a comma.
• All data values are valid Amazon Keyspaces data types. See the section called “Data types”.
2. Create the target keyspace and table in Amazon Keyspaces.
a. Connect to Amazon Keyspaces using cqlsh, replacing the service endpoint, user name, and password in the following example with your own values.

cqlsh cassandra.us-east-2.amazonaws.com 9142 -u "111122223333" -p "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" --ssl

b. Create a new keyspace with the name catalog as shown in the following example.

CREATE KEYSPACE catalog WITH REPLICATION = {'class': 'SingleRegionStrategy'};

c. When the new keyspace is available, use the following code to create the target table book_awards.

CREATE TABLE catalog.book_awards (
   year int,
   award text,
   rank int,
   category text,
   book_title text,
   author text,
   publisher text,
   PRIMARY KEY ((year, award), category, rank)
);

If Apache Cassandra is your original data source, a simple way to create the Amazon Keyspaces target table with matching headers is to generate the CREATE TABLE statement from the source table, as shown in the following statement.

cqlsh localhost 9042 -u "username" -p "password" --execute "DESCRIBE TABLE mykeyspace.mytable;"

Then create the target table in Amazon Keyspaces with the column names and data types matching the description from the Cassandra source table.
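Before you upload, it can help to sanity-check the source file against the requirements from step 1. The following Python sketch is not part of the tutorial; it assumes the book_awards column list shown above and only verifies the header row and the per-row field counts.

import csv

# Columns of the target table; book_awards is the table used in this tutorial.
expected_columns = ["year", "award", "rank", "category", "book_title", "author", "publisher"]

with open("keyspaces_sample_table.csv", newline="") as f:
    reader = csv.reader(f, delimiter=",")
    header = next(reader)
    missing = set(expected_columns) - set(header)
    if missing:
        raise SystemExit(f"Header is missing columns: {sorted(missing)}")
    for line_number, row in enumerate(reader, start=2):
        if len(row) != len(header):
            raise SystemExit(f"Line {line_number} has {len(row)} fields, expected {len(header)}")

print("CSV header and field counts look consistent.")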
Step 2: Prepare the source data for a successful data upload

Preparing the source data for an efficient transfer is a two-step process. First, you randomize the data. In the second step, you analyze the data to determine the appropriate cqlsh parameter values and required table settings to ensure that the data upload is successful.

Randomize the data

The cqlsh COPY FROM command reads and writes data in the same order that it appears in the CSV file. If you use the cqlsh COPY TO command to create the source file, the data is written in key-sorted order in the CSV. Internally, Amazon Keyspaces partitions data using partition keys. Although Amazon Keyspaces has built-in logic to help load balance requests for the same partition key, loading the data is faster and more efficient if you randomize the order. This is because you can take advantage of the built-in load balancing that occurs when Amazon Keyspaces is writing to different partitions.

To spread the writes across the partitions evenly, you must randomize the data in the source file. You can write an application to do this or use an open-source tool, such as Shuf. Shuf is freely available on Linux distributions, on macOS (by installing coreutils in Homebrew), and on Windows (by using Windows Subsystem for Linux (WSL)). One extra step is required to prevent the header row with the column names from getting shuffled in this step.

To randomize the source file while preserving the header, enter the following code.

tail -n +2 keyspaces_sample_table.csv | shuf -o keyspace.table.csv && (head -1 keyspaces_sample_table.csv && cat keyspace.table.csv) > keyspace.table.csv1 && mv keyspace.table.csv1 keyspace.table.csv

Shuf rewrites the data to a new CSV file called keyspace.table.csv. You can now delete the keyspaces_sample_table.csv file—you no longer need it.

Analyze the data

Determine the average and maximum row size by analyzing the data. You do this for the following reasons:
• The average row size helps to estimate the total amount of data to be transferred.
• You need the average row size to provision the write capacity needed for the data upload.
• You can make sure that each row is less than 1 MB in size, which is the maximum row size in Amazon Keyspaces.

Note
This quota refers to row size, not partition size. Unlike Apache Cassandra partitions, Amazon Keyspaces partitions can be virtually unbound in size. Partition keys and clustering columns require additional storage for metadata, which you must add to the raw size of rows. For more information, see the section called “Estimate row size”.

The following code uses AWK to analyze a CSV file and print the average and maximum row size based on a sample of 10,000 rows.

awk -F, 'BEGIN {samp=10000;max=-1;}{if(NR>1){len=length($0);t+=len;avg=t/NR;max=(len>max ? len : max)}}NR==samp{exit}END{printf("{lines: %d, average: %d bytes, max: %d bytes}\n",NR,avg,max);}' keyspace.table.csv

Running this code results in the following output.

{lines: 10000, average: 123 bytes, max: 225 bytes}

You use the average row size in the next step of this tutorial to provision the write capacity for the table.
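If you prefer to scan the entire file instead of a 10,000-row sample, a short Python script can produce the same statistics and flag rows that approach the quota. This is an illustrative sketch, not part of the tutorial; it measures raw CSV line length, which understates the final row size because the metadata for partition keys and clustering columns isn't included.

MAX_ROW_BYTES = 1_000_000  # approximation of the 1 MB row size quota

total = count = max_len = 0
with open("keyspace.table.csv", "rb") as f:
    next(f)  # skip the header row
    for line_number, line in enumerate(f, start=2):
        length = len(line.rstrip(b"\r\n"))
        total += length
        count += 1
        max_len = max(max_len, length)
        if length > MAX_ROW_BYTES:
            print(f"Line {line_number} is {length} bytes and exceeds the quota")

print(f"{{lines: {count}, average: {total // count} bytes, max: {max_len} bytes}}")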
Step 3: Set throughput capacity for the table

This tutorial shows you how to tune cqlsh to load data within a set time range. Because you know how many reads and writes you perform in advance, use provisioned capacity mode. After you finish the data transfer, you should set the capacity mode of the table to match your application’s traffic patterns. To learn more about capacity management, see Managing serverless resources.

With provisioned capacity mode, you specify how much read and write capacity you want to provision to your table in advance. Write capacity is billed hourly and metered in write capacity units (WCUs). Each WCU is enough write capacity to support writing 1 KB of data per second. When you load the data, the write rate must be under the max WCUs (parameter: write_capacity_units) that are set on the target table.

By default, you can provision up to 40,000 WCUs to a table and 80,000 WCUs across all the tables in your account. If you need additional capacity, you can request a quota increase in the Service Quotas console. For more information about quotas, see Quotas.

Calculate the average number of WCUs required for an insert

Inserting 1 KB of data per second requires 1 WCU. If your CSV file has 360,000 rows and you want to load all the data in 1 hour, you must write 100 rows per second (360,000 rows / 60 minutes / 60 seconds = 100 rows per second). If each row has up to 1 KB of data, to insert 100 rows per second, you must provision 100 WCUs to your table. If each row has 1.5 KB of data, you need two WCUs to insert one row per second. Therefore, to insert 100 rows per second, you must provision 200 WCUs.

To determine how many WCUs you need to insert one row per second, divide the average row size in bytes by 1024 and round up to the nearest whole number. For example, if the average row size is 3000 bytes, you need three WCUs to insert one row per second:

ROUNDUP(3000 / 1024) = ROUNDUP(2.93) = 3 WCUs
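The same arithmetic can be scripted. The following Python sketch uses the example values from this step (a 3,000-byte average row size and a 360,000-row file loaded in one hour); substitute the numbers from your own analysis.

import math

avg_row_bytes = 3000      # example average row size from the analysis step
rows = 360_000            # example number of rows in the CSV file
load_time_seconds = 3600  # target load time of 1 hour

wcus_per_row = math.ceil(avg_row_bytes / 1024)   # WCUs to insert one row per second
rows_per_second = rows / load_time_seconds       # required insert rate
required_wcus = math.ceil(rows_per_second * wcus_per_row)

print(f"{wcus_per_row} WCUs per row, {required_wcus} WCUs for a {load_time_seconds}s load")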
Calculate data load time and capacity

Now that you know the average size and number of rows in your CSV file, you can calculate how many WCUs you need to load the data in a given amount of time, and the approximate time it takes to load all the data in your CSV file using different WCU settings. For example, if each row in your file is 1 KB and you have 1,000,000 rows in your CSV file, to load the data in 1 hour, you need to provision at least 278 WCUs to your table for that hour:

1,000,000 rows * 1 KB = 1,000,000 KB
1,000,000 KB / 3600 seconds = 277.8 KB / second = 278 WCUs

Configure provisioned capacity settings

You can set a table’s write capacity settings when you create the table or by using the ALTER TABLE CQL command. The following is the syntax for altering a table’s provisioned capacity settings with the ALTER TABLE CQL statement.

ALTER TABLE mykeyspace.mytable WITH custom_properties={'capacity_mode': {'throughput_mode': 'PROVISIONED', 'read_capacity_units': 100, 'write_capacity_units': 278}};

For the complete language reference, see the section called “ALTER TABLE”.

Step 4: Configure cqlsh COPY FROM settings

This section outlines how to determine the parameter values for cqlsh COPY FROM. The cqlsh COPY FROM command reads the CSV file that you prepared earlier and inserts the data into Amazon Keyspaces using CQL. The command divides up the rows and distributes the INSERT operations among a set of workers. Each worker establishes a connection with Amazon Keyspaces and sends INSERT requests along this channel.

The cqlsh COPY command doesn’t have internal logic to distribute work evenly among its workers. However, you can configure it manually to make sure that the work is distributed evenly. Start by reviewing these key cqlsh parameters:
• DELIMITER – If you used a delimiter other than a comma, you can set this parameter, which defaults to comma.
• INGESTRATE – The target number of rows that cqlsh COPY FROM attempts to process per second. If unset, it defaults to 100,000.
• NUMPROCESSES – The number of child worker processes that cqlsh creates for COPY FROM tasks. The maximum for this setting is 16; the default is num_cores - 1, where num_cores is the number of processing cores on the host running cqlsh.
• MAXBATCHSIZE – The batch size determines the maximum number of rows inserted into the destination table in a single batch. If unset, cqlsh uses batches of 20 inserted rows.
• CHUNKSIZE – The size of the work unit that passes to the child worker. By default, it is set to 5,000.
• MAXATTEMPTS – The maximum number of times to retry a failed worker chunk. After the maximum attempt is reached, the failed records are written to a new CSV file that you can run again later after investigating the failure.

Set INGESTRATE based on the number of WCUs that you provisioned to the target destination table. The INGESTRATE of the cqlsh COPY FROM command isn’t a limit—it’s a target average. This means it can (and often does) burst above the number you set. To allow for bursts and make sure that enough capacity is in place to handle the data load requests, set INGESTRATE to 90% of the table’s write capacity.

INGESTRATE = WCUs * .90

Next, set the NUMPROCESSES parameter to one less than the number of cores on your system. To find out the number of cores on your system, you can run the following code.

python -c "import multiprocessing; print(multiprocessing.cpu_count())"

For this tutorial, we use the following value.

NUMPROCESSES = 4

Each process creates a worker, and each worker establishes a connection to Amazon Keyspaces. Amazon Keyspaces can support up to 3,000 CQL requests per second on every connection. This means that you have to make sure that each worker is processing fewer than 3,000 requests per second. As with INGESTRATE, the workers often burst above the number you set and aren’t limited by clock seconds. Therefore, to account for bursts, set your cqlsh parameters to target each worker to process 2,500 requests per second. To calculate the amount of work distributed to a worker, use the following guideline.
• Divide INGESTRATE by NUMPROCESSES.
• If INGESTRATE / NUMPROCESSES > 2,500, lower the INGESTRATE to make this formula true:

INGESTRATE / NUMPROCESSES <= 2,500

Before you configure the settings to optimize the upload of our sample data, let's review the cqlsh default settings and see how using them impacts the data upload process.
Because cqlsh COPY FROM uses the CHUNKSIZE to create chunks of work (INSERT statements) to distribute to workers, the work is not automatically distributed evenly. Some workers might sit idle, depending on the INGESTRATE setting.

To distribute work evenly among the workers and keep each worker at the optimal 2,500 requests per second rate, you must set CHUNKSIZE, MAXBATCHSIZE, and INGESTRATE by changing the input parameters. To optimize network traffic utilization during the data load, choose a value for MAXBATCHSIZE that is close to the maximum value of 30. By changing CHUNKSIZE to 100 and MAXBATCHSIZE to 25, the 10,000 rows are spread evenly among the four workers (10,000 / 2,500 = 4). The following code example illustrates this.

INGESTRATE = 10,000
NUMPROCESSES = 4
CHUNKSIZE = 100
MAXBATCHSIZE = 25
Work Distribution:
Connection 1 / Worker 1 : 2,500 Requests per second
Connection 2 / Worker 2 : 2,500 Requests per second
Connection 3 / Worker 3 : 2,500 Requests per second
Connection 4 / Worker 4 : 2,500 Requests per second

To summarize, use the following formulas when setting cqlsh COPY FROM parameters:
• INGESTRATE = write_capacity_units * .90
• NUMPROCESSES = num_cores - 1 (default)
• INGESTRATE / NUMPROCESSES = 2,500 (This must be a true statement.)
• MAXBATCHSIZE = 30 (Defaults to 20. Amazon Keyspaces accepts batches up to 30.)
• CHUNKSIZE = (INGESTRATE / NUMPROCESSES) / MAXBATCHSIZE
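As a convenience, these formulas can be applied in a few lines of Python. This sketch assumes the WCU value you provisioned in Step 3; it only reproduces the guidance above and is not an official sizing tool.

import math
import multiprocessing

write_capacity_units = 278  # the WCUs provisioned in Step 3

ingestrate = math.floor(write_capacity_units * 0.90)
numprocesses = max(1, multiprocessing.cpu_count() - 1)

# Lower INGESTRATE if a single worker would exceed 2,500 requests per second.
if ingestrate / numprocesses > 2500:
    ingestrate = numprocesses * 2500

maxbatchsize = 30
chunksize = max(1, round((ingestrate / numprocesses) / maxbatchsize))

print(f"INGESTRATE={ingestrate} NUMPROCESSES={numprocesses} "
      f"MAXBATCHSIZE={maxbatchsize} CHUNKSIZE={chunksize}")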
Now that you have calculated NUMPROCESSES, INGESTRATE, and CHUNKSIZE, you’re ready to load your data.

Step 5: Run the cqlsh COPY FROM command to upload data from the CSV file to the target table

To run the cqlsh COPY FROM command, complete the following steps.
1. Connect to Amazon Keyspaces using cqlsh.
2. Choose your keyspace with the following code.

USE catalog;

3. Set write consistency to LOCAL_QUORUM. To ensure data durability, Amazon Keyspaces doesn’t allow other write consistency settings. See the following code.

CONSISTENCY LOCAL_QUORUM;

4. Prepare your cqlsh COPY FROM syntax using the following code example.

COPY book_awards FROM './keyspace.table.csv' WITH HEADER=true AND INGESTRATE=calculated ingestrate AND NUMPROCESSES=calculated numprocess AND MAXBATCHSIZE=20 AND CHUNKSIZE=calculated chunksize;

5. Run the statement prepared in the previous step. cqlsh echoes back all the settings that you've configured.
a. Make sure that the settings match your input. See the following example.

Reading options from the command line: {'chunksize': '120', 'header': 'true', 'ingestrate': '36000', 'numprocesses': '15', 'maxbatchsize': '20'}
Using 15 child processes

b. Review the number of rows transferred and the current average rate, as shown in the following example.

Processed: 57834 rows; Rate: 6561 rows/s; Avg. rate: 31751 rows/s

c. When cqlsh has finished uploading the data, review the summary of the data load statistics (the number of files read, runtime, and skipped rows) as shown in the following example.

15556824 rows imported from 1 files in 8 minutes and 8.321 seconds (0 skipped).

In this final step of the tutorial, you have uploaded the data to Amazon Keyspaces.

Important
Now that you have transferred your data, adjust the capacity mode settings of your target table to match your application’s regular traffic patterns. You incur charges at the hourly rate for your provisioned capacity until you change it.

Troubleshooting

After the data upload has completed, check to see if rows were skipped. To do so, navigate to the source directory of the source CSV file and search for a file with the following name.

import_yourcsvfilename.err.timestamp.csv

cqlsh writes any skipped rows of data into a file with that name. If the file exists in your source directory and has data in it, these rows didn't upload to Amazon Keyspaces. To retry these rows, first check for any errors that were encountered during the upload and adjust the data accordingly. Then you can rerun the process.

Common errors

The most common reasons why rows aren’t loaded are capacity errors and parsing errors.
Invalid request errors when uploading data to Amazon Keyspaces

In the following example, the source table contains a counter column, which results in logged batch calls from the cqlsh COPY command. Logged batch calls are not supported by Amazon Keyspaces.

Failed to import 10 rows: InvalidRequest - Error from server: code=2200 [Invalid query] message="Only UNLOGGED Batches are supported at this time.", will retry later, attempt 22 of 25

To resolve this error, use DSBulk to migrate the data. For more information, see the section called “Loading data using DSBulk”.

Parser errors when uploading data to Amazon Keyspaces

The following example shows a skipped row due to a ParseError.

Failed to import 1 rows: ParseError - Invalid ...

To resolve this error, you need to make sure that the data to be imported matches the table schema in Amazon Keyspaces. Review the import file for parsing errors. You can try using a single row of data in an INSERT statement to isolate the error.

Capacity errors when uploading data to Amazon Keyspaces

Failed to import 1 rows: WriteTimeout - Error from server: code=1100 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'received_responses': 0, 'required_responses': 2, 'write_type': 'SIMPLE', 'consistency': 'LOCAL_QUORUM'}, will retry later, attempt 1 of 100

Amazon Keyspaces uses the ReadTimeout and WriteTimeout exceptions to indicate when a write request fails due to insufficient throughput capacity. To help diagnose insufficient capacity exceptions, Amazon Keyspaces publishes WriteThrottleEvents and ReadThrottleEvents metrics in Amazon CloudWatch. For more information, see the section called “Monitoring with CloudWatch”.
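During or after the load, you can check these metrics programmatically, for example with the AWS SDK for Python (Boto3). The sketch below assumes the AWS/Cassandra CloudWatch namespace with Keyspace and TableName dimensions and uses the tutorial's catalog.book_awards table; adjust the Region and names for your environment.

import datetime

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-2")

now = datetime.datetime.now(datetime.timezone.utc)
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/Cassandra",
    MetricName="WriteThrottleEvents",
    Dimensions=[
        {"Name": "Keyspace", "Value": "catalog"},
        {"Name": "TableName", "Value": "book_awards"},
    ],
    StartTime=now - datetime.timedelta(hours=1),
    EndTime=now,
    Period=300,  # 5-minute buckets
    Statistics=["Sum"],
)

# Print any throttle events observed during the last hour, oldest first.
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], int(point["Sum"]))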
cqlsh errors when uploading data to Amazon Keyspaces

To help troubleshoot cqlsh errors, rerun the failing command with the --debug flag.

When using an incompatible version of cqlsh, you see the following error.

AttributeError: 'NoneType' object has no attribute 'is_up'
Failed to import 3 rows: AttributeError - 'NoneType' object has no attribute 'is_up', given up after 1 attempts

Confirm that the correct version of cqlsh is installed by running the following command.

cqlsh --version

You should see something like the following for output.

cqlsh 5.0.1

If you're using Windows, replace all instances of cqlsh with cqlsh.bat. For example, to check the version of cqlsh in Windows, run the following command.

cqlsh.bat --version

The connection to Amazon Keyspaces fails after the cqlsh client receives three consecutive errors of any type from the server. The cqlsh client fails with the following message.

Failed to import 1 rows: NoHostAvailable - , will retry later, attempt 3 of 100

To resolve this error, you need to make sure that the data to be imported matches the table schema in Amazon Keyspaces. Review the import file for parsing errors. You can try using a single row of data in an INSERT statement to isolate the error. The client automatically attempts to reestablish the connection.

Tutorial: Loading data into Amazon Keyspaces using DSBulk

This step-by-step tutorial guides you through migrating data from Apache Cassandra to Amazon Keyspaces using the DataStax Bulk Loader (DSBulk) available on GitHub. Using DSBulk is useful to upload datasets to Amazon Keyspaces for academic or test purposes. For more information about how to migrate production workloads, see the section called “Offline migration”.

In this tutorial, you complete the following steps.

Prerequisites – Set up an AWS account with credentials, create a JKS trust store file for the certificate, configure cqlsh, download and install DSBulk, and configure an application.conf file.
1. Create source CSV and target table – Prepare a CSV file as the source data and create the target keyspace and table in Amazon Keyspaces.
2. Prepare the data – Randomize the data in the CSV file and analyze it to determine the average and maximum row sizes.
3. Set throughput capacity – Calculate the required write capacity units (WCUs) based on the data size and desired load time, and configure the table's provisioned capacity.
4. Configure DSBulk settings – Create a DSBulk configuration file with settings like authentication, SSL/TLS, consistency level, and connection pool size.
5. Run the DSBulk load command – Run the DSBulk load command to upload the data from the CSV file to the Amazon Keyspaces table, and monitor the progress.

Topics
• Prerequisites: Steps you have to complete before you can upload data with DSBulk
• Step 1: Create the source CSV file and a target table for the data upload using DSBulk
• Step 2: Prepare the data to upload using DSBulk
• Step 3: Set the throughput capacity for the target table
• Step 4: Configure DSBulk settings to upload data from the CSV file to the target table
• Step 5: Run the DSBulk load command to upload data from the CSV file to the target table

Prerequisites: Steps you have to complete before you can upload data with DSBulk

You must complete the following tasks before you can start this tutorial.
1. If you have not already done so, sign up for an AWS account by following the steps at the section called “Setting up AWS Identity and Access Management”.
2. Create credentials by following the steps at the section called “Create IAM credentials for AWS authentication”.
3. Create a JKS trust store file.
a. Download the Starfield digital certificate using the following command and save sf-class2-root.crt locally or in your home directory.

curl https://certs.secureserver.net/repository/sf-class2-root.crt -O

Note
You can also use the Amazon digital certificate to connect to Amazon Keyspaces and can continue to do so if your client is connecting to Amazon Keyspaces successfully. The Starfield certificate provides additional backwards compatibility for clients using older certificate authorities.
b. Convert the Starfield digital certificate into a trustStore file.

openssl x509 -outform der -in sf-class2-root.crt -out temp_file.der
keytool -import -alias cassandra -keystore cassandra_truststore.jks -file temp_file.der

In this step, you need to create a password for the keystore and trust this certificate. The interactive command looks like this.

Enter keystore password:
Re-enter new password:
Owner: OU=Starfield Class 2 Certification Authority, O="Starfield Technologies, Inc.", C=US
Issuer: OU=Starfield Class 2 Certification Authority, O="Starfield Technologies, Inc.", C=US
Serial number: 0
Valid from: Tue Jun 29 17:39:16 UTC 2004 until: Thu Jun 29 17:39:16 UTC 2034
Certificate fingerprints:
   MD5: 32:4A:4B:BB:C8:63:69:9B:BE:74:9A:C6:DD:1D:46:24
   SHA1: AD:7E:1C:28:B0:64:EF:8F:60:03:40:20:14:C3:D0:E3:37:0E:B5:8A
   SHA256: 14:65:FA:20:53:97:B8:76:FA:A6:F0:A9:95:8E:55:90:E4:0F:CC:7F:AA:4F:B7:C2:C8:67:75:21:FB:5F:B6:58
Signature algorithm name: SHA1withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3
Extensions:
#1: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
0000: BF 5F B7 D1 CE DD 1F 86 F4 5B 55 AC DC D7 10 C2 ._.......[U.....
0010: 0E A9 88 E7 ....
]
[OU=Starfield Class 2 Certification Authority, O="Starfield Technologies, Inc.", C=US]
SerialNumber: [ 00]
]
#2: ObjectId: 2.5.29.19 Criticality=false
BasicConstraints:[
 CA:true
 PathLen:2147483647
]
#3: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: BF 5F B7 D1 CE DD 1F 86 F4 5B 55 AC DC D7 10 C2 ._.......[U.....
0010: 0E A9 88 E7 ....
]
]
Trust this certificate? [no]: y

4. Set up the Cassandra Query Language shell (cqlsh) connection and confirm that you can connect to Amazon Keyspaces by following the steps at the section called “Using cqlsh”.
5. Download and install DSBulk.
a. To download DSBulk, you can use the following code.

curl -OL https://downloads.datastax.com/dsbulk/dsbulk-1.8.0.tar.gz

b. Then unpack the tar file and add DSBulk to your PATH as shown in the following example.

tar -zxvf dsbulk-1.8.0.tar.gz
# add the DSBulk directory to the path
export PATH=$PATH:./dsbulk-1.8.0/bin

c. Create an application.conf file to store settings to be used by DSBulk. You can save the following example as ./dsbulk_keyspaces.conf. Replace localhost with the contact point of your local Cassandra cluster if you are not on the local node, for example the DNS name or IP address. Take note of the file name and path, as you're going to need to specify this later in the dsbulk load command.
datastax-java-driver {
   basic.contact-points = [ "localhost" ]
   advanced.auth-provider {
      class = software.aws.mcs.auth.SigV4AuthProvider
      aws-region = us-east-1
   }
}

d. To enable SigV4 support, download the shaded JAR file from GitHub and place it in the DSBulk lib folder as shown in the following example.

curl -O -L https://github.com/aws/aws-sigv4-auth-cassandra-java-driver-plugin/releases/download/4.0.6-shaded-v2/aws-sigv4-auth-cassandra-java-driver-plugin-4.0.6-shaded.jar

Step 1: Create the source CSV file and a target table for the data upload using DSBulk

For this tutorial, we use a comma-separated values (CSV) file with the name keyspaces_sample_table.csv as the source file for the data migration. The provided sample file contains a few rows of data for a table with the name book_awards.

1. Create the source file. You can choose one of the following options:
• Download the sample CSV file (keyspaces_sample_table.csv) contained in the following archive file samplemigration.zip. Unzip the archive and take note of the path to keyspaces_sample_table.csv.
• To populate a CSV file with your own data stored in an Apache Cassandra database, you can populate the source CSV file by using dsbulk unload as shown in the following example.

dsbulk unload -k mykeyspace -t mytable -f ./my_application.conf > keyspaces_sample_table.csv

Make sure the CSV file you create meets the following requirements:
• The first row contains the column names.
• The column names in the source CSV file match the column names in the target table.
• The data is delimited with a comma.
• All data values are valid Amazon Keyspaces data types. See the section called “Data types”.

2. Create the target keyspace and table in Amazon Keyspaces.
a. Connect to Amazon Keyspaces using cqlsh, replacing the service endpoint, user name, and password in the following example with your own values.

cqlsh cassandra.us-east-2.amazonaws.com 9142 -u "111122223333" -p "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" --ssl
b. Create a new keyspace with the name catalog as shown in the following example.

CREATE KEYSPACE catalog WITH REPLICATION = {'class': 'SingleRegionStrategy'};

c. After the new keyspace has a status of available, use the following code to create the target table book_awards. To learn more about asynchronous resource creation and how to check if a resource is available, see the section called “Check keyspace creation status”.

CREATE TABLE catalog.book_awards (
   year int,
   award text,
   rank int,
   category text,
   book_title text,
   author text,
   publisher text,
   PRIMARY KEY ((year, award), category, rank)
);

If Apache Cassandra is your original data source, a simple way to create the Amazon Keyspaces target table with matching headers is to generate the CREATE TABLE statement from the source table as shown in the following statement.

cqlsh localhost 9042 -u "username" -p "password" --execute "DESCRIBE TABLE mykeyspace.mytable;"

Then create the target table in Amazon Keyspaces with the column names and data types matching the description from the Cassandra source table.

Step 2: Prepare the data to upload using DSBulk

Preparing the source data for an efficient transfer is a two-step process. First, you randomize the data. In the second step, you analyze the data to determine the appropriate dsbulk parameter values and required table settings.

Randomize the data

The dsbulk command reads and writes data in the same order that it appears in the CSV file. If you use the dsbulk command to create the source file, the data is written in key-sorted order in the CSV. Internally, Amazon Keyspaces partitions data using partition keys. Although Amazon Keyspaces has built-in logic to help load balance requests for the same partition key, loading the data is faster and more efficient if you randomize the order. This is because you can take advantage of the built-in load balancing that occurs when Amazon Keyspaces is writing to different partitions.

To spread the writes across the partitions evenly, you must randomize the data in the source file. You can write an application to do this or use an open-source tool, such as Shuf. Shuf is freely available on Linux distributions, on macOS (by installing coreutils in Homebrew), and on Windows (by using Windows Subsystem for Linux (WSL)). One extra step is required to prevent the header row with the column names from getting shuffled in this step.

To randomize the source file while preserving the header, enter the following code.

tail -n +2 keyspaces_sample_table.csv | shuf -o keyspace.table.csv && (head -1 keyspaces_sample_table.csv && cat keyspace.table.csv) > keyspace.table.csv1 && mv keyspace.table.csv1 keyspace.table.csv

Shuf rewrites the data to a new CSV file called keyspace.table.csv. You can now delete the keyspaces_sample_table.csv file—you no longer need it.
Analyze the data

Determine the average and maximum row size by analyzing the data. You do this for the following reasons:
• The average row size helps to estimate the total amount of data to be transferred.
• You need the average row size to provision the write capacity needed for the data upload.
• You can make sure that each row is less than 1 MB in size, which is the maximum row size in Amazon Keyspaces.

Note
This quota refers to row size, not partition size. Unlike Apache Cassandra partitions, Amazon Keyspaces partitions can be virtually unbound in size. Partition keys and clustering columns require additional storage for metadata, which you must add to the raw size of rows. For more information, see the section called “Estimate row size”.

The following code uses AWK to analyze a CSV file and print the average and maximum row size based on a sample of 10,000 rows.

awk -F, 'BEGIN {samp=10000;max=-1;}{if(NR>1){len=length($0);t+=len;avg=t/NR;max=(len>max ? len : max)}}NR==samp{exit}END{printf("{lines: %d, average: %d bytes, max: %d bytes}\n",NR,avg,max);}' keyspace.table.csv

Running this code results in the following output.

{lines: 10000, average: 123 bytes, max: 225 bytes}

Make sure that your maximum row size doesn't exceed 1 MB. If it does, you have to break up the row or compress the data to bring the row size below 1 MB.

In the next step of this tutorial, you use the average row size to provision the write capacity for the table.

Step 3: Set the throughput capacity for the target table
This tutorial shows you how to tune DSBulk to load data within a set time range. Because you know how many reads and writes you perform in advance, use provisioned capacity mode. After you finish the data transfer, you should set the capacity mode of the table to match your application’s traffic patterns. To learn more about capacity management, see Managing serverless resources.

With provisioned capacity mode, you specify how much read and write capacity you want to provision to your table in advance. Write capacity is billed hourly and metered in write capacity units (WCUs). Each WCU is enough write capacity to support writing 1 KB of data per second. When you load the data, the write rate must be under the max WCUs (parameter: write_capacity_units) that are set on the target table.

By default, you can provision up to 40,000 WCUs to a table and 80,000 WCUs across all the tables in your account. If you need additional capacity, you can request a quota increase in the Service Quotas console. For more information about quotas, see Quotas.

Calculate the average number of WCUs required for an insert

Inserting 1 KB of data per second requires 1 WCU. If your CSV file has 360,000 rows and you want to load all the data in 1 hour, you must write 100 rows per second (360,000 rows / 60 minutes / 60 seconds = 100 rows per second). If each row has up to 1 KB of data, to insert 100 rows per second, you must provision 100 WCUs to your table. If each row has 1.5 KB of data, you need two WCUs to insert one row per second. Therefore, to insert 100 rows per second, you must provision 200 WCUs.

To determine how many WCUs you need to insert one row per second, divide the average row size in bytes by 1024 and round up to the nearest whole number. For example, if the average row size is 3000 bytes, you need three WCUs to insert one row per second:

ROUNDUP(3000 / 1024) = ROUNDUP(2.93) = 3 WCUs

Calculate data load time and capacity

Now that you know the average size and number of rows in your CSV file, you can calculate how many WCUs you need to load the data in a given amount of time, and the approximate time it takes to load all the data in your CSV file using different WCU settings. For example, if each row in your file is 1 KB and you have 1,000,000 rows in your CSV file, to load the data in 1 hour, you need to provision at least 278 WCUs to your table for that hour:

1,000,000 rows * 1 KB = 1,000,000 KB
1,000,000 KB / 3600 seconds = 277.8 KB / second = 278 WCUs
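You can also turn the formula around and estimate the load time for a given provisioned throughput. A short Python sketch, using the example numbers above:

rows = 1_000_000
wcus_per_row = 1        # from the ROUNDUP formula above, for rows up to 1 KB
provisioned_wcus = 278  # WCUs provisioned on the target table

rows_per_second = provisioned_wcus / wcus_per_row
load_seconds = rows / rows_per_second
print(f"Estimated load time: {load_seconds / 60:.1f} minutes")  # about 60 minutes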
Configure provisioned capacity settings

You can set a table’s write capacity settings when you create the table or by using the ALTER TABLE command. The following is the syntax for altering a table’s provisioned capacity settings with the ALTER TABLE command.

ALTER TABLE catalog.book_awards WITH custom_properties={'capacity_mode': {'throughput_mode': 'PROVISIONED', 'read_capacity_units': 100, 'write_capacity_units': 278}};

For the complete language reference, see the section called “CREATE TABLE” and the section called “ALTER TABLE”.

Step 4: Configure DSBulk settings to upload data from the CSV file to the target table

This section outlines the steps required to configure DSBulk for data upload to Amazon Keyspaces. You configure DSBulk by using a configuration file. You specify the configuration file directly from the command line.

1. Create a DSBulk configuration file for the migration to Amazon Keyspaces. In this example, we use the file name dsbulk_keyspaces.conf. Specify the following settings in the DSBulk configuration file.
a. PlainTextAuthProvider – Create the authentication provider with the PlainTextAuthProvider class. ServiceUserName and ServicePassword should match the user name and password you obtained when you generated the service-specific credentials by following the steps at the section called “Create programmatic access credentials”.
b. local-datacenter – Set the value for local-datacenter to the AWS Region that you're connecting to. For example,
if the application is connecting to cassandra.us-east-2.amazonaws.com, then set the local data center to us-east-2. For all available AWS Regions, see the section called “Service endpoints”. To avoid replicas, set slow-replica-avoidance to false.
c. SSLEngineFactory – To configure SSL/TLS, initialize the SSLEngineFactory by adding a section in the configuration file with a single line that specifies the class with class = DefaultSslEngineFactory. Provide the path to cassandra_truststore.jks and the password that you created previously.
d. consistency – Set the consistency level to LOCAL_QUORUM. Other write consistency levels are not supported; for more information, see the section called “Supported Cassandra consistency levels”.
e. The number of connections per pool is configurable in the Java driver. For this example, set advanced.connection.pool.local.size to 3.

The following is the complete sample configuration file.

datastax-java-driver {
   basic.contact-points = [ "cassandra.us-east-2.amazonaws.com:9142" ]
   advanced.auth-provider {
      class = PlainTextAuthProvider
      username = "ServiceUserName"
      password = "ServicePassword"
   }
   basic.load-balancing-policy {
      local-datacenter = "us-east-2"
      slow-replica-avoidance = false
   }
   basic.request {
      consistency = LOCAL_QUORUM
      default-idempotence = true
   }
   advanced.ssl-engine-factory {
      class = DefaultSslEngineFactory
      truststore-path = "./cassandra_truststore.jks"
      truststore-password = "my_password"
      hostname-validation = false
   }
   advanced.connection.pool.local.size = 3
}

2. Review the parameters for the DSBulk load command.
a. executor.maxPerSecond – The maximum number of rows that the load command attempts to process concurrently per second. If unset, this setting is disabled with -1.
Set executor.maxPerSecond based on the number of WCUs that you provisioned to the target destination table. The executor.maxPerSecond of the load command isn’t a limit—it’s a target average. This means it can (and often does) burst above the number you set. To allow for bursts and make sure that enough capacity is in place to handle the data load requests, set executor.maxPerSecond to 90% of the table’s write capacity.

executor.maxPerSecond = WCUs * .90

In this tutorial, we set executor.maxPerSecond to 5.

Note
If you are using DSBulk 1.6.0 or higher, you can use dsbulk.engine.maxConcurrentQueries instead.

b. Configure these additional parameters for the DSBulk load command.
• batch-mode – This parameter tells the system to group operations by partition key.
We recommend disabling batch mode, because it can result in hot key scenarios and cause WriteThrottleEvents.
• driver.advanced.retry-policy.max-retries – This determines how many times to retry a failed query. If unset, the default is 10. You can adjust this value as needed.
• driver.basic.request.timeout – The time in minutes the system waits for a query to return. If unset, the default is "5 minutes". You can adjust this value as needed.

Step 5: Run the DSBulk load command to upload data from the CSV file to the target table

In the final step of this tutorial, you upload the data into Amazon Keyspaces. To run the DSBulk load command, complete the following steps.

1. Run the following code to upload the data from your CSV file to your Amazon Keyspaces table. Make sure to update the path to the application configuration file you created earlier.

dsbulk load -f ./dsbulk_keyspaces.conf --connector.csv.url keyspace.table.csv -header true --batch.mode DISABLED --executor.maxPerSecond 5 --driver.basic.request.timeout "5 minutes" --driver.advanced.retry-policy.max-retries 10 -k catalog -t book_awards

2. The output includes the location of a log file that details successful and unsuccessful operations. The file is stored in the following directory.

Operation directory: /home/user_name/logs/UNLOAD_20210308-202317-801911

3. The log file entries will include metrics, as in the following example. Check to make sure that the number of rows is consistent with the number of rows in your CSV file.

total | failed | rows/s | p50ms | p99ms | p999ms
  200 |      0 |    200 | 21.63 | 21.89 |  21.89

Important
Now that you have transferred your data, adjust the capacity mode settings of your target table to match your application’s regular traffic patterns. You incur charges at the hourly rate for your provisioned capacity until you change it. For more information, see the section called “Configure read/write capacity modes”.
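One way to make this adjustment programmatically is through the Amazon Keyspaces control-plane API, for example with the AWS SDK for Python (Boto3). The following sketch assumes the tutorial's catalog.book_awards table and switches it to on-demand capacity; switching back to provisioned mode with specific throughput values follows the same pattern.

import boto3

keyspaces = boto3.client("keyspaces", region_name="us-east-2")

# Switch the tutorial table from provisioned capacity to on-demand.
keyspaces.update_table(
    keyspaceName="catalog",
    tableName="book_awards",
    capacitySpecification={"throughputMode": "PAY_PER_REQUEST"},
)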
Accessing Amazon Keyspaces (for Apache Cassandra)

You can access Amazon Keyspaces using the console, AWS CloudShell, or programmatically by running a cqlsh client, using the AWS SDK, or using an Apache 2.0 licensed Cassandra driver. Amazon Keyspaces supports drivers and clients that are compatible with Apache Cassandra 3.11.2. Before accessing Amazon Keyspaces, you must complete setting up AWS Identity and Access Management and then grant an IAM identity access permissions to Amazon Keyspaces.

Setting up AWS Identity and Access Management

Sign up for an AWS account

If you do not have an AWS account, complete the following steps to create one.

To sign up for an AWS account
1. Open https://portal.aws.amazon.com/billing/signup.
2. Follow the online instructions.
Part of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad.
When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access.

AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account.

Create a user with administrative access

After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks.

Secure your AWS account root user
1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password.
For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide.
2. Turn on multi-factor authentication (MFA) for your root user.
For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide.

Create a user with administrative access
1. Enable IAM Identity Center.
For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide.
2. In IAM Identity Center, grant administrative access to a user.
For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide.

Sign in as the user with administrative access
• To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user.
For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide.

Assign access to additional users
1. In IAM Identity Center, create a permission set that follows the best practice of applying least-privilege permissions.
For instructions, see Create a permission set in the AWS IAM Identity Center User Guide.
2. Assign users to a group, and then assign single sign-on access to the group.
For instructions, see Add groups in the AWS IAM Identity Center User Guide.

Setting up Amazon Keyspaces

Access to Amazon Keyspaces resources is managed using IAM. Using IAM, you can attach policies to IAM users, roles, and federated identities that grant read and write permissions to specific resources in Amazon Keyspaces.

To get started with granting permissions to an IAM identity, you can use one of the AWS managed policies for Amazon Keyspaces:
• AmazonKeyspacesFullAccess – This policy grants permissions to access all resources in Amazon Keyspaces with full access to all features.
• AmazonKeyspacesReadOnlyAccess_v2 – This policy grants read-only permissions to Amazon Keyspaces.

For a detailed explanation of the actions defined in the managed policies, see the section called “AWS managed policies”.
To limit the scope of actions that an IAM identity can perform or limit the resources that the identity can access, you can create a custom policy that uses the AmazonKeyspacesFullAccess managed policy as a template and remove all permissions that you don't need. You can also limit access to specific keyspaces or tables. For more information about how to restrict actions or limit access to specific resources in Amazon Keyspaces, see the section called “How Amazon Keyspaces works with IAM”.

To access Amazon Keyspaces after you have created the AWS account and created a policy that grants an IAM identity access to Amazon Keyspaces, continue to one of the following sections:
• Using the console
• Using AWS CloudShell

Accessing Amazon Keyspaces using the console

You can access the console for Amazon Keyspaces at https://console.aws.amazon.com/keyspaces/home. For more information about AWS Management Console access, see Controlling IAM users access to the AWS Management Console in the IAM User Guide.

You can use the console to do the following in Amazon Keyspaces:
• Create, delete, and manage keyspaces and tables.
• Monitor important table metrics on a table's Monitor tab:
• Billable table size (Bytes)
• Capacity metrics
• Run queries using the CQL editor, for example insert, update, and delete data.
• Change the partitioner configuration of the account.
• View performance and error metrics for the account on the dashboard.

To learn how to create an Amazon Keyspaces keyspace and table and set it up with sample application data, see Getting started with Amazon Keyspaces (for Apache Cassandra).

Using AWS CloudShell to access Amazon Keyspaces

AWS CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console. You can run AWS CLI commands against AWS services using your preferred shell (Bash, PowerShell, or Z shell). To work with Amazon Keyspaces using cqlsh, you must install the cqlsh-expansion. For cqlsh-expansion installation instructions, see the section called “Using the cqlsh-expansion”.

You launch AWS CloudShell from the AWS Management Console, and the AWS credentials you used to sign in to the console are automatically available in a new shell session. This pre-authentication of AWS CloudShell users allows you to skip configuring credentials when interacting with AWS services such as Amazon Keyspaces using cqlsh or AWS CLI version 2 (pre-installed on the shell's compute environment).

Obtaining IAM permissions for AWS CloudShell

Using the access management resources provided by AWS Identity and Access Management, administrators can grant permissions to IAM users so they can access AWS CloudShell and use the environment's features.
The quickest way for an administrator to grant access to users is through an AWS managed policy. An AWS managed policy is a standalone policy that's created and administered by AWS. The following AWS managed policy for CloudShell can be attached to IAM identities:
• AWSCloudShellFullAccess – Grants permission to use AWS CloudShell with full access to all features.
If you want to limit the scope of actions that an IAM user can perform with AWS CloudShell, you can create a custom policy that uses the AWSCloudShellFullAccess managed policy as a template. For more information about limiting the actions that are available to users in CloudShell, see Managing AWS CloudShell access and usage with IAM policies in the AWS CloudShell User Guide.
Note
Your IAM identity also requires a policy that grants permission to make calls to Amazon Keyspaces. You can use an AWS managed policy to give your IAM identity access to Amazon Keyspaces, or start with the managed policy as a template and remove the permissions that you don't need. You can also create a custom policy that limits access to specific keyspaces and tables. The following managed policy for Amazon Keyspaces can be attached to IAM identities:
• AmazonKeyspacesFullAccess – This policy grants permission to use Amazon Keyspaces with full access to all features.
For a detailed explanation of the actions defined in the managed policy, see the section called “AWS managed policies”. For more information about how to restrict actions or limit access to specific resources in Amazon Keyspaces, see the section called “How Amazon Keyspaces works with IAM”.
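For administrators who prefer to script this setup, the following is a minimal Python (boto3) sketch that attaches both managed policies to an existing IAM user. The user name alice is a placeholder; the managed policy ARNs follow the standard arn:aws:iam::aws:policy/ format.

import boto3

iam = boto3.client("iam")

# Attach the CloudShell and Amazon Keyspaces managed policies to the user.
for policy_arn in (
    "arn:aws:iam::aws:policy/AWSCloudShellFullAccess",
    "arn:aws:iam::aws:policy/AmazonKeyspacesFullAccess",
):
    iam.attach_user_policy(UserName="alice", PolicyArn=policy_arn)

As with the console flow, least privilege is the better long-term choice; you could attach a scoped customer managed policy instead of AmazonKeyspacesFullAccess.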
Interacting with Amazon Keyspaces using AWS CloudShell
After you launch AWS CloudShell from the AWS Management Console, you can immediately start to interact with Amazon Keyspaces using cqlsh or the command line interface. If you haven't already installed the cqlsh-expansion, see the section called “Using the cqlsh-expansion” for detailed steps.
Note
When using the cqlsh-expansion in AWS CloudShell, you don't need to configure credentials before making calls, because you're already authenticated within the shell.

Connect to Amazon Keyspaces and create a new keyspace. Then read from a system table to confirm that the keyspace was created using AWS CloudShell
1. From the AWS Management Console, you can launch CloudShell by choosing one of the following options on the navigation bar:
• Choose the CloudShell icon.
• Start typing "cloudshell" in the Search box and then choose the CloudShell option.
2. You can establish a connection to Amazon Keyspaces using the following command. Make sure to replace cassandra.us-east-1.amazonaws.com with the correct endpoint for your Region.
cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl
If the connection is successful, you should see output similar to the following example.
Connected to Amazon Keyspaces at cassandra.us-east-1.amazonaws.com:9142
[cqlsh 6.1.0 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh current consistency level is ONE.
cqlsh>
3. Create a new keyspace with the name mykeyspace. You can use the following command to do that.
CREATE KEYSPACE mykeyspace WITH REPLICATION = {'class': 'SingleRegionStrategy'};
4. To confirm that the keyspace was created, you can read from a system table using the following command.
SELECT * FROM system_schema_mcs.keyspaces WHERE keyspace_name = 'mykeyspace';
If the call is successful, the command line displays a response from the service similar to the following output:
keyspace_name | durable_writes | replication
----------------+----------------+-------------------------------------------------------------------------------------
mykeyspace     | True           | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
(1 rows)

Create credentials for programmatic access to Amazon Keyspaces
To provide users and applications with credentials for programmatic access to Amazon Keyspaces resources, you can do either of the following:
• Create service-specific credentials that are similar to the traditional username and password that Cassandra uses for authentication and access management.
AWS service-specific credentials are associated with a specific AWS Identity and Access Management (IAM) user and can only be used for the service they were created for. For more information, see Using IAM with Amazon Keyspaces (for Apache Cassandra) in the IAM User Guide.
Warning
IAM users have long-term credentials, which presents a security risk. To help mitigate this risk, we recommend that you provide these users with only the permissions they require to perform the task and that you remove these users when they are no longer needed.
• For enhanced security, we recommend creating IAM identities that are used across all AWS services and using temporary credentials. The Amazon Keyspaces SigV4 authentication plugin for Cassandra client drivers enables you to authenticate calls to Amazon Keyspaces using IAM access keys instead of a user name and password. To learn more about how the Amazon Keyspaces SigV4 plugin enables IAM users, roles, and federated identities to authenticate in Amazon Keyspaces API requests, see AWS Signature Version 4 process (SigV4).
You can download the SigV4 plugins from the following locations.
• Java: https://github.com/aws/aws-sigv4-auth-cassandra-java-driver-plugin.
• Node.js: https://github.com/aws/aws-sigv4-auth-cassandra-nodejs-driver-plugin.
• Python: https://github.com/aws/aws-sigv4-auth-cassandra-python-driver-plugin.
• Go: https://github.com/aws/aws-sigv4-auth-cassandra-gocql-driver-plugin.
For code samples that show how to establish connections using the SigV4 authentication plugin, see the section called “Using a Cassandra client driver”.

Topics
• Create service-specific credentials for programmatic access to Amazon Keyspaces
• Create and configure AWS credentials for Amazon Keyspaces

Create service-specific credentials for programmatic access to Amazon Keyspaces
Service-specific credentials are similar to the traditional username and password that Cassandra uses for authentication and access management. Service-specific credentials enable IAM users to access a specific AWS service. These long-term credentials can't be used to access other AWS services. They are associated with a specific IAM user and can't be used by other IAM users.
Important
Service-specific credentials are long-term credentials associated with a specific IAM user and can only be used for the service
they were created for. To give IAM roles or federated identities permissions to access all your AWS resources using temporary credentials, you should use AWS authentication with the SigV4 authentication plugin for Amazon Keyspaces.
Use one of the following procedures to generate service-specific credentials.

Console
Create service-specific credentials using the console
1. Sign in to the AWS Management Console and open the AWS Identity and Access Management console at https://console.aws.amazon.com/iam/home.
2. In the navigation pane, choose Users, and then choose the user that you created earlier that has Amazon Keyspaces permissions (policy attached).
3. Choose Security Credentials. Under Credentials for Amazon Keyspaces, choose Generate credentials to generate the service-specific credentials.
Your service-specific credentials are now available. This is the only time you can download or view the password. You cannot recover it later. However, you can reset your password at any time. Save the user name and password in a secure location, because you'll need them later.

CLI
Create service-specific credentials using the AWS CLI
Before generating service-specific credentials, you need to download, install, and configure the AWS Command Line Interface (AWS CLI):
1. Download the AWS CLI at http://aws.amazon.com/cli.
Note
The AWS CLI runs on Windows, macOS, or Linux.
2. Follow the instructions for Installing the AWS CLI and Configuring the AWS CLI in the AWS Command Line Interface User Guide.
3. Using the AWS CLI, run the following command to generate service-specific credentials for the user alice, so that she can access Amazon Keyspaces.
aws iam create-service-specific-credential \
    --user-name alice \
    --service-name cassandra.amazonaws.com
The output looks like the following.
{
    "ServiceSpecificCredential": {
        "CreateDate": "2019-10-09T16:12:04Z",
        "ServiceName": "cassandra.amazonaws.com",
        "ServiceUserName": "alice-at-111122223333",
        "ServicePassword": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
        "ServiceSpecificCredentialId": "ACCAYFI33SINPGJEBYESF",
        "UserName": "alice",
        "Status": "Active"
    }
}
In the output, note the values for ServiceUserName and ServicePassword. Save these values in a secure location, because you'll need them later.
Important
This is the only time that the ServicePassword will be available to you.
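If you script user setup, you can also generate service-specific credentials with the AWS SDK. The following is a minimal Python (boto3) sketch of the same call; the user name alice matches the CLI example above.

import boto3

iam = boto3.client("iam")

# Generate Cassandra-compatible credentials for the user alice.
response = iam.create_service_specific_credential(
    UserName="alice",
    ServiceName="cassandra.amazonaws.com",
)

credential = response["ServiceSpecificCredential"]
print(credential["ServiceUserName"])  # for example, alice-at-111122223333
# The ServicePassword is returned only once; store it securely.
print(credential["ServicePassword"])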
Create and configure AWS credentials for Amazon Keyspaces
To access Amazon Keyspaces programmatically with the AWS CLI, the AWS SDK, or with Cassandra client drivers and the SigV4 plugin, you need an IAM user or role with access keys. When you use AWS programmatically, you provide your AWS access keys so that AWS can verify your identity in programmatic calls. Your access keys consist of an access key ID (for example, AKIAIOSFODNN7EXAMPLE) and a secret access key (for example, wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY). This topic walks you through the required steps in this process.
Security best practices recommend that you create IAM users with limited permissions and instead associate IAM roles with the permissions needed to perform specific tasks. IAM users can then temporarily assume IAM roles to perform the required tasks. For example, IAM users in your account using the Amazon Keyspaces console can switch to a role to temporarily use the permissions of the role in the console. The users give up their original permissions and take on the permissions assigned to the role. When the users exit the role, their original permissions are restored. The credentials the users use to assume the role are temporary. In contrast, IAM users have long-term credentials, which presents a security risk if they have permissions directly assigned to them instead of assuming roles. To help mitigate this risk, we recommend that you provide these users with only the permissions they require to perform the task and that you remove these users when they are no longer needed.
For more information about roles, see Common scenarios for roles: Users, applications, and services in the IAM User Guide.

Topics
• Credentials required by the AWS CLI, the AWS SDK, or the Amazon Keyspaces SigV4 plugin for Cassandra client drivers
• Create temporary credentials to connect to Amazon Keyspaces using an IAM role and the SigV4 plugin
• Create an IAM user for programmatic access to Amazon Keyspaces in your AWS account
• Create new access keys for an IAM user
• Store access keys for programmatic access

Credentials required by the AWS CLI, the AWS SDK, or the Amazon Keyspaces SigV4 plugin for Cassandra client drivers
The following credentials are required to authenticate the IAM user or role:
AWS_ACCESS_KEY_ID
Specifies an AWS access key associated with an IAM user or role. The access key aws_access_key_id is required to connect to Amazon Keyspaces programmatically.
AWS_SECRET_ACCESS_KEY
Specifies the secret key associated with the access key. This is essentially the "password" for the access key. The aws_secret_access_key is required to connect to Amazon Keyspaces programmatically.
AWS_SESSION_TOKEN – Optional
Specifies the session token value that is required if you are using temporary security credentials that you retrieved directly from AWS Security Token Service operations. For more information, see the section called “Create temporary credentials to connect to Amazon Keyspaces”. If you are connecting with an IAM user, the aws_session_token is not required.

Create temporary credentials to connect to Amazon Keyspaces using an IAM role and the SigV4 plugin
The recommended way to access Amazon Keyspaces programmatically is by using temporary credentials to authenticate with the SigV4 plugin. In many scenarios, you don't need long-term access keys that never expire (as you have with an IAM user). Instead, you can create an IAM role and generate temporary security credentials. Temporary security credentials consist of an access key ID and a secret access key, but they also include a security token that indicates when the credentials expire. To learn more about how to use IAM roles instead of long-term access keys, see Switching to an IAM role (AWS API).
To get started with temporary credentials, you first need to create an IAM role.
Create an IAM role that grants read-only access to Amazon Keyspaces
1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
2. In the navigation pane, choose Roles, then Create role.
3. On the Create role page, under Select type of trusted entity, choose AWS service. Under Choose a use case, choose Amazon EC2, then choose Next.
4. On the Add permissions page, under Permissions policies, choose Amazon Keyspaces Read Only Access from the policy list, then choose Next.
5. On the Name, review, and create page, enter a name for the role, and review the Select trusted entities and Add permissions sections. You can also add optional tags for the role on this page. When you are done, select Create role. Remember this name because you'll need it when you launch your Amazon EC2 instance.
The same role can also be created with the AWS SDK, as shown in the sketch after these steps.
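The following is a minimal Python (boto3) sketch of the console steps above. The role name keyspaces-readonly is a placeholder, and the trust policy allows Amazon EC2 to assume the role, matching step 3.

import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets Amazon EC2 instances assume the role (step 3 above).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="keyspaces-readonly",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the managed read-only policy for Amazon Keyspaces (step 4 above).
iam.attach_role_policy(
    RoleName="keyspaces-readonly",
    PolicyArn="arn:aws:iam::aws:policy/AmazonKeyspacesReadOnlyAccess",
)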
To use temporary security credentials in code, you programmatically call an AWS Security Token Service API like AssumeRole and extract the resulting credentials and session token from the IAM role that you created in the previous step. You then use those values as credentials for subsequent calls to AWS. The following example shows pseudocode for how to use temporary security credentials:
assumeRoleResult = AssumeRole(role-arn);
tempCredentials = new SessionAWSCredentials(
    assumeRoleResult.AccessKeyId,
    assumeRoleResult.SecretAccessKey,
    assumeRoleResult.SessionToken);
cassandraRequest = CreateAmazoncassandraClient(tempCredentials);
For an example that implements temporary credentials using the Python driver to access Amazon Keyspaces, see the section called “Using a Cassandra Python client driver”. For details about how to call AssumeRole, GetFederationToken, and other API operations, see the AWS Security Token Service API Reference. For information on getting the temporary security credentials and session token from the result, see the documentation for the SDK that you're working with. You can find the documentation for all the AWS SDKs on the main AWS documentation page, in the SDKs and Toolkits section.
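As a concrete version of the pseudocode above, the following Python sketch assumes the role with AWS STS and passes the resulting temporary credentials to the SigV4 authentication plugin. The role ARN is a placeholder, and the sketch assumes the boto3 and cassandra-sigv4 packages are installed.

import boto3
from cassandra_sigv4.auth import SigV4AuthProvider

# Assume the IAM role and retrieve temporary security credentials.
sts = boto3.client("sts")
response = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/keyspaces-readonly",  # placeholder ARN
    RoleSessionName="keyspaces-session",
)
credentials = response["Credentials"]

# Build a boto3 session from the temporary credentials. The session token
# is required because these credentials are temporary.
session = boto3.Session(
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
    region_name="us-east-1",
)

# The auth provider can then be passed to the Cassandra driver's Cluster object.
auth_provider = SigV4AuthProvider(session)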
Create an IAM user for programmatic access to Amazon Keyspaces in your AWS account
To obtain credentials for programmatic access to Amazon Keyspaces with the AWS CLI, the AWS SDK, or the SigV4 plugin, you need to first create an IAM user or role. The process of creating an IAM user and configuring that IAM user to have programmatic access to Amazon Keyspaces is shown in the following steps:
1. Create the user in the AWS Management Console, the AWS CLI, Tools for Windows PowerShell, or using an AWS API operation. If you create the user in the AWS Management Console, then the credentials are created automatically.
2. If you create the user programmatically, then you must create an access key (access key ID and a secret access key) for that user in an additional step.
3. Give the user permissions to access Amazon Keyspaces.
For information about the permissions that you need in order to create an IAM user, see Permissions required to access IAM resources.

Console
Create an IAM user with programmatic access (console)
1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
2. In the navigation pane, choose Users and then choose Add users.
3. Type the user name for the new user. This is the sign-in name for AWS.
Note
User names can be a combination of up to 64 letters, digits, and these characters: plus (+), equal (=), comma (,), period (.), at sign (@), underscore (_), and hyphen (-). Names must be unique within an account. They are not distinguished by case. For example, you cannot create two users named TESTUSER and testuser.
4. Select Access key - Programmatic access to create an access key for the new user. You can view or download the access key when you get to the Final page. Choose Next: Permissions.
5. On the Set permissions page, choose Attach existing policies directly to assign permissions to the new user. This option displays the list of AWS managed and customer managed policies available in your account. You can enter keyspaces into the search field to display only the policies that are related to Amazon Keyspaces. For Amazon Keyspaces, the available managed policies are AmazonKeyspacesFullAccess and AmazonKeyspacesReadOnlyAccess. For more information about each policy, see the section called “AWS managed policies”. For testing purposes and to follow the connection tutorials, select the AmazonKeyspacesReadOnlyAccess policy for the new IAM user.
Note: As a best practice, we recommend that you follow the principle of least privilege and create custom policies that limit access to specific resources and only allow the required actions. For more information about IAM policies and to view example policies for Amazon Keyspaces, see the section called “Amazon Keyspaces identity-based policies”. After you have created custom permission policies, attach your policies to roles and then let users assume the appropriate roles temporarily.
Choose Next: Tags.
6. On the Add tags (optional) page you can add tags for the user, or choose Next: Review.
7. On the Review page you can see all of the choices you made up to this point. When you're ready to proceed, choose Create user.
8. To view the user's access keys (access key IDs and secret access keys), choose Show next to the password and access key.
To save the access keys, choose Download .csv and then save the file to a safe location.
Important
This is your only opportunity to view or download the secret access keys, and you need this information to use the SigV4 plugin. Save the user's new access key ID and secret access key in a safe and secure place. You will not have access to the secret keys again after this step.

CLI
Create an IAM user with programmatic access (AWS CLI)
1. Create a user with the following AWS CLI command.
aws iam create-user
2. Give the user programmatic access. This requires access keys, which can be generated in the following ways.
• AWS CLI: aws iam create-access-key
• Tools for Windows PowerShell: New-IAMAccessKey
• IAM API: CreateAccessKey
Important
This is your only opportunity to view or download the secret access keys, and you need this information to use the SigV4 plugin. Save the user's new access key ID and secret access key in a safe and secure place. You will not have access to the secret keys again after this step.
3. Attach the AmazonKeyspacesReadOnlyAccess policy to the user that defines the user's permissions.
Note: As a best practice, we recommend that you manage user permissions by adding the user to a group and attaching a policy to the group instead of attaching it directly to a
user.
• AWS CLI: aws iam attach-user-policy

Create new access keys for an IAM user
If you already have an IAM user, you can create new access keys at any time. For more information about key management, for example how to update access keys, see Managing access keys for IAM users.
To create access keys for an IAM user (console)
1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
2. In the navigation pane, choose Users.
3. Choose the name of the user whose access keys you want to create.
4. On the Summary page of the user, choose the Security credentials tab.
5. In the Access keys section, choose Create access key. To view the new access key pair, choose Show. Your credentials will look something like this:
• Access key ID: AKIAIOSFODNN7EXAMPLE
• Secret access key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Note
You will not have access to the secret access key again after this dialog box closes.
6. To download the key pair, choose Download .csv file. Store the keys in a secure location.
7. After you download the .csv file, choose Close.
When you create an access key, the key pair is active by default, and you can use the pair right away. The same keys can also be created programmatically, as the following sketch shows.
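A minimal Python (boto3) sketch of the same operation, assuming the user alice from the earlier examples:

import boto3

iam = boto3.client("iam")

# Create a new access key pair for the user.
response = iam.create_access_key(UserName="alice")
access_key = response["AccessKey"]

print(access_key["AccessKeyId"])
# The secret access key is returned only once; store it securely.
print(access_key["SecretAccessKey"])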
Store access keys for programmatic access
As a best practice, we recommend that you don't embed access keys directly into code. The AWS SDKs and the AWS Command Line Tools enable you to put access keys in known locations so that you do not have to keep them in code. Put access keys in one of the following locations:
• Environment variables – On a multitenant system, choose user environment variables, not system environment variables.
• CLI credentials file – The credentials and config file are updated when you run the command aws configure. The credentials file is located at ~/.aws/credentials on Linux, macOS, or Unix, or at C:\Users\USERNAME\.aws\credentials on Windows. This file can contain the credential details for the default profile and any named profiles.
• CLI configuration file – The credentials and config file are updated when you run the command aws configure. The config file is located at ~/.aws/config on Linux, macOS, or Unix, or at C:\Users\USERNAME\.aws\config on Windows. This file contains the configuration settings for the default profile and any named profiles.
Storing access keys as environment variables is a prerequisite for the section called “Authentication plugin for Java 4.x”. Note that this includes the default AWS Region. The client searches for credentials using the default credentials provider chain, and access keys stored as environment variables take precedence over all other locations, for example configuration files. For more information, see Configuration settings and precedence.
The following examples show how you can configure environment variables for the default user.
Linux, macOS, or Unix
$ export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
$ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
$ export AWS_SESSION_TOKEN=AQoDYXdzEJr...<remainder of security token>
$ export AWS_DEFAULT_REGION=aws-region
Setting the environment variable changes the value used until the end of your shell session, or until you set the variable to a different value. You can make the variables persistent across future sessions by setting them in your shell's startup script.
Windows Command Prompt
C:\> setx AWS_ACCESS_KEY_ID AKIAIOSFODNN7EXAMPLE
C:\> setx AWS_SECRET_ACCESS_KEY wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
C:\> setx AWS_SESSION_TOKEN AQoDYXdzEJr...<remainder of security token>
C:\> setx AWS_DEFAULT_REGION aws-region
Using set to set an environment variable changes the value used until the end of the current command prompt session, or until you set the variable to a different value. Using setx to set an environment variable changes the value used in both the current command prompt session and all command prompt sessions that you create after running the command. It does not affect other command shells that are already running at the time you run the command.
PowerShell
PS C:\> $Env:AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
PS C:\> $Env:AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
PS C:\> $Env:AWS_SESSION_TOKEN="AQoDYXdzEJr...<remainder of security token>"
PS C:\> $Env:AWS_DEFAULT_REGION="aws-region"
If you set an environment variable at the PowerShell prompt as shown in the previous examples, it saves the value for only the duration of the current session. To make the environment variable setting persistent across all PowerShell and Command Prompt sessions, store it by using the System application in Control Panel. Alternatively, you can set the variable for all future PowerShell sessions by adding it to your PowerShell profile. See the PowerShell documentation for more information about storing environment variables or persisting them across sessions.

Service endpoints for Amazon Keyspaces
Topics
• Ports and protocols
• Global endpoints
• AWS GovCloud (US) Region FIPS endpoints
• China Regions endpoints

Ports and protocols
You can access Amazon Keyspaces programmatically by running a cqlsh client, with an Apache 2.0 licensed Cassandra driver, or by using the AWS CLI and the AWS SDK. The following table shows the ports and protocols for the different access mechanisms.

Programmatic access    Port    Protocol
CQLSH                  9142    TLS
Cassandra driver       9142    TLS
AWS CLI                443     HTTPS
AWS SDK                443     HTTPS

For TLS connections, Amazon Keyspaces uses the Starfield CA to authenticate against the server. For more information, see the section called “How to manually configure cqlsh connections for TLS” or the Before you begin section of your driver in the section called “Using a Cassandra client driver” chapter.

Global endpoints
Amazon Keyspaces is available in the following AWS Regions. This table shows the available service endpoint for each Region.
Region Name                Region           Endpoint                                    Protocol
US East (Ohio)             us-east-2        cassandra.us-east-2.amazonaws.com           HTTPS and TLS
US East (N. Virginia)      us-east-1        cassandra.us-east-1.amazonaws.com           HTTPS and TLS
                                            cassandra-fips.us-east-1.amazonaws.com      TLS
US West (N. California)    us-west-1        cassandra.us-west-1.amazonaws.com           HTTPS and TLS
US West (Oregon)           us-west-2        cassandra.us-west-2.amazonaws.com           HTTPS and TLS
                                            cassandra-fips.us-west-2.amazonaws.com      TLS
Africa (Cape Town)         af-south-1       cassandra.af-south-1.amazonaws.com          HTTPS and TLS
Asia Pacific (Hong Kong)   ap-east-1        cassandra.ap-east-1.amazonaws.com           HTTPS and TLS
Asia Pacific (Mumbai)      ap-south-1       cassandra.ap-south-1.amazonaws.com          HTTPS and TLS
Asia Pacific (Seoul)       ap-northeast-2   cassandra.ap-northeast-2.amazonaws.com      HTTPS and TLS
Asia Pacific (Singapore)   ap-southeast-1   cassandra.ap-southeast-1.amazonaws.com      HTTPS and TLS
Asia Pacific (Sydney)      ap-southeast-2   cassandra.ap-southeast-2.amazonaws.com      HTTPS and TLS
Asia Pacific (Tokyo)       ap-northeast-1   cassandra.ap-northeast-1.amazonaws.com      HTTPS and TLS
Canada (Central)           ca-central-1     cassandra.ca-central-1.amazonaws.com        HTTPS and TLS
Europe (Frankfurt)         eu-central-1     cassandra.eu-central-1.amazonaws.com        HTTPS and TLS
Europe (Ireland)           eu-west-1        cassandra.eu-west-1.amazonaws.com           HTTPS and TLS
Europe (London)            eu-west-2        cassandra.eu-west-2.amazonaws.com           HTTPS and TLS
Europe (Paris)             eu-west-3        cassandra.eu-west-3.amazonaws.com           HTTPS and TLS
Europe (Stockholm)         eu-north-1       cassandra.eu-north-1.amazonaws.com          HTTPS and TLS
Middle East (Bahrain)      me-south-1       cassandra.me-south-1.amazonaws.com          HTTPS and TLS
South America (São Paulo)  sa-east-1        cassandra.sa-east-1.amazonaws.com           HTTPS and TLS
AWS GovCloud (US-East)     us-gov-east-1    cassandra.us-gov-east-1.amazonaws.com       HTTPS and TLS
AWS GovCloud (US-West)     us-gov-west-1    cassandra.us-gov-west-1.amazonaws.com       HTTPS and TLS

AWS GovCloud (US) Region FIPS endpoints
Available FIPS endpoints in the AWS GovCloud (US) Region. For more information, see Amazon Keyspaces in the AWS GovCloud (US) User Guide.

Region name                Region           FIPS endpoint                               Protocol
AWS GovCloud (US-East)     us-gov-east-1    cassandra.us-gov-east-1.amazonaws.com       HTTPS and TLS
AWS GovCloud (US-West)     us-gov-west-1    cassandra.us-gov-west-1.amazonaws.com       HTTPS and TLS

China Regions endpoints
The following Amazon Keyspaces endpoints are available in the AWS China Regions. To access these endpoints, you have to sign up for a separate set of account credentials unique to the China Regions. For more information, see China Signup, Accounts, and Credentials.

Region name        Region            Endpoint                                       Protocol
China (Beijing)    cn-north-1        cassandra.cn-north-1.amazonaws.com.cn          HTTPS and TLS
China (Ningxia)    cn-northwest-1    cassandra.cn-northwest-1.amazonaws.com.cn      HTTPS and TLS
Using cqlsh to connect to Amazon Keyspaces
To connect to Amazon Keyspaces using cqlsh, you can use the cqlsh-expansion. This is a toolkit that contains common Apache Cassandra tooling like cqlsh and helpers that are preconfigured for Amazon Keyspaces while maintaining full compatibility with Apache Cassandra. The cqlsh-expansion integrates the SigV4 authentication plugin and allows you to connect using IAM access keys instead of a user name and password. You only need to install the cqlsh scripts to make a connection, not the full Apache Cassandra distribution, because Amazon Keyspaces is serverless. This lightweight install package includes the cqlsh-expansion and the classic cqlsh scripts that you can install on any platform that supports Python.
Note
Murmur3Partitioner is the recommended partitioner for Amazon Keyspaces and the cqlsh-expansion. The cqlsh-expansion doesn't support the Amazon Keyspaces DefaultPartitioner. For more information, see the section called “Working with partitioners”.
For general information about cqlsh, see cqlsh: the CQL shell.

Topics
• Using the cqlsh-expansion to connect to Amazon Keyspaces
• How to manually configure cqlsh connections for TLS

Using the cqlsh-expansion to connect to Amazon Keyspaces
Installing and configuring the cqlsh-expansion
1. To install the cqlsh-expansion Python package, you can run a pip command. This installs the cqlsh-expansion scripts on your machine using a pip install, along with a file containing a list of dependencies. The --user flag tells pip to use the Python user install directory for your platform. On a Unix based system, that should be the ~/.local/ directory. You need Python 3 to install the cqlsh-expansion; to find out your Python version, use python --version. To install, you can run the following command.
python3 -m pip install --user cqlsh-expansion
The output should look similar to this.
Collecting cqlsh-expansion
  Downloading cqlsh_expansion-0.9.6-py3-none-any.whl (153 kB)
     ######################################## 153.7/153.7 KB 3.3 MB/s eta 0:00:00
Collecting cassandra-driver
  Downloading cassandra_driver-3.28.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (19.1 MB)
     ######################################## 19.1/19.1 MB 44.5 MB/s eta 0:00:00
Requirement already satisfied: six>=1.12.0 in /usr/lib/python3/dist-packages (from cqlsh-expansion) (1.16.0)
Collecting boto3
  Downloading boto3-1.29.2-py3-none-any.whl (135 kB)
     ######################################## 135.8/135.8 KB 17.2 MB/s eta 0:00:00
Collecting cassandra-sigv4>=4.0.2
  Downloading cassandra_sigv4-4.0.2-py2.py3-none-any.whl (9.8 kB)
Collecting botocore<1.33.0,>=1.32.2
  Downloading botocore-1.32.2-py3-none-any.whl (11.4 MB)
     ######################################## 11.4/11.4 MB 60.9 MB/s eta 0:00:00
Collecting s3transfer<0.8.0,>=0.7.0
  Downloading s3transfer-0.7.0-py3-none-any.whl (79 kB)
     ######################################## 79.8/79.8 KB 13.1 MB/s eta 0:00:00
Collecting jmespath<2.0.0,>=0.7.1
  Downloading jmespath-1.0.1-py3-none-any.whl (20 kB)
Collecting geomet<0.3,>=0.1
  Downloading geomet-0.2.1.post1-py3-none-any.whl (18 kB)
Collecting python-dateutil<3.0.0,>=2.1
  Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
     ######################################## 247.7/247.7 KB 33.1 MB/s eta 0:00:00
Requirement already satisfied: urllib3<2.1,>=1.25.4 in /usr/lib/python3/dist-packages (from botocore<1.33.0,>=1.32.2->boto3->cqlsh-expansion) (1.26.5)
Requirement already satisfied: click in /usr/lib/python3/dist-packages (from geomet<0.3,>=0.1->cassandra-driver->cqlsh-expansion) (8.0.3)
Installing collected packages: python-dateutil, jmespath, geomet, cassandra-driver, botocore, s3transfer, boto3, cassandra-sigv4, cqlsh-expansion
  WARNING: The script geomet is installed in '/home/ubuntu/.local/bin' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
  WARNING: The scripts cqlsh, cqlsh-expansion and cqlsh-expansion.init are installed in '/home/ubuntu/.local/bin' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed boto3-1.29.2 botocore-1.32.2 cassandra-driver-3.28.0 cassandra-sigv4-4.0.2 cqlsh-expansion-0.9.6 geomet-0.2.1.post1 jmespath-1.0.1 python-dateutil-2.8.2 s3transfer-0.7.0
If the install directory is not in the PATH, you need to add it following the instructions of your operating system. Below is one example for Ubuntu Linux.
export PATH="$PATH:/home/ubuntu/.local/bin"
To confirm that the package is installed, you can run the following command.
cqlsh-expansion --version
The output should look like this.
cqlsh 6.1.0
2. To configure the cqlsh-expansion, you can run a post-install script to automatically complete the following steps:
1. Create the .cassandra directory in the user home directory if it doesn't already exist.
2. Copy a preconfigured cqlshrc configuration file into the .cassandra directory.
3. Copy the Starfield digital certificate into the .cassandra directory. Amazon Keyspaces uses this certificate to configure the secure connection with Transport Layer Security (TLS).
Encryption in transit provides an additional layer of data protection by encrypting your data as it travels to and from Amazon Keyspaces. To review the script first, you can access it in the GitHub repo at post_install.py. To use the script, you can run the following command.
cqlsh-expansion.init
Note
The directory and file created by the post-install script are not removed when you uninstall the cqlsh-expansion using pip uninstall, and have to be deleted manually.

Connecting to Amazon Keyspaces using the cqlsh-expansion
1. Configure your AWS Region and add it as a user environment variable. To add your default Region as an environment variable on a Unix based system, you can run the following command. For this example, we use US East (N. Virginia).
export AWS_DEFAULT_REGION=us-east-1
For more information about how to set environment variables, including for other platforms, see How to set environment variables.
2. Find your service endpoint. Choose the appropriate service endpoint for your Region. To review the available endpoints for Amazon Keyspaces, see the section called “Service endpoints”. For this example, we use the endpoint cassandra.us-east-1.amazonaws.com.
3. Configure the authentication method. Connecting with IAM access keys (IAM users, roles, and federated identities) is the recommended method for enhanced security. Before you can connect with IAM access keys, you need to complete the following steps:
a. Create an IAM user, or follow the best practice and create an IAM role that IAM users can assume. For more information on how to create IAM access keys, see the section called “Create IAM credentials for AWS authentication”.
b. Create an IAM policy that grants the role (or IAM user) at least read-only access to Amazon Keyspaces. For more information about the permissions required for the IAM user or role to connect to Amazon Keyspaces, see the section called “Accessing Amazon Keyspaces tables”.
c. Add the access keys of the IAM user to the user's environment variables as shown in the following example.
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
For more information about how to set environment variables, including for other platforms, see How to set environment variables.
Note
If you're connecting from an Amazon EC2 instance, you also need to configure an outbound rule in the security group that allows traffic from the instance to Amazon Keyspaces. For more information about how to view and edit EC2 outbound rules, see Add rules to a security group in the Amazon EC2 User Guide.
4. Connect to Amazon Keyspaces using the cqlsh-expansion and SigV4 authentication. To connect to Amazon Keyspaces with the cqlsh-expansion, you can use the following command. Make sure to replace the service endpoint with the correct endpoint for your Region.
cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl
If the connection is successful, you should see output similar to the following example.
Connected to Amazon Keyspaces at cassandra.us-east-1.amazonaws.com:9142
[cqlsh 6.1.0 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh current consistency level is ONE.
cqlsh>
If you encounter a connection error, see the section called “Cqlsh connection errors” for troubleshooting information.
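The same IAM access keys and SigV4 flow also work outside of cqlsh. As a point of comparison, the following is a minimal Python sketch that connects with the Cassandra driver and the SigV4 plugin, assuming the cassandra-driver and cassandra-sigv4 packages are installed and the Starfield certificate was downloaded as sf-class2-root.crt:

import boto3
from ssl import SSLContext, PROTOCOL_TLSv1_2, CERT_REQUIRED
from cassandra.cluster import Cluster
from cassandra_sigv4.auth import SigV4AuthProvider

# TLS is required by Amazon Keyspaces; trust the Starfield root certificate.
ssl_context = SSLContext(PROTOCOL_TLSv1_2)
ssl_context.load_verify_locations("sf-class2-root.crt")
ssl_context.verify_mode = CERT_REQUIRED

# The SigV4 auth provider reads IAM access keys from the boto3 session,
# which picks them up from the environment variables set in step c above.
boto_session = boto3.Session(region_name="us-east-1")
auth_provider = SigV4AuthProvider(boto_session)

cluster = Cluster(
    ["cassandra.us-east-1.amazonaws.com"],
    ssl_context=ssl_context,
    auth_provider=auth_provider,
    port=9142,
)
session = cluster.connect()
print(session.execute("SELECT keyspace_name FROM system_schema_mcs.keyspaces").one())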
• Connect to Amazon Keyspaces with service-specific credentials. To connect with the traditional username and password combination that Cassandra uses for authentication, you must first create service-specific credentials for Amazon Keyspaces as described in the section called “Create service-specific credentials”. You also have to give that user permissions to access Amazon Keyspaces; for more information, see the section called “Accessing Amazon Keyspaces tables”.
After you have created service-specific credentials and permissions for the user, you must update the cqlshrc file, typically found in the user directory path ~/.cassandra/. In the cqlshrc file, go to the Cassandra [authentication] section and comment out the SigV4 module and class under [auth_provider] using the ";" character, as shown in the following example.
[auth_provider]
; module = cassandra_sigv4.auth
; classname = SigV4AuthProvider
After you have updated the cqlshrc file, you can connect to Amazon Keyspaces with service-specific credentials using the following command.
cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 -u myUserName -p myPassword --ssl

Cleanup
• To remove the cqlsh-expansion package, you can use the pip uninstall command.
pip3 uninstall cqlsh-expansion
The pip3 uninstall command doesn't remove the directory and related files created by the post-install script. To remove the folder and files created by the post-install script, you can delete the .cassandra directory.

How to manually configure cqlsh connections for TLS
Amazon Keyspaces only accepts secure connections using Transport Layer Security (TLS). You can use the cqlsh-expansion utility that automatically downloads the certificate for you and installs a preconfigured cqlshrc configuration file. For more information, see the section called “Using the cqlsh-expansion” on this page.
If you want to download the certificate and configure the connection manually, you can do so using the following steps.
1. Download the Starfield digital certificate using the following command and save sf-class2-root.crt locally or in your home directory.
curl https://certs.secureserver.net/repository/sf-class2-root.crt -O
Note
You can also use the Amazon digital certificate to connect to Amazon Keyspaces and can continue to do so if your client is connecting to Amazon Keyspaces successfully. The Starfield certificate provides additional backwards compatibility for clients using older certificate authorities.
2. Open the cqlshrc configuration file in the Cassandra home directory, for example ${HOME}/.cassandra/cqlshrc, and add the following lines.
[connection]
port = 9142
factory = cqlshlib.ssl.ssl_transport_factory
[ssl]
validate = true
certfile = path_to_file/sf-class2-root.crt

Using the AWS CLI to connect to Amazon Keyspaces
You can use the AWS Command Line Interface (AWS CLI) to control multiple AWS services from the command line and automate them through scripts. With Amazon Keyspaces you can use the AWS CLI for data definition language (DDL) operations, such as creating a table. In addition, you can use infrastructure as code (IaC) services and tools such as AWS CloudFormation and Terraform.
Before you can use the AWS CLI with Amazon Keyspaces, you must get an access key ID and secret access key. For more information, see the section called “Create IAM credentials for AWS authentication”. For a complete listing of all the commands available for Amazon Keyspaces in the AWS CLI, see the AWS CLI Command Reference.

Topics
• Downloading and Configuring the AWS CLI
• Using the AWS CLI with Amazon Keyspaces

Downloading and Configuring the AWS CLI
The AWS CLI is available at https://aws.amazon.com/cli. It runs on Windows, macOS, or Linux. After downloading the AWS CLI, follow these steps to install and configure it:
1. Go to the AWS Command Line Interface User Guide.
2. Follow the instructions for Installing the AWS CLI and Configuring the AWS CLI.

Using the AWS CLI with Amazon Keyspaces
The command line format consists of an Amazon Keyspaces operation name followed by the parameters for that operation. The AWS CLI supports a shorthand syntax for the parameter values, as well as JSON. The following Amazon Keyspaces examples use AWS CLI shorthand syntax. For more information, see Using shorthand syntax with the AWS CLI.
The following command creates a keyspace with the name catalog.
aws keyspaces create-keyspace --keyspace-name 'catalog'
The command returns the resource Amazon Resource Name (ARN) in the output.
{
    "resourceArn": "arn:aws:cassandra:us-east-1:111222333444:/keyspace/catalog/"
}
To confirm that the keyspace catalog exists, you can use the following command.
aws keyspaces get-keyspace --keyspace-name 'catalog'
The output of the command returns the following values.
{
    "keyspaceName": "catalog",
    "resourceArn": "arn:aws:cassandra:us-east-1:111222333444:/keyspace/catalog/"
}
The following command creates a table with the name book_awards. The partition key of the table consists of the columns year and award, and the clustering key consists of the columns category and rank; both clustering columns use the ascending sort order. (For easier readability, long commands in this section are broken into separate lines.)
aws keyspaces create-table --keyspace-name 'catalog'
    --table-name 'book_awards'
    --schema-definition 'allColumns=[{name=year,type=int},{name=award,type=text},{name=rank,type=int},{name=category,type=text},{name=author,type=text},{name=book_title,type=text},{name=publisher,type=text}],partitionKeys=[{name=year},{name=award}],clusteringKeys=[{name=category,orderBy=ASC},{name=rank,orderBy=ASC}]'
This command results in the following output.
{
    "resourceArn": "arn:aws:cassandra:us-east-1:111222333444:/keyspace/catalog/table/book_awards"
}
To confirm the metadata and properties of the table, you can use the following command.
aws keyspaces get-table --keyspace-name 'catalog' --table-name 'book_awards'
This command returns the following output.
{
    "keyspaceName": "catalog",
    "tableName": "book_awards",
    "resourceArn": "arn:aws:cassandra:us-east-1:111222333444:/keyspace/catalog/table/book_awards",
    "creationTimestamp": 1645564368.628,
    "status": "ACTIVE",
    "schemaDefinition": {
        "allColumns": [
            {
                "name": "year",
                "type": "int"
            },
            {
                "name": "award",
                "type": "text"
            },
            {
                "name": "category",
                "type": "text"
            },
            {
                "name": "rank",
                "type": "int"
            },
            {
                "name": "author",
                "type": "text"
            },
            {
                "name": "book_title",
                "type": "text"
            },
            {
                "name": "publisher",
                "type": "text"
            }
        ],
        "partitionKeys": [
            {
                "name": "year"
            },
            {
                "name": "award"
            }
        ],
        "clusteringKeys": [
            {
                "name": "category",
                "orderBy": "ASC"
            },
            {
                "name": "rank",
                "orderBy": "ASC"
            }
        ],
        "staticColumns": []
    },
    "capacitySpecification": {
        "throughputMode": "PAY_PER_REQUEST",
        "lastUpdateToPayPerRequestTimestamp": 1645564368.628
    },
    "encryptionSpecification": {
        "type": "AWS_OWNED_KMS_KEY"
    },
    "pointInTimeRecovery": {
        "status": "DISABLED"
    },
    "ttl": {
        "status": "ENABLED"
    },
    "defaultTimeToLive": 0,
    "comment": {
        "message": ""
    }
}
When creating tables with complex schemas, it can be helpful to load the table's schema definition from a JSON file. The following is an example of this. Download the schema definition example JSON file from schema_definition.zip and extract schema_definition.json, taking note of the path to the file. In this example, the schema definition JSON file is located in the current directory. For different file path options, see How to load parameters from a file.
aws keyspaces create-table --keyspace-name 'catalog'
    --table-name 'book_awards'
    --schema-definition 'file://schema_definition.json'
The following examples show how to create a simple table with the name myTable with additional options. Note that the commands are broken down into separate rows to improve readability. This command shows how to create a table and:
• set the capacity mode of the table
• enable Point-in-time recovery for the table
• set the default Time to Live (TTL) value for the table to one year
• add two tags for the table
aws keyspaces create-table --keyspace-name 'catalog'
    --table-name 'myTable'
    --schema-definition 'allColumns=[{name=id,type=int},{name=name,type=text},{name=date,type=timestamp}],partitionKeys=[{name=id}]'
    --capacity-specification 'throughputMode=PROVISIONED,readCapacityUnits=5,writeCapacityUnits=5'
    --point-in-time-recovery 'status=ENABLED'
    --default-time-to-live '31536000'
    --tags 'key=env,value=test' 'key=dpt,value=sec'
This example shows how to create a new table that uses a customer managed key for encryption and has TTL enabled to allow you to set expiration dates for columns and rows. To run this sample, you must replace the resource ARN for the customer managed AWS KMS key with your own key and ensure Amazon Keyspaces has access to it.
aws keyspaces create-table --keyspace-name 'catalog'
    --table-name 'myTable'
    --schema-definition 'allColumns=[{name=id,type=int},{name=name,type=text},{name=date,type=timestamp}],partitionKeys=[{name=id}]'
    --encryption-specification 'type=CUSTOMER_MANAGED_KMS_KEY,kmsKeyIdentifier=arn:aws:kms:us-east-1:111222333444:key/11111111-2222-3333-4444-555555555555'
    --ttl 'status=ENABLED'
Using the API to connect to Amazon Keyspaces
You can use the AWS SDK and the AWS Command Line Interface (AWS CLI) to work interactively with Amazon Keyspaces. You can use the API for data definition language (DDL) operations, such as creating a keyspace or a table. In addition, you can use infrastructure as code (IaC) services and tools such as AWS CloudFormation and Terraform.
Before you can use the AWS CLI with Amazon Keyspaces, you must get an access key ID and secret access key. For more information, see the section called “Create IAM credentials for AWS authentication”. For a complete listing of all operations available for Amazon Keyspaces in the API, see the Amazon Keyspaces API Reference.
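The AWS SDKs expose the same control-plane operations shown in the AWS CLI examples above. The following is a minimal Python (boto3) sketch that creates the catalog keyspace and confirms that it exists; it assumes credentials and a default Region are already configured.

import boto3

# boto3 exposes the Amazon Keyspaces control plane as the "keyspaces" client.
client = boto3.client("keyspaces")

# Create a keyspace, then read it back to confirm that it exists.
client.create_keyspace(keyspaceName="catalog")
response = client.get_keyspace(keyspaceName="catalog")
print(response["resourceArn"])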
Topics
• Using a Cassandra Java client driver to access Amazon Keyspaces programmatically
• Using a Cassandra Python client driver to access Amazon Keyspaces programmatically
• Using a Cassandra Node.js client driver to access Amazon Keyspaces programmatically
• Using a Cassandra .NET Core client driver to access Amazon Keyspaces programmatically
• Using a Cassandra Go client driver to access Amazon Keyspaces programmatically
• Using a Cassandra Perl client driver to access Amazon Keyspaces programmatically

Using a Cassandra Java client driver to access Amazon Keyspaces programmatically

This section shows you how to connect to Amazon Keyspaces by using a Java client driver.

Note
Java 17 and the DataStax Java Driver 4.17 currently have Beta support only. For more information, see https://docs.datastax.com/en/developer/java-driver/4.17/upgrade_guide/.

To provide users and applications with credentials for programmatic access to Amazon Keyspaces resources, you can do either of the following:

• Create service-specific credentials that are associated with a specific AWS Identity and Access Management (IAM) user.
• For enhanced security, we recommend creating IAM access keys for IAM identities that are used across all AWS services. The Amazon Keyspaces SigV4 authentication plugin for Cassandra client drivers enables you to authenticate calls to Amazon Keyspaces using IAM access keys instead of a user name and password. For more information, see the section called “Create IAM credentials for AWS authentication”.

Note
For an example of how to use Amazon Keyspaces with Spring Boot, see https://github.com/aws-samples/amazon-keyspaces-examples/tree/main/java/datastax-v4/spring.

Topics
• Before you begin
• Step-by-step tutorial to connect to Amazon Keyspaces using the DataStax Java driver for Apache Cassandra using service-specific credentials
• Step-by-step tutorial to connect to Amazon Keyspaces using the 4.x DataStax Java driver for Apache Cassandra and the SigV4 authentication plugin
• Connect to Amazon Keyspaces using the 3.x DataStax Java driver for Apache Cassandra and the SigV4 authentication plugin

Before you begin

To connect to Amazon Keyspaces, you need to complete the following tasks before you can start.

1. Amazon Keyspaces requires the use of Transport Layer Security (TLS) to help secure connections with clients.

   a. Download the Starfield digital certificate using the following command and save sf-class2-root.crt locally or in your home directory.

      curl https://certs.secureserver.net/repository/sf-class2-root.crt -O

      Note
      You can also use the Amazon digital certificate to connect to Amazon Keyspaces and can continue to do so if your client is connecting to Amazon Keyspaces successfully. The Starfield certificate provides additional backwards compatibility for clients using older certificate authorities.

   b. Convert the Starfield digital certificate into a trustStore file.

      openssl x509 -outform der -in sf-class2-root.crt -out temp_file.der
      keytool -import -alias cassandra -keystore cassandra_truststore.jks -file temp_file.der

      In this step, you need to create a password for the keystore and trust this certificate. The interactive command looks like this.

      Enter keystore password:
      Re-enter new password:
      Owner: OU=Starfield Class 2 Certification Authority, O="Starfield Technologies, Inc.", C=US
      Issuer: OU=Starfield Class 2 Certification Authority, O="Starfield Technologies, Inc.", C=US
      Serial number: 0
      Valid from: Tue Jun 29 17:39:16 UTC 2004 until: Thu Jun 29 17:39:16 UTC 2034
      Certificate fingerprints:
        MD5:  32:4A:4B:BB:C8:63:69:9B:BE:74:9A:C6:DD:1D:46:24
        SHA1: AD:7E:1C:28:B0:64:EF:8F:60:03:40:20:14:C3:D0:E3:37:0E:B5:8A
        SHA256: 14:65:FA:20:53:97:B8:76:FA:A6:F0:A9:95:8E:55:90:E4:0F:CC:7F:AA:4F:B7:C2:C8:67:75:21:FB:5F:B6:58
      Signature algorithm name: SHA1withRSA
      Subject Public Key Algorithm: 2048-bit RSA key
      Version: 3
      Extensions:
      #1: ObjectId: 2.5.29.35 Criticality=false
      AuthorityKeyIdentifier [
      KeyIdentifier [
      0000: BF 5F B7 D1 CE DD 1F 86   F4 5B 55 AC DC D7 10 C2  ._.......[U.....
      0010: 0E A9 88 E7                                        ....
      ]
      [OU=Starfield Class 2 Certification Authority, O="Starfield Technologies, Inc.", C=US]
      SerialNumber: [ 00]
      ]
      #2: ObjectId: 2.5.29.19 Criticality=false
      BasicConstraints:[
        CA:true
        PathLen:2147483647
      ]
      #3: ObjectId: 2.5.29.14 Criticality=false
      SubjectKeyIdentifier [
      KeyIdentifier [
      0000: BF 5F B7 D1 CE DD 1F 86   F4 5B 55 AC DC D7 10 C2  ._.......[U.....
      0010: 0E A9 88 E7                                        ....
      ]
      ]

      Trust this certificate? [no]: y

2. Attach the trustStore file in the JVM arguments:

   -Djavax.net.ssl.trustStore=path_to_file/cassandra_truststore.jks
   -Djavax.net.ssl.trustStorePassword=my_password
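As an alternative to JVM flags, the same two settings can be applied from application code before the driver is initialized. The following is a minimal sketch, not part of the official tutorial; the path and password are placeholders for the values you chose above.

// Equivalent to the -Djavax.net.ssl.* JVM arguments shown above.
// Set these before the first CqlSession is built.
System.setProperty("javax.net.ssl.trustStore", "path_to_file/cassandra_truststore.jks");
System.setProperty("javax.net.ssl.trustStorePassword", "my_password");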
Step-by-step tutorial to connect to Amazon Keyspaces using the DataStax Java driver for Apache Cassandra using service-specific credentials

The following step-by-step tutorial walks you through connecting to Amazon Keyspaces with service-specific credentials. Specifically, you'll use the 4.0 version of the DataStax Java driver for Apache Cassandra.

Topics
• Step 1: Prerequisites
• Step 2: Configure the driver
• Step 3: Run the sample application

Step 1: Prerequisites

To follow this tutorial, you need to generate service-specific credentials and add the DataStax Java driver for Apache Cassandra to your Java project.

• Generate service-specific credentials for your Amazon Keyspaces IAM user by completing the steps in the section called “Create service-specific credentials”. If you prefer to use IAM access keys for authentication, see the section called “Authentication plugin for Java 4.x”.
• Add the DataStax Java driver for Apache Cassandra to your Java project, for example with the Maven dependency sketched after this list. Ensure that you're using a version of the driver that supports Apache Cassandra 3.11.2. For more information, see the DataStax Java driver for Apache Cassandra documentation.
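If your project uses Maven, adding the 4.x driver is typically a single dependency. The coordinates below are a sketch based on the open-source 4.x DataStax driver; confirm the artifact and version against the driver documentation before using them.

<dependency>
    <!-- Open-source DataStax Java driver core; the version shown is an assumption, use the latest 4.x release. -->
    <groupId>com.datastax.oss</groupId>
    <artifactId>java-driver-core</artifactId>
    <version>4.17.0</version>
</dependency>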
Step 2: Configure the driver

You can specify settings for the DataStax Java Cassandra driver by creating a configuration file for your application. This configuration file overrides the default settings and tells the driver to connect to the Amazon Keyspaces service endpoint using port 9142. For a list of available service endpoints, see the section called “Service endpoints”.

Create a configuration file and save the file in the application's resources folder, for example, src/main/resources/application.conf. Open application.conf and add the following configuration settings.

1. Authentication provider – Create the authentication provider with the PlainTextAuthProvider class. ServiceUserName and ServicePassword should match the user name and password you obtained when you generated the service-specific credentials by following the steps in Create service-specific credentials for programmatic access to Amazon Keyspaces.

   Note
   You can use short-term credentials by using the authentication plugin for the DataStax Java driver for Apache Cassandra instead of hardcoding credentials in your driver configuration file. To learn more, follow the instructions for the section called “Authentication plugin for Java 4.x”.

2. Local data center – Set the value for local-datacenter to the Region you're connecting to. For example, if the application is connecting to cassandra.us-east-2.amazonaws.com, then set the local data center to us-east-2. For all available AWS Regions, see the section called “Service endpoints”. Set slow-replica-avoidance = false to load balance against all available nodes.

3. SSL/TLS – Initialize the SSLEngineFactory by adding a section in the configuration file with a single line that specifies the class with class = DefaultSslEngineFactory. Provide the path to the trustStore file and the password that you created previously. Amazon Keyspaces doesn't support hostname validation of peers, so set this option to false.

datastax-java-driver {
    basic.contact-points = [ "cassandra.us-east-2.amazonaws.com:9142" ]
    advanced.auth-provider {
        class = PlainTextAuthProvider
        username = "ServiceUserName"
        password = "ServicePassword"
    }
    basic.load-balancing-policy {
        local-datacenter = "us-east-2"
        slow-replica-avoidance = false
    }
    advanced.ssl-engine-factory {
        class = DefaultSslEngineFactory
        truststore-path = "./src/main/resources/cassandra_truststore.jks"
        truststore-password = "my_password"
        hostname-validation = false
    }
}

Note
Instead of adding the path to the trustStore in the configuration file, you can also add the trustStore path directly in the application code, or you can add the path to the trustStore to your JVM arguments.

Step 3: Run the sample application

This code example shows a simple command line application that creates a connection pool to Amazon Keyspaces by using the configuration file we created earlier. It confirms that the connection is established by running a simple query.

package <your package>;

// add the following imports to your project
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.config.DriverConfigLoader;
import com.datastax.oss.driver.api.core.cql.ResultSet;
import com.datastax.oss.driver.api.core.cql.Row;

public class App {
    public static void main( String[] args ) {
        // Use DriverConfigLoader to load your configuration file
        DriverConfigLoader loader = DriverConfigLoader.fromClasspath("application.conf");

        try (CqlSession session = CqlSession.builder()
                .withConfigLoader(loader)
                .build()) {
            ResultSet rs = session.execute("select * from system_schema.keyspaces");
            Row row = rs.one();
            System.out.println(row.getString("keyspace_name"));
        }
    }
}

Note
Use a try block to establish the connection to ensure that it's always closed. If you don't use a try block, remember to close your connection to avoid leaking resources.
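The sample above loads application.conf from the classpath. If the file lives elsewhere, the 4.x driver's DriverConfigLoader can read it from the file system instead; a minimal sketch (the path is an assumption, substitute your own):

// Load the same driver settings from an absolute path instead of the classpath.
DriverConfigLoader loader =
    DriverConfigLoader.fromFile(new java.io.File("/etc/myapp/application.conf"));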
Step-by-step tutorial to connect to Amazon Keyspaces using the 4.x DataStax Java driver for Apache Cassandra and the SigV4 authentication plugin

The following section describes how to use the SigV4 authentication plugin for the open-source 4.x DataStax Java driver for Apache Cassandra to access Amazon Keyspaces (for Apache Cassandra). The plugin is available from the GitHub repository.

The SigV4 authentication plugin allows you to use IAM credentials for users or roles when connecting to Amazon Keyspaces. Instead of requiring a user name and password, this plugin signs API requests using access keys. For more information, see the section called “Create IAM credentials for AWS authentication”.

Step 1: Prerequisites

To follow this tutorial, you need to complete the following tasks.

• If you haven't already done so, create credentials for your IAM user or role following the steps at the section called “Create IAM credentials for AWS authentication”. This tutorial assumes that the access keys are stored as environment variables. For more information, see the section called “Manage access keys”.
• Add the DataStax Java driver for Apache Cassandra to your Java project. Ensure that you're using a version of the driver that supports Apache Cassandra 3.11.2. For more information, see the DataStax Java Driver for Apache Cassandra documentation.
• Add the authentication plugin to your application. The authentication plugin supports version 4.x of the DataStax Java driver for Apache Cassandra. If you're using Apache Maven, or a build system that can use Maven dependencies, add the following dependencies to your pom.xml file.

  Important
  Replace the version of the plugin with the latest version as shown at the GitHub repository.

  <dependency>
      <groupId>software.aws.mcs</groupId>
      <artifactId>aws-sigv4-auth-cassandra-java-driver-plugin</artifactId>
      <version>4.0.9</version>
  </dependency>

Step 2: Configure the driver

You can specify settings for the DataStax Java Cassandra driver by creating a configuration file for your application. This configuration file overrides the default settings and tells the driver to connect to the Amazon Keyspaces service endpoint using port 9142. For a list of available service endpoints, see the section called “Service endpoints”.

Create a configuration file and save the file in the application's resources folder, for example, src/main/resources/application.conf. Open application.conf and add the following configuration settings.

1. Authentication provider – Set the advanced.auth-provider.class to a new instance of software.aws.mcs.auth.SigV4AuthProvider. The SigV4AuthProvider is the authentication handler provided by the plugin for performing SigV4 authentication.

2. Local data center – Set the value for local-datacenter to the Region you're connecting to. For example, if the application is connecting to cassandra.us-east-2.amazonaws.com, then set the local data center to us-east-2. For all available AWS Regions, see the section called “Service endpoints”. Set slow-replica-avoidance = false to load balance against all available nodes.

3. Idempotence – Set the default idempotence for the application to true to configure the driver to always retry failed read/write/prepare/execute requests. This is a best practice for distributed applications that helps to handle transient failures by retrying failed requests.

4. SSL/TLS – Initialize the SSLEngineFactory by adding a section in the configuration file with a single line that specifies the class with class = DefaultSslEngineFactory. Provide the path to the trustStore file and the password that you created previously. Amazon Keyspaces doesn't support hostname validation of peers, so set this option to false.

5. Connections – Create at least 3 local connections per endpoint by setting local.size = 3. This is a best practice that helps your application to handle overhead and traffic bursts. For more information about how to calculate how many local connections per endpoint your application needs based on expected traffic patterns, see the section called “How to configure connections”.

6. Retry policy – Implement the Amazon Keyspaces retry policy AmazonKeyspacesExponentialRetryPolicy instead of the DefaultRetryPolicy that comes with the Cassandra driver. This allows you to configure the number of retry attempts for the AmazonKeyspacesExponentialRetryPolicy that meets your needs. By default, the number of retry attempts for the AmazonKeyspacesExponentialRetryPolicy is set to 3. For more information, see the section called “How to configure retry policies”.

7. Prepared statements – Set prepare-on-all-nodes to false to optimize network usage.

datastax-java-driver {
    basic {
        contact-points = [ "cassandra.us-east-2.amazonaws.com:9142" ]
        request {
            timeout = 2 seconds
            consistency = LOCAL_QUORUM
            page-size = 1024
            default-idempotence = true
        }
        load-balancing-policy {
            local-datacenter = "us-east-2"
            class = DefaultLoadBalancingPolicy
            slow-replica-avoidance = false
        }
    }
    advanced {
        auth-provider {
            class = software.aws.mcs.auth.SigV4AuthProvider
            aws-region = us-east-2
        }
        ssl-engine-factory {
            class = DefaultSslEngineFactory
            truststore-path = "./src/main/resources/cassandra_truststore.jks"
            truststore-password = "my_password"
            hostname-validation = false
        }
        connection {
            connect-timeout = 5 seconds
            max-requests-per-connection = 512
            pool {
                local.size = 3
            }
        }
        retry-policy {
            class = com.aws.ssa.keyspaces.retry.AmazonKeyspacesExponentialRetryPolicy
            max-attempts = 3
            min-wait = 10 millis
            max-wait = 100 millis
        }
        prepared-statements {
            prepare-on-all-nodes = false
        }
    }
}

Note
Instead of adding the path to the trustStore in the configuration file, you can also add the trustStore path directly in the application code, or you can add the path to the trustStore to your JVM arguments.

Step 3: Run the application

This code example shows a simple command line application that creates a connection pool to Amazon Keyspaces by using the configuration file we created earlier. It confirms that the connection is established by running a simple query.

package <your package>;

// add the following imports to your project
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.config.DriverConfigLoader;
import com.datastax.oss.driver.api.core.cql.ResultSet;
import com.datastax.oss.driver.api.core.cql.Row;

public class App {
    public static void main( String[] args ) {
        // Use DriverConfigLoader to load your configuration file
        DriverConfigLoader loader = DriverConfigLoader.fromClasspath("application.conf");

        try (CqlSession session = CqlSession.builder()
                .withConfigLoader(loader)
                .build()) {
            ResultSet rs = session.execute("select * from system_schema.keyspaces");
            Row row = rs.one();
            System.out.println(row.getString("keyspace_name"));
        }
    }
}

Note
Use a try block to establish the connection to ensure that it's always closed. If you don't use a try block, remember to close your connection to avoid leaking resources.
Connect to Amazon Keyspaces using the 3.x DataStax Java driver for Apache Cassandra and the SigV4 authentication plugin

The following section describes how to use the SigV4 authentication plugin for the 3.x open-source DataStax Java driver for Apache Cassandra to access Amazon Keyspaces. The plugin is available from the GitHub repository.

The SigV4 authentication plugin allows you to use IAM credentials for users and roles when connecting to Amazon Keyspaces. Instead of requiring a user name and password, this plugin signs API requests using access keys. For more information, see the section called “Create IAM credentials for AWS authentication”.

Step 1: Prerequisites

To run this code sample, you first need to complete the following tasks.

• Create credentials for your IAM user or role following the steps at the section called “Create IAM credentials for AWS authentication”. This tutorial assumes that the access keys are stored as environment variables. For more information, see the section called “Manage access keys”.
• Follow the steps at the section called “Before you begin” to download the Starfield digital certificate, convert it to a trustStore file, and attach the trustStore file in the JVM arguments to your application.
• Add the DataStax Java driver for Apache Cassandra to your Java project. Ensure that you're using a version of the driver that supports Apache Cassandra 3.11.2. For more information, see the DataStax Java Driver for Apache Cassandra documentation.
• Add the authentication plugin to your application. The authentication plugin supports version 3.x of the DataStax Java driver for Apache Cassandra. If you're using Apache Maven, or a build system that can use Maven dependencies, add the following dependencies to your pom.xml file. Replace the version of the plugin with the latest version as shown at the GitHub repository.

  <dependency>
      <groupId>software.aws.mcs</groupId>
      <artifactId>aws-sigv4-auth-cassandra-java-driver-plugin_3</artifactId>
      <version>3.0.3</version>
  </dependency>

Step 2: Run the application

This code example shows a simple command line application that creates a connection pool to Amazon Keyspaces. It confirms that the connection is established by running a simple query.

package <your package>;

// add the following imports to your project
import software.aws.mcs.auth.SigV4AuthProvider;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class App {
    public static void main( String[] args ) {
        String endPoint = "cassandra.us-east-2.amazonaws.com";
        int portNumber = 9142;

        Session session = Cluster.builder()
                .addContactPoint(endPoint)
                .withPort(portNumber)
                .withAuthProvider(new SigV4AuthProvider("us-east-2"))
                .withSSL()
                .build()
                .connect();

        ResultSet rs = session.execute("select * from system_schema.keyspaces");
        Row row = rs.one();
        System.out.println(row.getString("keyspace_name"));
    }
}

Usage notes:

For a list of available endpoints, see the section called “Service endpoints”.

See the following repository for helpful Java driver policies, examples, and best practices when using the Java Driver with Amazon Keyspaces: https://github.com/aws-samples/amazon-keyspaces-java-driver-helpers.
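Unlike the 4.x examples, this 3.x sample doesn't close its resources. In a long-lived application, a shutdown path should close both the Session and its Cluster; a sketch under the 3.x driver API:

// Release connections and background threads on shutdown.
session.close();
session.getCluster().close();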
Using a Cassandra Python client driver to access Amazon Keyspaces programmatically

In this section, we show you how to connect to Amazon Keyspaces using a Python client driver. To provide users and applications with credentials for programmatic access to Amazon Keyspaces resources, you can do either of the following:

• Create service-specific credentials that are associated with a specific AWS Identity and Access Management (IAM) user.
• For enhanced security, we recommend creating IAM access keys for IAM users or roles that are used across all AWS services. The Amazon Keyspaces SigV4 authentication plugin for Cassandra client drivers enables you to authenticate calls to Amazon Keyspaces using IAM access keys instead of a user name and password. For more information, see the section called “Create IAM credentials for AWS authentication”.

Topics
• Before you begin
• Connect to Amazon Keyspaces using the Python driver for Apache Cassandra and service-specific credentials
• Connect to Amazon Keyspaces using the DataStax Python driver for Apache Cassandra and the SigV4 authentication plugin

Before you begin

You need to complete the following task before you can start.

Amazon Keyspaces requires the use of Transport Layer Security (TLS) to help secure connections with clients. To connect to Amazon Keyspaces using TLS, you need to download an Amazon digital certificate and configure the Python driver to use TLS.

Download the Starfield digital certificate using the following command and save sf-class2-root.crt locally or in your home directory.

curl https://certs.secureserver.net/repository/sf-class2-root.crt -O

Note
You can also use the Amazon digital certificate to connect to Amazon Keyspaces and can continue to do so if your client is connecting to Amazon Keyspaces successfully. The Starfield certificate provides additional backwards compatibility for clients using older certificate authorities.

Connect to Amazon Keyspaces using the Python driver for Apache Cassandra and service-specific credentials

The following code example shows you how to connect to Amazon Keyspaces with a Python client driver and service-specific credentials.

from cassandra.cluster import Cluster
from ssl import SSLContext, PROTOCOL_TLSv1_2, CERT_REQUIRED
from cassandra.auth import PlainTextAuthProvider

ssl_context = SSLContext(PROTOCOL_TLSv1_2)
ssl_context.load_verify_locations('path_to_file/sf-class2-root.crt')
ssl_context.verify_mode = CERT_REQUIRED

auth_provider = PlainTextAuthProvider(username='ServiceUserName', password='ServicePassword')

cluster = Cluster(['cassandra.us-east-2.amazonaws.com'], ssl_context=ssl_context, auth_provider=auth_provider, port=9142)
session = cluster.connect()
r = session.execute('select * from system_schema.keyspaces')
print(r.current_rows)

Usage notes:

1. Replace "path_to_file/sf-class2-root.crt" with the path to the certificate saved in the first step.
2. Ensure that the ServiceUserName and ServicePassword match the user name and password you obtained when you generated the service-specific credentials by following the steps to Create service-specific credentials for programmatic access to Amazon Keyspaces.
3. For a list of available endpoints, see the section called “Service endpoints”.
Connect to Amazon Keyspaces using the DataStax Python driver for Apache Cassandra and the SigV4 authentication plugin

The following section shows how to use the SigV4 authentication plugin for the open-source DataStax Python driver for Apache Cassandra to access Amazon Keyspaces (for Apache Cassandra).

If you haven't already done so, begin with creating credentials for your IAM role following the steps at the section called “Create IAM credentials for AWS authentication”. This tutorial uses temporary credentials, which requires an IAM role. For more information about temporary credentials, see the section called “Create temporary credentials to connect to Amazon Keyspaces”.

Then, add the Python SigV4 authentication plugin to your environment from the GitHub repository.

pip install cassandra-sigv4

The following code example shows how to connect to Amazon Keyspaces by using the open-source DataStax Python driver for Cassandra and the SigV4 authentication plugin. The plugin depends on the AWS SDK for Python (Boto3). It uses boto3.session to obtain temporary credentials.

from cassandra.cluster import Cluster
from ssl import SSLContext, PROTOCOL_TLSv1_2, CERT_REQUIRED
from cassandra.auth import PlainTextAuthProvider
import boto3
from cassandra_sigv4.auth import SigV4AuthProvider

ssl_context = SSLContext(PROTOCOL_TLSv1_2)
ssl_context.load_verify_locations('path_to_file/sf-class2-root.crt')
ssl_context.verify_mode = CERT_REQUIRED

# use this if you want to use Boto to set the session parameters.
boto_session = boto3.Session(aws_access_key_id="AKIAIOSFODNN7EXAMPLE",
                             aws_secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
                             aws_session_token="AQoDYXdzEJr...<remainder of token>",
                             region_name="us-east-2")
auth_provider = SigV4AuthProvider(boto_session)

# Use this instead of the above line if you want to use the Default Credentials and not bother with a session.
# auth_provider = SigV4AuthProvider()

cluster = Cluster(['cassandra.us-east-2.amazonaws.com'], ssl_context=ssl_context, auth_provider=auth_provider, port=9142)
session = cluster.connect()
r = session.execute('select * from system_schema.keyspaces')
print(r.current_rows)

Usage notes:

1. Replace "path_to_file/sf-class2-root.crt" with the path to the certificate saved in the first step.
2. Ensure that the aws_access_key_id, aws_secret_access_key, and the aws_session_token match the Access Key, Secret Access Key, and Session Token you obtained using boto3.session. For more information, see Credentials in the AWS SDK for Python (Boto3).
3. For a list of available endpoints, see the section called “Service endpoints”.

Using a Cassandra Node.js client driver to access Amazon Keyspaces programmatically

This section shows you how to connect to Amazon Keyspaces by using a Node.js client driver. To provide users and applications with credentials for programmatic access to Amazon Keyspaces resources, you can do either of the following:

• Create service-specific credentials that are associated with a specific AWS Identity and Access Management (IAM) user.
• For enhanced security, we recommend creating IAM access keys for IAM users or roles that are used across all AWS services. The Amazon Keyspaces SigV4 authentication plugin for Cassandra client drivers enables you to authenticate calls to Amazon Keyspaces using IAM access keys instead of a user name and password. For more information, see the section called “Create IAM credentials for AWS authentication”.

Topics
• Before you begin
• Connect to Amazon Keyspaces using the Node.js DataStax driver for Apache Cassandra and service-specific credentials
• Connect to Amazon Keyspaces using the DataStax Node.js driver for Apache Cassandra and the SigV4 authentication plugin

Before you begin

You need to complete the following task before you can start.

Amazon Keyspaces requires the use of Transport Layer Security (TLS) to help secure connections with clients. To connect to Amazon Keyspaces using TLS, you need to download an Amazon digital certificate and configure the Node.js driver to use TLS.

Download the Starfield digital certificate using the following command and save sf-class2-root.crt locally or in your home directory.

curl https://certs.secureserver.net/repository/sf-class2-root.crt -O

Note
You can also use the Amazon digital certificate to connect to Amazon Keyspaces and can continue to do so if your client is connecting to Amazon Keyspaces successfully. The Starfield certificate provides additional backwards compatibility for clients using older certificate authorities.

Connect to Amazon Keyspaces using the Node.js DataStax driver for Apache Cassandra and service-specific credentials

Configure your driver to use the Starfield digital certificate for TLS and authenticate using service-specific credentials. For example:

const cassandra = require('cassandra-driver');
const fs = require('fs');
const auth = new cassandra.auth.PlainTextAuthProvider('ServiceUserName', 'ServicePassword');
const sslOptions1 = {
    ca: [ fs.readFileSync('path_to_file/sf-class2-root.crt', 'utf-8') ],
    host: 'cassandra.us-west-2.amazonaws.com',
    rejectUnauthorized: true
};
const client = new cassandra.Client({
    contactPoints: ['cassandra.us-west-2.amazonaws.com'],
    localDataCenter: 'us-west-2',
    authProvider: auth,
    sslOptions: sslOptions1,
    protocolOptions: { port: 9142 }
});

const query = 'SELECT * FROM system_schema.keyspaces';

client.execute(query)
    .then( result => console.log('Row from Keyspaces %s', result.rows[0]))
    .catch( e => console.log(`${e}`));

Usage notes:

1. Replace "path_to_file/sf-class2-root.crt" with the path to the certificate saved in the first step.
2. Ensure that the ServiceUserName and ServicePassword match the user name and password you obtained when you generated the service-specific credentials by following the steps to Create service-specific credentials for programmatic access to Amazon Keyspaces.
3. For a list of available endpoints, see the section called “Service endpoints”.

Connect to Amazon Keyspaces using the DataStax Node.js driver for Apache Cassandra and the SigV4 authentication plugin

The following section shows how to use the SigV4 authentication plugin for the open-source DataStax Node.js driver for Apache Cassandra to access Amazon Keyspaces (for Apache Cassandra).

If you haven't already done so, create credentials for your IAM user or role following the steps at the section called “Create IAM credentials for AWS authentication”.

Add the Node.js SigV4 authentication plugin to your application from the GitHub repository. The plugin supports version 4.x of the DataStax Node.js driver for Cassandra and depends on the AWS SDK for Node.js. It uses AWSCredentialsProvider to obtain credentials.

$ npm install aws-sigv4-auth-cassandra-plugin --save

This code example shows how to set a Region-specific instance of SigV4AuthProvider as the authentication provider.

const cassandra = require('cassandra-driver');
const fs = require('fs');
const sigV4 = require('aws-sigv4-auth-cassandra-plugin');
const auth = new sigV4.SigV4AuthProvider({
    region: 'us-west-2',
    accessKeyId: 'AKIAIOSFODNN7EXAMPLE',
    secretAccessKey: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'});
const sslOptions1 = {
    ca: [ fs.readFileSync('path_to_file/sf-class2-root.crt', 'utf-8') ],
    host: 'cassandra.us-west-2.amazonaws.com',
    rejectUnauthorized: true
};
const client = new cassandra.Client({
    contactPoints: ['cassandra.us-west-2.amazonaws.com'],
    localDataCenter: 'us-west-2',
    authProvider: auth,
    sslOptions: sslOptions1,
    protocolOptions: { port: 9142 }
});

const query = 'SELECT * FROM system_schema.keyspaces';

client.execute(query).then( result => console.log('Row from Keyspaces %s', result.rows[0]))
    .catch( e => console.log(`${e}`));

Usage notes:

1. Replace "path_to_file/sf-class2-root.crt" with the path to the certificate saved in the first step.
2. Ensure that the accessKeyId and secretAccessKey match the Access Key and Secret Access Key you obtained using AWSCredentialsProvider. For more information, see Setting Credentials in Node.js in the AWS SDK for JavaScript in Node.js.
3. To store access keys outside of code, see best practices at the section called “Manage access keys”.
4. For a list of available endpoints, see the section called “Service endpoints”.
Using a Cassandra .NET Core client driver to access Amazon Keyspaces programmatically

This section shows you how to connect to Amazon Keyspaces by using a .NET Core client driver. The setup steps vary depending on your environment and operating system; you might have to modify them accordingly.

Amazon Keyspaces requires the use of Transport Layer Security (TLS) to help secure connections with clients. To connect to Amazon Keyspaces using TLS, you need to download a Starfield digital certificate and configure your driver to use TLS.

1. Download the Starfield certificate and save it to a local directory, taking note of the path. Following is an example using PowerShell.

   $client = new-object System.Net.WebClient
   $client.DownloadFile("https://certs.secureserver.net/repository/sf-class2-root.crt","path_to_file\sf-class2-root.crt")

2. Install the CassandraCSharpDriver through nuget, using the nuget console.

   PM> Install-Package CassandraCSharpDriver

3. The following example uses a .NET Core C# console project to connect to Amazon Keyspaces and run a query.

   using Cassandra;
   using System;
   using System.Collections.Generic;
   using System.Linq;
   using System.Net.Security;
   using System.Runtime.ConstrainedExecution;
   using System.Security.Cryptography.X509Certificates;
   using System.Text;
   using System.Threading.Tasks;

   namespace CSharpKeyspacesExample
   {
       class Program
       {
           public Program(){}

           static void Main(string[] args)
           {
               X509Certificate2Collection certCollection = new X509Certificate2Collection();
               X509Certificate2 amazoncert = new X509Certificate2(@"path_to_file\sf-class2-root.crt");
               var userName = "ServiceUserName";
               var pwd = "ServicePassword";
               certCollection.Add(amazoncert);

               var awsEndpoint = "cassandra.us-east-2.amazonaws.com";

               var cluster = Cluster.Builder()
                   .AddContactPoints(awsEndpoint)
                   .WithPort(9142)
                   .WithAuthProvider(new PlainTextAuthProvider(userName, pwd))
                   .WithSSL(new SSLOptions().SetCertificateCollection(certCollection))
                   .Build();

               var session = cluster.Connect();
               var rs = session.Execute("SELECT * FROM system_schema.tables;");
               foreach (var row in rs)
               {
                   var name = row.GetValue<String>("keyspace_name");
                   Console.WriteLine(name);
               }
           }
       }
   }

   Usage notes:

   a. Replace "path_to_file\sf-class2-root.crt" with the path to the certificate saved in the first step.
   b. Ensure that the ServiceUserName and ServicePassword match the user name and password you obtained when you generated the service-specific credentials by following the steps to Create service-specific credentials for programmatic access to Amazon Keyspaces.
   c. For a list of available endpoints, see the section called “Service endpoints”.

Using a Cassandra Go client driver to access Amazon Keyspaces programmatically

This section shows you how to connect to Amazon Keyspaces by using a Go Cassandra client driver. To provide users and applications with credentials for programmatic access to Amazon Keyspaces resources, you can do either of the following:

• Create service-specific credentials that are associated with a specific AWS Identity and Access Management (IAM) user.
• For enhanced security, we recommend creating IAM access keys for IAM principals that are used across all AWS services. The Amazon Keyspaces SigV4 authentication plugin for Cassandra client drivers enables you to authenticate calls to Amazon Keyspaces using IAM access keys instead of a user name and password. For more information, see the section called “Create IAM credentials for AWS authentication”.

Topics
• Before you begin
• Connect to Amazon Keyspaces using the Gocql driver for Apache Cassandra and service-specific credentials
• Connect to Amazon Keyspaces using the Go driver for Apache Cassandra and the SigV4 authentication plugin

Before you begin

You need to complete the following task before you can start.

Amazon Keyspaces requires the use of Transport Layer Security (TLS) to help secure connections with clients. To connect to Amazon Keyspaces using TLS, you need to download an Amazon digital certificate and configure the Go driver to use TLS.

Download the Starfield digital certificate using the following command and save sf-class2-root.crt locally or in your home directory.

curl https://certs.secureserver.net/repository/sf-class2-root.crt -O

Note
You can also use the Amazon digital certificate to connect to Amazon Keyspaces and can continue to do so if your client is connecting to Amazon Keyspaces successfully. The Starfield certificate provides additional backwards compatibility for clients using older certificate authorities.

Connect to Amazon Keyspaces using the Gocql driver for Apache Cassandra and service-specific credentials

1. Create a directory for your application.

   mkdir ./gocqlexample

2. Navigate to the new directory.

   cd gocqlexample

3. Create a file for your application.

   touch cqlapp.go

4. Download the Go driver.

   go get github.com/gocql/gocql

5. Add the following sample code to the cqlapp.go file.

   package main

   import (
       "fmt"
       "log"

       "github.com/gocql/gocql"
   )

   func main() {
       // add the Amazon Keyspaces service endpoint
       cluster := gocql.NewCluster("cassandra.us-east-2.amazonaws.com")
       cluster.Port = 9142

       // add your service-specific credentials
       cluster.Authenticator = gocql.PasswordAuthenticator{
           Username: "ServiceUserName",
           Password: "ServicePassword"}

       // provide the path to the sf-class2-root.crt
       cluster.SslOpts = &gocql.SslOptions{
           CaPath:                 "path_to_file/sf-class2-root.crt",
           EnableHostVerification: false,
       }

       // Override default Consistency to LocalQuorum
       cluster.Consistency = gocql.LocalQuorum
       cluster.DisableInitialHostLookup = false

       session, err := cluster.CreateSession()
       if err != nil {
           fmt.Println("err>", err)
       }
       defer session.Close()

       // run a sample query from the system keyspace
       var text string
       iter := session.Query("SELECT keyspace_name FROM system_schema.tables;").Iter()
       for iter.Scan(&text) {
           fmt.Println("keyspace_name:", text)
       }
       if err := iter.Close(); err != nil {
           log.Fatal(err)
       }
       session.Close()
   }

   Usage notes:

   a. Replace "path_to_file/sf-class2-root.crt" with the path to the certificate saved in the first step.
   b. Ensure that the ServiceUserName and ServicePassword match the user name and password you obtained when you generated the service-specific credentials by following the steps to Create service-specific credentials for programmatic access to Amazon Keyspaces.
   c. For a list of available endpoints, see the section called “Service endpoints”.

6. Build the program.

   go build cqlapp.go

7. Run the program.

   ./cqlapp

Connect to Amazon Keyspaces using the Go driver for Apache Cassandra and the SigV4 authentication plugin

The following code sample shows how to use the SigV4 authentication plugin for the open-source Go driver to access Amazon Keyspaces (for Apache Cassandra).

If you haven't already done so, create credentials for your IAM principal following the steps at the section called “Create IAM credentials for AWS authentication”. If an application is running on Lambda or an Amazon EC2 instance, your application automatically uses the credentials of the instance. To run this tutorial locally, you can store the credentials as local environment variables.

Add the Go SigV4 authentication plugin to your application from the GitHub repository. The plugin supports version 1.2.x of the open-source Go driver for Cassandra and depends on the AWS SDK for Go.

$ go mod init
$ go get github.com/aws/aws-sigv4-auth-cassandra-gocql-driver-plugin

In this code example, the Amazon Keyspaces endpoint is represented by the Cluster class. It uses the AwsAuthenticator for the authenticator property of the cluster to obtain credentials.

package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sigv4-auth-cassandra-gocql-driver-plugin/sigv4"
    "github.com/gocql/gocql"
)

func main() {
    // configuring the cluster options
    cluster := gocql.NewCluster("cassandra.us-west-2.amazonaws.com")
    cluster.Port = 9142

    // the authenticator uses the default credential chain to find AWS credentials
    cluster.Authenticator = sigv4.NewAwsAuthenticator()
    cluster.SslOpts = &gocql.SslOptions{
        CaPath:                 "path_to_file/sf-class2-root.crt",
        EnableHostVerification: false,
    }
    cluster.Consistency = gocql.LocalQuorum
    cluster.DisableInitialHostLookup = false

    session, err := cluster.CreateSession()
    if err != nil {
        fmt.Println("err>", err)
        return
    }
    defer session.Close()

    // doing the query
    var text string
    iter := session.Query("SELECT keyspace_name FROM system_schema.tables;").Iter()
    for iter.Scan(&text) {
        fmt.Println("keyspace_name:", text)
    }
    if err := iter.Close(); err != nil {
        log.Fatal(err)
    }
}

Usage notes:

1. Replace "path_to_file/sf-class2-root.crt" with the path to the certificate saved in the first step.
2. For this example to run locally, you need to define the following variables as environment variables:
   • AWS_ACCESS_KEY_ID
   • AWS_SECRET_ACCESS_KEY
   • AWS_DEFAULT_REGION
3. To store access keys outside of code, see best practices at the section called “Manage access keys”.
4. For a list of available endpoints, see the section called “Service endpoints”.
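For example, in a local shell the variables from usage note 2 might be exported as follows. This is a sketch; the values shown are the documentation's placeholder keys, not real credentials.

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-east-2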
Using a Cassandra Perl client driver to access Amazon Keyspaces programmatically

This section shows you how to connect to Amazon Keyspaces by using a Perl client driver. For this code sample, we used Perl 5. Amazon Keyspaces requires the use of Transport Layer Security (TLS) to help secure connections with clients.

Important
To create a secure connection, our code samples use the Starfield digital certificate to authenticate the server before establishing the TLS connection. The Perl driver doesn't validate the server's Amazon SSL certificate, which means that you can't confirm that you are connecting to Amazon Keyspaces. The second step, configuring the driver to use TLS when connecting to Amazon Keyspaces, is still required, and it ensures that data transferred between the client and server is encrypted.

1. Download the Cassandra DBI driver from https://metacpan.org/pod/DBD::Cassandra and install the driver to your Perl environment. The exact steps depend on the environment. The following is a common example.

   cpanm DBD::Cassandra

2. Create a file for your application.

   touch cqlapp.pl

3. Add the following sample code to the cqlapp.pl file.

   use DBI;

   my $user = "ServiceUserName";
   my $password = "ServicePassword";
   my $db = DBI->connect("dbi:Cassandra:host=cassandra.us-east-2.amazonaws.com;port=9142;tls=1;", $user, $password);

   my $rows = $db->selectall_arrayref("select * from system_schema.keyspaces");

   print "Found the following Keyspaces...\n";
   for my $row (@$rows) {
       # keyspace_name is the first column of the result set
       print "$row->[0]\n";
   }
   $db->disconnect;

   Important
   Ensure that the ServiceUserName and ServicePassword match the user name and password you obtained when you generated the service-specific credentials by following the steps to Create service-specific credentials for programmatic access to Amazon Keyspaces.

   Note
   For a list of available endpoints, see the section called “Service endpoints”.

4. Run the application.

   perl cqlapp.pl

Configure cross-account access to Amazon Keyspaces with VPC endpoints

You can create and use separate AWS accounts to isolate resources and for use in different environments, for example development and production. This topic walks you through cross-account access for Amazon Keyspaces using interface VPC endpoints in an Amazon Virtual Private Cloud. For more information about IAM cross-account access configuration, see Example scenario using separate development and production accounts in the IAM User Guide. For more information about Amazon Keyspaces and private VPC endpoints, see the section called “Using interface VPC endpoints”.

Topics
• Configure cross-account access to Amazon Keyspaces using VPC endpoints in a shared VPC
• Configuring cross-account access to Amazon Keyspaces without a shared VPC
AmazonKeyspaces-054 | AmazonKeyspaces.pdf | 54 | Account A is the account that contains the resources that Account B and Account C need to access, so Account A is the trusting account. Account B and Account C are the accounts with the principals that need access to the resources in Account A, so Account B and Account C are the trusted accounts. The trusting account grants the permissions to the trusted accounts by sharing an IAM role. The following procedure outlines the configuration steps required in Account A. Configuration for Account A 1. Use AWS Resource Access Manager to create a resource share for the subnet and share the private subnet with Account B and Account C. Configure cross-account access in a shared VPC 144 Amazon Keyspaces (for Apache Cassandra) Developer Guide Account B and Account C can now see and create resources in the subnet that has been shared with them. 2. Create an Amazon Keyspaces private VPC endpoint powered by AWS PrivateLink. This creates multiple endpoints across shared subnets and DNS entries for the Amazon Keyspaces service endpoint. 3. Create an Amazon Keyspaces keyspace and table. 4. Create an IAM role that has full access to the Amazon Keyspaces table, read access to the Amazon Keyspaces system tables, and is able to describe the Amazon EC2 VPC resources as shown in the following policy example. { "Version": "2012-10-17", "Statement": [ { "Sid": "CrossAccountAccess", "Effect": "Allow", "Action": [ "ec2:DescribeNetworkInterfaces", "ec2:DescribeVpcEndpoints", "cassandra:*" ], "Resource": "*" } ] } 5. Configure the IAM role trust policy that Account B and Account C can assume as trusted accounts as shown in the following example. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::111111111111:root" }, "Action": "sts:AssumeRole", "Condition": {} } Configure cross-account access in a shared VPC 145 Amazon Keyspaces (for Apache Cassandra) Developer Guide ] } For more information about cross-account IAM policies, see Cross-account policies in the IAM User Guide. Configuration in Account B and Account C 1. In Account B and Account C, create new roles and attach the following policy that allows the principal to assume the shared role created in Account A. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } Allowing the principal to assume the shared role is implemented using the AssumeRole API of the AWS Security Token Service (AWS STS). For more information, see Providing access to an IAM user in another AWS account that you own in the IAM User Guide. 2. In Account B and Account C, you can create applications that utilize the SIGV4 authentication plugin, which allows an application to assume the shared role to connect to the Amazon Keyspaces table located in Account A through the VPC endpoint in the shared VPC. For more information about the SIGV4 authentication plugin, see the section called “Create programmatic access credentials”. Configuring cross-account access to Amazon Keyspaces without a shared VPC If the Amazon Keyspaces table and private VPC endpoint are owned by different accounts but are not sharing a VPC, applications can still connect cross-account using VPC endpoints. Because the Configure cross-account access without a shared VPC 146 Amazon Keyspaces (for Apache Cassandra) Developer Guide accounts are not sharing the VPC endpoints, Account A, Account B, and Account C require their own VPC endpoints. 
Configuring cross-account access to Amazon Keyspaces without a shared VPC

If the Amazon Keyspaces table and private VPC endpoint are owned by different accounts but are not sharing a VPC, applications can still connect cross-account using VPC endpoints. Because the accounts are not sharing the VPC endpoints, Account A, Account B, and Account C each require their own VPC endpoint. You can also access Amazon Keyspaces tables across different accounts without a shared VPC endpoint by using the public endpoint or by deploying a private VPC endpoint in each account. In this example, Account A, Account B, and Account C require their own VPC endpoints to access the table in Account A.

When using VPC endpoints in this configuration, Amazon Keyspaces appears to the Cassandra client driver as a single-node cluster instead of a multi-node cluster. Upon connection, the client driver reaches the DNS server, which returns one of the available endpoints in the account's VPC. But the client driver is not able to access the system.peers table to discover additional endpoints. Because fewer hosts are available, the driver makes fewer connections. To adjust for this, increase the connection pool setting of the driver by a factor of three.
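How you raise the pool size depends on your client driver. The following is a minimal sketch for the DataStax Java driver 4.x, assuming the driver is configured through an application.conf file; the value of 3 reflects the factor-of-three guidance above, and other drivers expose equivalent pool settings.

# application.conf: raise the connection pool size for local hosts.
# With protocol v3 and later, the driver default is one connection per host.
datastax-java-driver {
  advanced.connection.pool.local.size = 3
}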
Getting started with Amazon Keyspaces (for Apache Cassandra)

If you're new to Apache Cassandra and Amazon Keyspaces, this tutorial guides you through installing the necessary programs and tools to use Amazon Keyspaces successfully. You'll learn how to create a keyspace and table using Cassandra Query Language (CQL), the AWS Management Console, or the AWS Command Line Interface (AWS CLI). You then use CQL to perform create, read, update, and delete (CRUD) operations on data in your Amazon Keyspaces table.

This tutorial covers the following steps.

• Prerequisites – Before starting the tutorial, follow the AWS setup instructions to sign up for AWS and create an IAM user with access to Amazon Keyspaces. Then you set up the cqlsh-expansion and AWS CloudShell. Alternatively, you can use the AWS CLI to create resources in Amazon Keyspaces.
• Step 1: Create a keyspace and table – In this section, you'll create a keyspace named "catalog" and a table named "book_awards" within it. You'll specify the table's columns, data types, partition key, and clustering column using the AWS Management Console, CQL, or the AWS CLI.
• Step 2: Perform CRUD operations – Here, you'll use the cqlsh-expansion in CloudShell to insert, read, update, and delete data in the "book_awards" table. You'll learn how to use CQL statements like SELECT, INSERT, UPDATE, and DELETE, and practice filtering and modifying data.
• Step 3: Clean up resources – To avoid incurring charges for unused resources, this section guides you through deleting the "book_awards" table and "catalog" keyspace using the console, CQL, or the AWS CLI.

For tutorials to connect programmatically to Amazon Keyspaces using different Apache Cassandra client drivers, see the section called “Using a Cassandra client driver”. For code examples using different AWS SDKs, see Code examples for Amazon Keyspaces using AWS SDKs.

Topics

• Tutorial prerequisites and considerations
• Create a keyspace in Amazon Keyspaces
• Check keyspace creation status in Amazon Keyspaces
• Create a table in Amazon Keyspaces
• Check table creation status in Amazon Keyspaces
• Create, read, update, and delete data (CRUD) using CQL in Amazon Keyspaces
• Delete a table in Amazon Keyspaces
• Delete a keyspace in Amazon Keyspaces

Tutorial prerequisites and considerations

Before you can get started with Amazon Keyspaces, follow the AWS setup instructions in Accessing Amazon Keyspaces (for Apache Cassandra). These steps include signing up for AWS and creating an AWS Identity and Access Management (IAM) user with access to Amazon Keyspaces.

To complete all the steps of the tutorial, you need to install cqlsh. You can follow the setup instructions at Using cqlsh to connect to Amazon Keyspaces.

To access Amazon Keyspaces using cqlsh or the AWS CLI, we recommend using AWS CloudShell. CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console.
You can run AWS Command Line Interface (AWS CLI) commands against Amazon Keyspaces using your preferred shell (Bash, PowerShell, or Z shell). To use cqlsh, you must install the cqlsh-expansion. For cqlsh-expansion installation instructions, see the section called “Using the cqlsh-expansion”; a quick installation sketch also follows this section. For more information about CloudShell, see the section called “Using AWS CloudShell”.

To use the AWS CLI to create, view, and delete resources in Amazon Keyspaces, follow the setup instructions at the section called “Downloading and Configuring the AWS CLI”.

After completing the prerequisite steps, proceed to Create a keyspace in Amazon Keyspaces.
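If you install the cqlsh-expansion locally instead of following the linked instructions, the following is a minimal sketch, assuming a Python 3 environment with pip available; refer to the section called “Using the cqlsh-expansion” for the authoritative steps.

# Install the cqlsh-expansion for the current user.
python3 -m pip install --user cqlsh-expansion
# Run the one-time setup that creates the default .cassandra configuration.
cqlsh-expansion.init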
Create a keyspace in Amazon Keyspaces

In this section, you create a keyspace using the console, cqlsh, or the AWS CLI.

Note: Before you begin, make sure that you have configured all the tutorial prerequisites.

A keyspace groups related tables that are relevant for one or more applications. A keyspace contains one or more tables and defines the replication strategy for all the tables it contains. For more information about keyspaces, see the following topics:

• Data definition language (DDL) statements in the CQL language reference: Keyspaces
• Quotas for Amazon Keyspaces (for Apache Cassandra)

In this tutorial, we create a single-Region keyspace, and the replication strategy of the keyspace is SingleRegionStrategy. Using SingleRegionStrategy, Amazon Keyspaces replicates data across three Availability Zones in one AWS Region. To learn how to create multi-Region keyspaces, see the section called “Create a multi-Region keyspace”.

Using the console

To create a keyspace using the console

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Keyspaces.
3. Choose Create keyspace.
4. In the Keyspace name box, enter catalog as the name for your keyspace.

Name constraints:
• The name can't be empty.
• Allowed characters: alphanumeric characters and underscore ( _ ).
• Maximum length is 48 characters.

5. Under AWS Regions, confirm that Single-Region replication is the replication strategy for the keyspace.
6. To create the keyspace, choose Create keyspace.
7. Verify that the keyspace catalog was created by doing the following:
a. In the navigation pane, choose Keyspaces.
b. Locate your keyspace catalog in the list of keyspaces.

Using CQL

The following procedure creates a keyspace using CQL.

To create a keyspace using CQL

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update us-east-1 with your own Region.

cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl

The output of that command should look like this.

Connected to Amazon Keyspaces at cassandra.us-east-1.amazonaws.com:9142
[cqlsh 6.1.0 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh current consistency level is ONE.

2. Create your keyspace using the following CQL command.

CREATE KEYSPACE catalog WITH REPLICATION = {'class': 'SingleRegionStrategy'};

SingleRegionStrategy uses a replication factor of three and replicates data across three AWS Availability Zones in its Region.

Note: Amazon Keyspaces converts all identifiers to lowercase unless you enclose them in quotation marks (see the short example after this procedure).

3. Verify that your keyspace was created.

SELECT * from system_schema.keyspaces;

The output of this command should look similar to this.

keyspace_name           | durable_writes | replication
------------------------+----------------+-------------------------------------------------------------------------------------
system_schema           | True           | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
system_schema_mcs       | True           | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
system                  | True           | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
system_multiregion_info | True           | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
catalog                 | True           | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}

(5 rows)
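The following short CQL sketch illustrates the case rule from the note above; the keyspace names here are hypothetical.

-- Unquoted identifiers are folded to lowercase: this creates a keyspace named mykeyspace.
CREATE KEYSPACE MyKeyspace WITH REPLICATION = {'class': 'SingleRegionStrategy'};

-- Double-quoted identifiers keep their case: this creates a keyspace named MyKeyspace.
CREATE KEYSPACE "MyKeyspace" WITH REPLICATION = {'class': 'SingleRegionStrategy'};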
Using the AWS CLI

The following procedure creates a keyspace using the AWS CLI.

To create a keyspace using the AWS CLI

1. To confirm that your environment is set up, you can run the following command in CloudShell.

aws keyspaces help

2. Create your keyspace using the following AWS CLI statement.

aws keyspaces create-keyspace --keyspace-name 'catalog'

3. Verify that your keyspace was created with the following AWS CLI statement.

aws keyspaces get-keyspace --keyspace-name 'catalog'

The output of this command should look similar to this example.

{
    "keyspaceName": "catalog",
    "resourceArn": "arn:aws:cassandra:us-east-1:123SAMPLE012:/keyspace/catalog/",
    "replicationStrategy": "SINGLE_REGION"
}

Check keyspace creation status in Amazon Keyspaces

Amazon Keyspaces performs data definition language (DDL) operations, such as creating and deleting keyspaces, asynchronously. You can monitor the creation status of new keyspaces in the AWS Management Console, which indicates when a keyspace is pending or active. You can also monitor the creation status of a new keyspace programmatically by using the system_schema_mcs keyspace. A keyspace becomes visible in the system_schema_mcs keyspaces table when it's ready for use.

The recommended design pattern to check when a new keyspace is ready for use is to poll the Amazon Keyspaces system_schema_mcs keyspaces table (system_schema_mcs.*). For a list of DDL statements for keyspaces, see the “Keyspaces” section in the CQL language reference.

The following query shows whether a keyspace has been successfully created.

SELECT * FROM system_schema_mcs.keyspaces WHERE keyspace_name = 'mykeyspace';

For a keyspace that has been successfully created, the output of the query looks like the following.

keyspace_name | durable_writes | replication
--------------+----------------+-------------
mykeyspace    | true           | {...}

1 item
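The query above polls system_schema_mcs with CQL. An equivalent check from the shell uses the AWS CLI get-keyspace command shown earlier in this tutorial. The following is a minimal sketch, assuming Bash and configured AWS credentials; the keyspace name mykeyspace matches the query above.

# Poll until the keyspace is visible, then continue.
until aws keyspaces get-keyspace --keyspace-name 'mykeyspace' > /dev/null 2>&1
do
    echo "Keyspace not ready yet, retrying in 5 seconds..."
    sleep 5
done
echo "Keyspace mykeyspace is ready for use."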
Create a table in Amazon Keyspaces

In this section, you create a table using the console, cqlsh, or the AWS CLI. A table is where your data is organized and stored. The primary key of your table determines how data is partitioned in your table. The primary key is composed of a required partition key and one or more optional clustering columns. The combined values that compose the primary key must be unique across all the table's data.

For more information about tables, see the following topics:

• Partition key design: the section called “Partition key design”
• Working with