Create a table in Amazon Keyspaces

In this section, you create a table using the console, cqlsh, or the AWS CLI. A table is where your data is organized and stored. The primary key of your table determines how data is partitioned in your table. The primary key is composed of a required partition key and one or more optional clustering columns. The combined values that compose the primary key must be unique across all the table’s data. For more information about tables, see the following topics:

• Partition key design: the section called “Partition key design”
• Working with tables: the section called “Check table creation status”
• DDL statements in the CQL language reference: Tables
• Table resource management: Managing serverless resources
• Monitoring table resource utilization: the section called “Monitoring with CloudWatch”
• Quotas for Amazon Keyspaces (for Apache Cassandra)

When you create a table, you specify the following:

• The name of the table.
• The name and data type of each column in the table.
• The primary key for the table.
  • Partition key – Required
  • Clustering columns – Optional
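As a quick illustration of these parts before you create the tutorial table, the following sketch shows a hypothetical user_events table (not used in this tutorial) whose primary key combines a partition key with one clustering column.

CREATE TABLE my_keyspace.user_events (
    user_id text,          -- partition key: determines how rows are distributed across partitions
    event_time timestamp,  -- clustering column: determines the sort order within each partition
    event_type text,       -- regular (non-key) column
    PRIMARY KEY ((user_id), event_time)
);

Many rows can share the same user_id, but the combination of user_id and event_time must be unique.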
Use the following procedure to create a table with the specified columns, data types, partition keys, and clustering columns.

Using the console

The following procedure creates the table book_awards with these columns and data types.

year int
award text
rank int
category text
book_title text
author text
publisher text

To create a table using the console

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Keyspaces.
3. Choose catalog as the keyspace you want to create this table in.
4. Choose Create table.
5. In the Table name box, enter book_awards as a name for your table.
   Name constraints:
   • The name can't be empty.
   • Allowed characters: alphanumeric characters and underscore ( _ ).
   • Maximum length is 48 characters.
6. In the Columns section, repeat the following steps for each column that you want to add to this table. Add the following columns and data types.

   year int
   award text
   rank int
   category text
   book_title text
   author text
   publisher text

   a. Name – Enter a name for the column.
      Name constraints:
      • The name can't be empty.
      • Allowed characters: alphanumeric characters and underscore ( _ ).
      • Maximum length is 48 characters.
   b. Type – In the list of data types, choose the data type for this column.
   c. To add another column, choose Add column.
7. Choose award and year as the partition keys under Partition Key. A partition key is required for each table. A partition key can be made of one or more columns.
8. Add category and rank as Clustering columns. Clustering columns are optional and determine the sort order within each partition.
   a. To add a clustering column, choose Add clustering column.
   b. In the Column list, choose category. In the Order list, choose ASC to sort in ascending order on the values in this column. (Choose DESC for descending order.)
   c. Then select Add clustering column and choose rank.
9. In the Table settings section, choose Default settings.
10. Choose Create table.
11. Verify that your table was created.
    a. In the navigation pane, choose Tables.
    b. Confirm that your table is in the list of tables.
    c. Choose the name of your table.
    d. Confirm that all your columns and data types are correct.

    Note
    The columns might not be listed in the same order that you added them to the table.

Using CQL

This procedure creates a table with the following columns and data types using CQL. The year and award columns are partition keys, with category and rank as clustering columns; together they make up the primary key of the table.

year int
award text
rank int
category text
book_title text
author text
publisher text

To create a table using CQL

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update us-east-1 with your own Region.

   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl

   The output of that command should look like this.

   Connected to Amazon Keyspaces at cassandra.us-east-1.amazonaws.com:9142
   [cqlsh 6.1.0 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]
   Use HELP for help.
   cqlsh current consistency level is ONE.

2. At the keyspace prompt (cqlsh:keyspace_name>), create your table by entering the following code into your command window.

   CREATE TABLE catalog.book_awards (
       year int,
       award text,
       rank int,
       category text,
       book_title text,
       author text,
       publisher text,
       PRIMARY KEY ((year, award), category, rank)
   );

   Note
   ASC is the default clustering order. You can also specify DESC for descending order.

   The year and award columns are the partition key columns, and category and rank are the clustering columns, sorted in ascending order (ASC). Together, these columns form the primary key of the table.
3. Verify that your table was created.

   SELECT * FROM system_schema.tables WHERE keyspace_name = 'catalog' AND table_name = 'book_awards' ;

   The output should include a row for the book_awards table. If the output is empty, the table might still be in the process of being created; re-run the statement after a moment.

4. Verify your table's structure.

   SELECT * FROM system_schema.columns WHERE keyspace_name = 'catalog' AND table_name = 'book_awards';

   The output of this statement should look similar to this example.

   keyspace_name | table_name  | column_name | clustering_order | column_name_bytes      | kind          | position | type
   ---------------+-------------+-------------+------------------+------------------------+---------------+----------+------
         catalog | book_awards | year        | none             | 0x79656172             | partition_key |        0 | int
         catalog | book_awards | award       | none             | 0x6177617264           | partition_key |        1 | text
         catalog | book_awards | category    | asc              | 0x63617465676f7279     | clustering    |        0 | text
         catalog | book_awards | rank        | asc              | 0x72616e6b             | clustering    |        1 | int
         catalog | book_awards | author      | none             | 0x617574686f72         | regular       |       -1 | text
         catalog | book_awards | book_title  | none             | 0x626f6f6b5f7469746c65 | regular       |       -1 | text
         catalog | book_awards | publisher   | none             | 0x7075626c6973686572   | regular       |       -1 | text

   (7 rows)

   Confirm that all the columns and data types are as you expected. The order of the columns might be different than in the CREATE statement.
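As the note in step 2 mentions, you can override the default ascending order when you create a table. The following variant of the CREATE TABLE statement is an illustrative sketch (not a step in this tutorial) that uses a CLUSTERING ORDER BY clause so that rows within each category are returned from highest to lowest rank.

CREATE TABLE catalog.book_awards (
    year int,
    award text,
    rank int,
    category text,
    book_title text,
    author text,
    publisher text,
    PRIMARY KEY ((year, award), category, rank)
) WITH CLUSTERING ORDER BY (category ASC, rank DESC);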
Using the AWS CLI

This procedure creates a table with the following columns and data types using the AWS CLI. The year and award columns make up the partition key, with category and rank as clustering columns.

year int
award text
rank int
category text
book_title text
author text
publisher text

To create a table using the AWS CLI

The following command creates a table with the name book_awards. The partition key of the table consists of the columns year and award, and the clustering key consists of the columns category and rank; both clustering columns use the ascending sort order. (For easier readability, the schema-definition of the table create command in this section is broken into separate lines.)

1. You can create the table using the following statement.

   aws keyspaces create-table --keyspace-name 'catalog' \
       --table-name 'book_awards' \
       --schema-definition 'allColumns=[{name=year,type=int},{name=award,type=text},{name=rank,type=int},{name=category,type=text},{name=author,type=text},{name=book_title,type=text},{name=publisher,type=text}],partitionKeys=[{name=year},{name=award}],clusteringKeys=[{name=category,orderBy=ASC},{name=rank,orderBy=ASC}]'

   This command results in the following output.

   {
       "resourceArn": "arn:aws:cassandra:us-east-1:111222333444:/keyspace/catalog/table/book_awards"
   }

2. To confirm the metadata and properties of the table, you can use the following command.

   aws keyspaces get-table --keyspace-name 'catalog' --table-name 'book_awards'

   This command returns the following output.

   {
       "keyspaceName": "catalog",
       "tableName": "book_awards",
       "resourceArn": "arn:aws:cassandra:us-east-1:123SAMPLE012:/keyspace/catalog/table/book_awards",
       "creationTimestamp": "2024-07-11T15:12:55.571000+00:00",
       "status": "ACTIVE",
       "schemaDefinition": {
           "allColumns": [
               { "name": "year", "type": "int" },
               { "name": "award", "type": "text" },
               { "name": "category", "type": "text" },
               { "name": "rank", "type": "int" },
               { "name": "author", "type": "text" },
               { "name": "book_title", "type": "text" },
               { "name": "publisher", "type": "text" }
           ],
           "partitionKeys": [
               { "name": "year" },
               { "name": "award" }
           ],
           "clusteringKeys": [
               { "name": "category", "orderBy": "ASC" },
               { "name": "rank", "orderBy": "ASC" }
           ],
           "staticColumns": []
       },
       "capacitySpecification": {
           "throughputMode": "PAY_PER_REQUEST",
           "lastUpdateToPayPerRequestTimestamp": "2024-07-11T15:12:55.571000+00:00"
       },
       "encryptionSpecification": {
           "type": "AWS_OWNED_KMS_KEY"
       },
       "pointInTimeRecovery": {
           "status": "DISABLED"
       },
       "defaultTimeToLive": 0,
       "comment": {
           "message": ""
       },
       "replicaSpecifications": []
   }

To perform CRUD (create, read, update, and delete) operations on the data in your table, proceed to the section called “CRUD operations”.

Check table creation status in Amazon Keyspaces

Amazon Keyspaces performs data definition language (DDL) operations, such as creating and deleting tables, asynchronously.
You can monitor the creation status of new tables in the AWS Management Console, which indicates when a table is pending or active. You can also monitor the creation status of a new table programmatically by using the system schema table. A table shows as active in the system schema when it's ready for use. The recommended design pattern to check when a new table is ready for use is to poll the Amazon Keyspaces system schema tables (system_schema_mcs.*). For a list of DDL statements for tables, see the section called “Tables” in the CQL language reference.

The following query shows the status of a table.

SELECT keyspace_name, table_name, status FROM system_schema_mcs.tables WHERE keyspace_name = 'mykeyspace' AND table_name = 'mytable';

For a table that is still being created and is pending, the output of the query looks like this.

keyspace_name | table_name | status
--------------+------------+--------
   mykeyspace |    mytable | CREATING

For a table that has been successfully created and is active, the output of the query looks like the following.

keyspace_name | table_name | status
--------------+------------+--------
   mykeyspace |    mytable | ACTIVE

Create, read, update, and delete data (CRUD) using CQL in Amazon Keyspaces

In this step of the tutorial, you'll learn how to insert, read, update, and delete data in an Amazon Keyspaces table using CQL data manipulation language (DML) statements. In Amazon Keyspaces, you can only create DML statements in CQL language. In this tutorial, you'll practice running DML statements using the cqlsh-expansion with AWS CloudShell in the AWS Management Console.

• Inserting data – This section covers inserting single and multiple records into a table using the INSERT statement. You'll learn how to upload data from a CSV file and verify successful inserts using SELECT queries.
• Reading data – Here, you'll explore different variations of the SELECT statement to retrieve data from a table. Topics include selecting all data, selecting specific columns, filtering rows based on conditions using the WHERE clause, and understanding simple and compound conditions.
• Updating data – In this section, you'll learn how to modify existing data in a table using the UPDATE statement. You'll practice updating single and multiple columns while understanding restrictions around updating primary key columns.
• Deleting data – The final section covers deleting data from a table using the DELETE statement.
You'll learn how to delete specific cells, entire rows, and the implications of deleting data versus deleting the entire table or keyspace.

Throughout the tutorial, you'll find examples, tips, and opportunities to practice writing your own CQL queries for various scenarios.

Topics
• Inserting and loading data into an Amazon Keyspaces table
• Read data from a table using the CQL SELECT statement in Amazon Keyspaces
• Update data in an Amazon Keyspaces table using CQL
• Delete data from a table using the CQL DELETE statement

Inserting and loading data into an Amazon Keyspaces table

To create data in your book_awards table, use the INSERT statement to add a single row.

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update us-east-1 with your own Region.

   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl

   The output of that command should look like this.

   Connected to Amazon Keyspaces at cassandra.us-east-1.amazonaws.com:9142
   [cqlsh 6.1.0 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]
   Use HELP for help.
   cqlsh current consistency level is ONE.

2. Before you can write data to your Amazon Keyspaces table using cqlsh, you must set the write consistency for the current cqlsh session to LOCAL_QUORUM. For more information about supported consistency levels, see the section called “Write consistency levels”. Note that this step is not required if you are using the CQL editor in the AWS Management Console.

   CONSISTENCY LOCAL_QUORUM;

3. To insert a single record, run the following command in the CQL editor.

   INSERT INTO catalog.book_awards (award, year, category, rank, author, book_title, publisher)
   VALUES ('Wolf', 2023, 'Fiction', 3, 'Shirley Rodriguez', 'Mountain', 'AnyPublisher') ;

4. Verify that the data was correctly added to your table by running the following command.

   SELECT * FROM catalog.book_awards ;

   The output of the statement should look like this.

   year | award | category | rank | author            | book_title | publisher
   ------+-------+----------+------+-------------------+------------+--------------
   2023 |  Wolf |  Fiction |    3 | Shirley Rodriguez |   Mountain | AnyPublisher

   (1 rows)
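If you want to avoid overwriting a row that already has the same primary key, CQL offers a conditional variant of INSERT. The following sketch (an illustration, not a tutorial step) uses IF NOT EXISTS, a lightweight transaction that Amazon Keyspaces supports; the statement writes the row only if no row with this primary key exists, and it returns an [applied] column indicating the result.

INSERT INTO catalog.book_awards (award, year, category, rank, author, book_title, publisher)
VALUES ('Wolf', 2023, 'Fiction', 3, 'Shirley Rodriguez', 'Mountain', 'AnyPublisher')
IF NOT EXISTS ;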
To insert multiple records from a file using cqlsh

1. Download the sample CSV file (keyspaces_sample_table.csv) contained in the archive file samplemigration.zip. Unzip the archive and take note of the path to keyspaces_sample_table.csv.

2. Open AWS CloudShell in the AWS Management Console and connect to Amazon Keyspaces using the following command. Make sure to update us-east-1 with your own Region.

   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl

3. At the cqlsh prompt (cqlsh>), specify a keyspace.

   USE catalog ;

4. Set write consistency to LOCAL_QUORUM. For more information about supported consistency levels, see the section called “Write consistency levels”.

   CONSISTENCY LOCAL_QUORUM;

5. In AWS CloudShell, choose Actions at the top right of the screen, and then choose Upload file to upload the CSV file you downloaded earlier. Take note of the path to the file.

6. At the keyspace prompt (cqlsh:catalog>), run the following statement.

   COPY book_awards (award, year, category, rank, author, book_title, publisher) FROM '/home/cloudshell-user/keyspaces_sample_table.csv' WITH header=TRUE ;

   The output of the statement should look similar to this.

   cqlsh:catalog> COPY book_awards (award, year, category, rank, author, book_title, publisher) FROM '/home/cloudshell-user/keyspaces_sample_table.csv' WITH delimiter=',' AND header=TRUE ;
   cqlsh current consistency level is LOCAL_QUORUM.
   Reading options from /home/cloudshell-user/.cassandra/cqlshrc:[copy]: {'numprocesses': '16', 'maxattempts': '1000'}
   Reading options from /home/cloudshell-user/.cassandra/cqlshrc:[copy-from]: {'ingestrate': '1500', 'maxparseerrors': '1000', 'maxinserterrors': '-1', 'maxbatchsize': '10', 'minbatchsize': '1', 'chunksize': '30'}
   Reading options from the command line: {'delimiter': ',', 'header': 'TRUE'}
   Using 16 child processes

   Starting copy of catalog.book_awards with columns [award, year, category, rank, author, book_title, publisher].
   OSError: handle is closed
   Processed: 9 rows; Rate: 0 rows/s; Avg. rate: 0 rows/s
   9 rows imported from 1 files in 0 day, 0 hour, 0 minute, and 26.706 seconds (0 skipped).

7. Verify that the data was correctly added to your table by running the following query.

   SELECT * FROM book_awards ;

   You should see the following output.

   year | award            | category    | rank | author             | book_title            | publisher
   ------+------------------+-------------+------+--------------------+-----------------------+---------------
   2020 | Wolf             | Non-Fiction |    1 | Wang Xiulan        | History of Ideas      | Example Books
   2020 | Wolf             | Non-Fiction |    2 | Ana Carolina Silva | Science Today         | SomePublisher
   2020 | Wolf             | Non-Fiction |    3 | Shirley Rodriguez  | The Future of Sea Ice | AnyPublisher
   2020 | Kwesi Manu Prize | Fiction     |    1 | Akua Mansa         | Where did you go?     | SomePublisher
   2020 | Kwesi Manu Prize | Fiction     |    2 | John Stiles        | Yesterday             | Example Books
   2020 | Kwesi Manu Prize | Fiction     |    3 | Nikki Wolf         | Moving to the Chateau | AnyPublisher
   2020 | Richard Roe      | Fiction     |    1 | Alejandro Rosalez  | Long Summer           | SomePublisher
   2020 | Richard Roe      | Fiction     |    2 | Arnav Desai        | The Key               | Example Books
   2020 | Richard Roe      | Fiction     |    3 | Mateo Jackson      | Inside the Whale      | AnyPublisher

   (9 rows)

To learn more about using cqlsh COPY to upload data from CSV files to an Amazon Keyspaces table, see the section called “Loading data using cqlsh”.
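cqlsh COPY also works in the opposite direction. As a sketch, the following command exports the table back to a CSV file; the output path shown here is illustrative, not a step in this tutorial.

COPY book_awards (award, year, category, rank, author, book_title, publisher) TO '/home/cloudshell-user/book_awards_export.csv' WITH HEADER=TRUE ;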
Read data from a table using the CQL SELECT statement in Amazon Keyspaces

In the Inserting and loading data into an Amazon Keyspaces table section, you used the SELECT statement to verify that you had successfully added data to your table. In this section, you refine your use of SELECT to display specific columns, and only rows that meet specific criteria. The general form of the SELECT statement is as follows.

SELECT column_list FROM table_name [WHERE condition [ALLOW FILTERING]] ;

Topics
• Select all the data in your table
• Select a subset of columns
• Select a subset of rows

Select all the data in your table

The simplest form of the SELECT statement returns all the data in your table.

Important
In a production environment, it's typically not a best practice to run this command, because it returns all the data in your table.

To select all your table's data

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update us-east-1 with your own Region.

   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl

2. Run the following query.

   SELECT * FROM catalog.book_awards ;

   Using the wild-card character ( * ) for the column_list selects all columns. The output of the statement looks like the following example.

   year | award            | category    | rank | author             | book_title            | publisher
   ------+------------------+-------------+------+--------------------+-----------------------+---------------
   2020 | Wolf             | Non-Fiction |    1 | Wang Xiulan        | History of Ideas      | AnyPublisher
   2020 | Wolf             | Non-Fiction |    2 | Ana Carolina Silva | Science Today         | SomePublisher
   2020 | Wolf             | Non-Fiction |    3 | Shirley Rodriguez  | The Future of Sea Ice | AnyPublisher
   2020 | Kwesi Manu Prize | Fiction     |    1 | Akua Mansa         | Where did you go?     | SomePublisher
   2020 | Kwesi Manu Prize | Fiction     |    2 | John Stiles        | Yesterday             | Example Books
   2020 | Kwesi Manu Prize | Fiction     |    3 | Nikki Wolf         | Moving to the Chateau | AnyPublisher
   2020 | Richard Roe      | Fiction     |    1 | Alejandro Rosalez  | Long Summer           | SomePublisher
   2020 | Richard Roe      | Fiction     |    2 | Arnav Desai        | The Key               | Example Books
   2020 | Richard Roe      | Fiction     |    3 | Mateo Jackson      | Inside the Whale      | AnyPublisher

Select a subset of columns

To query for a subset of columns

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update us-east-1 with your own Region.

   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl

2. To retrieve only the award, category, and year columns, run the following query.

   SELECT award, category, year FROM catalog.book_awards ;

   The output contains only the specified columns in the order listed in the SELECT statement.

   award            | category    | year
   ------------------+-------------+------
   Wolf             | Non-Fiction | 2020
   Wolf             | Non-Fiction | 2020
   Wolf             | Non-Fiction | 2020
   Kwesi Manu Prize | Fiction     | 2020
   Kwesi Manu Prize | Fiction     | 2020
   Kwesi Manu Prize | Fiction     | 2020
   Richard Roe      | Fiction     | 2020
   Richard Roe      | Fiction     | 2020
   Richard Roe      | Fiction     | 2020

Select a subset of rows

When querying a large dataset, you might only want records that meet certain criteria. To do this, you can append a WHERE clause to the end of your SELECT statement.

To query for a subset of rows

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update us-east-1 with your own Region.

   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl

2. To retrieve only the records for the awards of a given year, run the following query.
   SELECT * FROM catalog.book_awards WHERE year=2020 AND award='Wolf' ;

   The preceding SELECT statement returns the following output.

   year | award | category    | rank | author             | book_title            | publisher
   ------+-------+-------------+------+--------------------+-----------------------+---------------
   2020 | Wolf  | Non-Fiction |    1 | Wang Xiulan        | History of Ideas      | AnyPublisher
   2020 | Wolf  | Non-Fiction |    2 | Ana Carolina Silva | Science Today         | SomePublisher
   2020 | Wolf  | Non-Fiction |    3 | Shirley Rodriguez  | The Future of Sea Ice | AnyPublisher

Understanding the WHERE clause

The WHERE clause is used to filter the data and return only the data that meets the specified criteria. The specified criteria can be a simple condition or a compound condition.

How to use conditions in a WHERE clause

• A simple condition – A single column.

  WHERE column_name=value

  You can use a simple condition in a WHERE clause if any of the following conditions are met:
  • The column is the only partition key column of the table.
  • You add ALLOW FILTERING after the condition in the WHERE clause. Be aware that using ALLOW FILTERING can result in inconsistent performance, especially with large, multi-partitioned tables.

• A compound condition – Multiple simple conditions connected by AND.

  WHERE column_name1=value1 AND column_name2=value2 AND column_name3=value3...

  You can use compound conditions in a WHERE clause if any of the following conditions are met:
  • The columns you use in the WHERE clause include either all or a subset of the columns in the table's partition key. If you want to use only a subset of the columns in the WHERE clause, you must include a contiguous set of partition key columns from left to right, beginning with the partition key's leading column. For example, if the partition key columns are year, month, and award, then you can use the following columns in the WHERE clause:
    • year
    • year AND month
    • year AND month AND award
  • You add ALLOW FILTERING after the compound condition in the WHERE clause, as in the following example.

    SELECT * FROM my_table WHERE col1=5 AND col2='Bob' ALLOW FILTERING ;

    Be aware that using ALLOW FILTERING can result in inconsistent performance, especially with large, multi-partitioned tables.

Try it

Create your own CQL queries to find the following from your book_awards table:

• Find the winners of the 2020 Wolf awards and display the book titles and authors, ordered by rank.
• Show the first prize winners for all awards in 2020 and display the book titles and award names.

One possible solution for each exercise follows.
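Treat the following queries as sketches; other valid solutions exist. The first query restricts the full partition key, so ORDER BY can sort on the clustering columns (because category comes before rank in the clustering order, rows are ordered by rank within each category). The second query filters on rank, which is not a partition key column, so it requires ALLOW FILTERING.

SELECT book_title, author, rank FROM catalog.book_awards
WHERE year=2020 AND award='Wolf'
ORDER BY category ASC, rank ASC ;

SELECT book_title, award FROM catalog.book_awards
WHERE year=2020 AND rank=1 ALLOW FILTERING ;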
Update data in an Amazon Keyspaces table using CQL

To update the data in your book_awards table, use the UPDATE statement. The general form of the UPDATE statement is as follows.

UPDATE table_name SET column_name=new_value WHERE primary_key=value ;

Tip

• You can update multiple columns by using a comma-separated list of column_names and values, as in the following example.

  UPDATE my_table SET col1='new_value_1', col2='new_value2' WHERE col3='1' ;

• If the primary key is composed of multiple columns, all primary key columns and their values must be included in the WHERE clause.
• You cannot update any column in the primary key, because that would change the primary key for the record.

To update a single cell

Using your book_awards table, change the name of the publisher for the winner of the non-fiction Wolf award in 2020.

UPDATE book_awards SET publisher='new Books' WHERE year = 2020 AND award='Wolf' AND category='Non-Fiction' AND rank=1;

Verify that the publisher is now new Books.

SELECT * FROM book_awards WHERE year = 2020 AND award='Wolf' AND category='Non-Fiction' AND rank=1;

The statement should return the following output.

year | award | category    | rank | author      | book_title       | publisher
------+-------+-------------+------+-------------+------------------+-----------
2020 | Wolf  | Non-Fiction |    1 | Wang Xiulan | History of Ideas | new Books

Try it

Advanced: The winner of the 2020 fiction "Kwesi Manu Prize" has changed their name. Update this record to change the name to 'Akua Mansa-House'. A sample solution follows.
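Assuming the record in question is the rank 1 winner, one possible solution is the following sketch, which updates the regular author column by naming the full primary key in the WHERE clause.

UPDATE catalog.book_awards SET author='Akua Mansa-House'
WHERE year=2020 AND award='Kwesi Manu Prize' AND category='Fiction' AND rank=1;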
Delete data from a table using the CQL DELETE statement

To delete data in your book_awards table, use the DELETE statement. You can delete data from a row or from a partition. Be careful when deleting data, because deletions are irreversible.

Deleting one or all rows from a table doesn't delete the table, so you can repopulate it with data. Deleting a table deletes the table and all data in it. To use the table again, you must re-create it and add data to it. Deleting a keyspace deletes the keyspace and all tables within it. To use the keyspace and its tables again, you must re-create them, and then populate them with data.

You can use Amazon Keyspaces point-in-time recovery (PITR) to help restore deleted tables. To learn more, see the section called “Backup and restore with point-in-time recovery”. To learn how to restore a deleted table with PITR enabled, see the section called “Restore a deleted table”.
Delete cells

Deleting a column from a row removes the data from the specified cell. When you display that column using a SELECT statement, the data is displayed as null, though a null value is not stored in that location.

The general syntax to delete one or more specific columns is as follows.

DELETE column_name1[, column_name2...] FROM table_name WHERE condition ;

In your book_awards table, you can see that the title of the book that won first prize in the 2020 "Richard Roe" award is "Long Summer". Imagine that this title has been recalled, and you need to delete the data from this cell.

To delete a specific cell

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update us-east-1 with your own Region.

   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl

2. Run the following DELETE query.

   DELETE book_title FROM catalog.book_awards WHERE year=2020 AND award='Richard Roe' AND category='Fiction' AND rank=1;

3. Verify that the delete request was made as expected.

   SELECT * FROM catalog.book_awards WHERE year=2020 AND award='Richard Roe' AND category='Fiction' AND rank=1;

   The output of this statement looks like this.

   year | award       | category | rank | author            | book_title | publisher
   ------+-------------+----------+------+-------------------+------------+---------------
   2020 | Richard Roe | Fiction  |    1 | Alejandro Rosalez |       null | SomePublisher

Delete rows

There might be a time when you need to delete an entire row, for example to meet a data deletion request. The general syntax for deleting a row is as follows.

DELETE FROM table_name WHERE condition ;

To delete a row

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update us-east-1 with your own Region.

   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl

2. Run the following DELETE query.

   DELETE FROM catalog.book_awards WHERE year=2020 AND award='Richard Roe' AND category='Fiction' AND rank=1;

3. Verify that the delete was made as expected.

   SELECT * FROM catalog.book_awards WHERE year=2020 AND award='Richard Roe' AND category='Fiction' AND rank=1;

   The output of this statement looks like this after the row has been deleted.

   year | award | category | rank | author | book_title | publisher
   ------+-------+----------+------+--------+------------+-----------

   (0 rows)

You can delete expired data automatically from your table using Amazon Keyspaces Time to Live. For more information, see the section called “Expire data with Time to Live”. A brief sketch follows.
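As an illustration of how Time to Live is commonly configured, the following statement sets a table-wide default expiration of 30 days (2,592,000 seconds). The value is illustrative, and this is not a step in this tutorial; see the linked section for details and prerequisites before using TTL on your own tables.

ALTER TABLE catalog.book_awards WITH default_time_to_live = 2592000 ;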
Delete a table in Amazon Keyspaces

To avoid being charged for tables and data that you don't need, delete all the tables that you're not using. When you delete a table, the table and its data are deleted and you stop accruing charges for them. However, the keyspace remains. When you delete a keyspace, the keyspace and all its tables are deleted and you stop accruing charges for them.

You can delete a table using the console, CQL, or the AWS CLI. When you delete a table, the table and all its data are deleted.

Using the console

The following procedure deletes a table and all its data using the AWS Management Console.

To delete a table using the console

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Tables.
3. Choose the box to the left of the name of each table that you want to delete.
4. Choose Delete.
5. On the Delete table screen, enter Delete in the box. Then, choose Delete table.
6. To verify that the table was deleted, choose Tables in the navigation pane, and confirm that the book_awards table is no longer listed.

Using CQL

The following procedure deletes a table and all its data using CQL.

To delete a table using CQL

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update us-east-1 with your own Region.

   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl

2. Delete your table by entering the following statement.

   DROP TABLE IF EXISTS catalog.book_awards ;

3. Verify that your table was deleted.

   SELECT * FROM system_schema.tables WHERE keyspace_name = 'catalog' ;

   The output should look like this. Note that this might take some time, so re-run the statement after a minute if you don't see this result.

   keyspace_name | table_name | bloom_filter_fp_chance | caching | cdc | comment | compaction | compression | crc_check_chance | dclocal_read_repair_chance | default_time_to_live | extensions | flags | gc_grace_seconds | id | max_index_interval | memtable_flush_period_in_ms | min_index_interval | read_repair_chance | speculative_retry
   ---------------+------------+------------------------+---------+-----+---------+------------+-------------+------------------+----------------------------+----------------------+------------+-------+------------------+----+--------------------+-----------------------------+--------------------+--------------------+-------------------

   (0 rows)
Using the AWS CLI

The following procedure deletes a table and all its data using the AWS CLI.

To delete a table using the AWS CLI

1. Open AWS CloudShell.
2. Delete your table with the following statement.

   aws keyspaces delete-table --keyspace-name 'catalog' --table-name 'book_awards'

3. To verify that your table was deleted, you can list all tables in a keyspace.

   aws keyspaces list-tables --keyspace-name 'catalog'

   You should see the following output. Note that this asynchronous operation can take some time. Re-run the command again after a short while to confirm that the table has been deleted.

   {
       "tables": []
   }

Delete a keyspace in Amazon Keyspaces

To avoid being charged for keyspaces, delete all the keyspaces that you're not using. When you delete a keyspace, the keyspace and all its tables are deleted and you stop accruing charges for them. You can delete a keyspace using either the console, CQL, or the AWS CLI.

Using the console

The following procedure deletes a keyspace and all its tables and data using the console.

To delete a keyspace using the console

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Keyspaces.
3. Choose the box to the left of the name of each keyspace that you want to delete.
4. Choose Delete.
5. On the Delete keyspace screen, enter Delete in the box. Then, choose Delete keyspace.
6. To verify that the keyspace catalog was deleted, choose Keyspaces in the navigation pane and confirm that it is no longer listed. Because you deleted its keyspace, the book_awards table under Tables should also not be listed.

Using CQL

The following procedure deletes a keyspace and all its tables and data using CQL.

To delete a keyspace using CQL

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update us-east-1 with your own Region.

   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl

2. Delete your keyspace by entering the following statement.

   DROP KEYSPACE IF EXISTS catalog ;

3. Verify that your keyspace was deleted.

   SELECT * from system_schema.keyspaces ;

   Your keyspace should not be listed. Note that because this is an asynchronous operation, there can be a delay until the keyspace is deleted.
After the keyspace has been deleted, the output of the statement should look like this.

keyspace_name           | durable_writes | replication
-------------------------+----------------+--------------------------------------------------------------------------------------
system_schema           |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
system_schema_mcs       |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
system                  |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
system_multiregion_info |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}

(4 rows)

Using the AWS CLI

The following procedure deletes a keyspace and all its tables and data using the AWS CLI.

To delete a keyspace using the AWS CLI

1. Open AWS CloudShell.
2. Delete your keyspace by entering the following statement.

   aws keyspaces delete-keyspace --keyspace-name 'catalog'

3. Verify that your keyspace was deleted.

   aws keyspaces list-keyspaces

   The output of this statement should look similar to this. Note that because this is an asynchronous operation, there can be a delay until the keyspace is deleted.

   {
       "keyspaces": [
           {
               "keyspaceName": "system_schema",
               "resourceArn": "arn:aws:cassandra:us-east-1:123SAMPLE012:/keyspace/system_schema/",
               "replicationStrategy": "SINGLE_REGION"
           },
           {
               "keyspaceName": "system_schema_mcs",
               "resourceArn": "arn:aws:cassandra:us-east-1:123SAMPLE012:/keyspace/system_schema_mcs/",
               "replicationStrategy": "SINGLE_REGION"
           },
           {
               "keyspaceName": "system",
               "resourceArn": "arn:aws:cassandra:us-east-1:123SAMPLE012:/keyspace/system/",
               "replicationStrategy": "SINGLE_REGION"
           },
           {
               "keyspaceName": "system_multiregion_info",
               "resourceArn": "arn:aws:cassandra:us-east-1:123SAMPLE012:/keyspace/system_multiregion_info/",
               "replicationStrategy": "SINGLE_REGION"
           }
       ]
   }

Tutorials and solutions for Amazon Keyspaces

This topic includes various tutorials that walk through the detailed steps that are required to connect to Amazon Keyspaces programmatically, or to use Amazon Keyspaces with other AWS services to create solutions. The available tutorials cover different popular cross-service access scenarios and demonstrate how to integrate Amazon Keyspaces with Amazon Virtual Private Cloud, Amazon Elastic Kubernetes Service, Amazon Simple Storage Service, and AWS Glue, or with open source technologies like Apache Spark. For step-by-step tutorials that show how to connect to Amazon Keyspaces using Apache Cassandra drivers, see the section called “Using a Cassandra client driver”.
Topics
• Tutorial: Connect to Amazon Keyspaces using an interface VPC endpoint
• Tutorial: Integrate with Apache Spark to import or export data
• Tutorial: Connect from a containerized application hosted on Amazon Elastic Kubernetes Service
• Tutorial: Export an Amazon Keyspaces table to Amazon S3 using AWS Glue

Tutorial: Connect to Amazon Keyspaces using an interface VPC endpoint

This tutorial walks you through setting up and using an interface VPC endpoint for Amazon Keyspaces. Interface VPC endpoints enable private communication between your virtual private cloud (VPC) running in Amazon VPC and Amazon Keyspaces. Interface VPC endpoints are powered by AWS PrivateLink, which is an AWS service that enables private communication between VPCs and AWS services. For more information, see the section called “Using interface VPC endpoints”.

Topics
• Tutorial prerequisites and considerations
• Step 1: Launch an Amazon EC2 instance
• Step 2: Configure your Amazon EC2 instance
• Step 3: Create a VPC endpoint for Amazon Keyspaces
• Step 4: Configure permissions for the VPC endpoint connection
• Step 5: Configure monitoring with CloudWatch
• Step 6: (Optional) Best practices to configure the connection pool size for your application
• Step 7: (Optional) Clean up

Tutorial prerequisites and considerations

Before you start this tutorial, follow the AWS setup instructions in Accessing Amazon Keyspaces (for Apache Cassandra). These steps include signing up for AWS and creating an AWS Identity and Access Management (IAM) principal with access to Amazon Keyspaces. Take note of the name of the IAM user and the access keys because you'll need them later in this tutorial.

Create a keyspace with the name myKeyspace and at least one table to test the connection using the VPC endpoint later in this tutorial. You can find detailed instructions in Getting started.

After completing the prerequisite steps, proceed to Step 1: Launch an Amazon EC2 instance.

Step 1: Launch an Amazon EC2 instance

In this step, you launch an Amazon EC2 instance in your default Amazon VPC. You can then create and use a VPC endpoint for Amazon Keyspaces.

To launch an Amazon EC2 instance

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. Choose Launch Instance and do the following:

   From the EC2 console dashboard, in the Launch instance box, choose Launch instance, and then choose Launch instance from the options that appear.

   Under Name and tags, for Name, enter a descriptive name for your instance.

   Under Application and OS Images (Amazon Machine Image):
   • Choose Quick Start, and then choose Ubuntu. This is the operating system (OS) for your instance.
   • Under Amazon Machine Image (AMI), you can use the default image that is marked as Free tier eligible. An Amazon Machine Image (AMI) is a basic configuration that serves as a template for your instance.

   Under Instance type:
   • From the Instance type list, choose the t2.micro instance type, which is selected by default.

   Under Key pair (login), for Key pair name, choose one of the following options for this tutorial:
   • If you don't have an Amazon EC2 key pair, choose Create a new key pair and follow the instructions. You will be asked to download a private key file (.pem file). You will need this file later when you log in to your Amazon EC2 instance, so take note of the file path.
   • If you already have an existing Amazon EC2 key pair, go to Select a key pair and choose your key pair from the list. You must already have the private key file (.pem file) available in order to log in to your Amazon EC2 instance.
   Under Network settings:
   • Choose Edit.
   • Choose Select an existing security group.
   • In the list of security groups, choose default. This is the default security group for your VPC.

   Continue to Summary.
   • Review a summary of your instance configuration in the Summary panel. When you're ready, choose Launch instance.

3. On the completion screen for the new Amazon EC2 instance, choose the Connect to instance tile. The next screen shows the necessary information and the required steps to connect to your new instance. Take note of the following information:
   • The sample command to protect the key file
   • The connection string
   • The Public IPv4 DNS name

   After taking note of the information on this page, you can continue to the next step in this tutorial (Step 2: Configure your Amazon EC2 instance).

   Note
   It takes a few minutes for your Amazon EC2 instance to become available. Before you go on to the next step, ensure that the Instance State is running and that all of its Status Checks have passed.

Step 2: Configure your Amazon EC2 instance

When your Amazon EC2 instance is available, you can log into it and prepare it for first use.

Note
The following steps assume that you're connecting to your Amazon EC2 instance from a computer running Linux. For other ways to connect, see Connect to your Linux instance in the Amazon EC2 User Guide.

To configure your Amazon EC2 instance

1. You need to authorize inbound SSH traffic to your Amazon EC2 instance. To do this, create a new EC2 security group, and then assign the security group to your EC2 instance.
   a. In the navigation pane, choose Security Groups.
   b. Choose Create Security Group. In the Create Security Group window, do the following:
      • Security group name – Enter a name for your security group. For example: my-ssh-access
      • Description – Enter a short description for the security group.
      • VPC – Choose your default VPC.
      • In the Inbound rules section, choose Add Rule and do the following:
        • Type – Choose SSH.
        • Source – Choose My IP.
        • Choose Add rule.
      On the bottom of the page, confirm the configuration settings and choose Create Security Group.
   c. In the navigation pane, choose Instances.
   d. Choose the Amazon EC2 instance that you launched in Step 1: Launch an Amazon EC2 instance.
   e. Choose Actions, choose Security, and then choose Change Security Groups.
   f. In Change Security Groups, select the security group that you created earlier in this procedure (for example, my-ssh-access). The existing default security group should also be selected. Confirm the configuration settings and choose Assign Security Groups.

2. Use the following command to protect your private key file from access. If you skip this step, the connection fails.

   chmod 400 path_to_file/my-keypair.pem

3. Use the ssh command to log in to your Amazon EC2 instance, as in the following example.

   ssh -i path_to_file/my-keypair.pem ubuntu@public-dns-name

   You need to specify your private key file (.pem file) and the public DNS name of your instance. (See Step 1: Launch an Amazon EC2 instance.) The login ID is ubuntu. No password is required.
   For more information about allowing connections to your Amazon EC2 instance and for AWS CLI instructions, see Authorize inbound traffic for your Linux instances in the Amazon EC2 User Guide.

4. Download and install the latest version of the AWS Command Line Interface.
   a. Install unzip.

      sudo apt install unzip

   b. Download the zip file with the AWS CLI.

      curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

   c. Unzip the file.

      unzip awscliv2.zip

   d. Install the AWS CLI.

      sudo ./aws/install

   e. Confirm the version of the AWS CLI installation.

      aws --version

      The output should look like this:

      aws-cli/2.9.19 Python/3.9.11 Linux/5.15.0-1028-aws exe/x86_64.ubuntu.22 prompt/off

5. Configure your AWS credentials, as shown in the following example. Enter your AWS access key ID, secret key, and default Region name when prompted.

   aws configure
   AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
   AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
   Default region name [None]: us-east-1
   Default output format [None]:

6. You have to use a cqlsh connection to Amazon Keyspaces to confirm that your VPC endpoint has been configured correctly. If you use your local environment or the Amazon Keyspaces CQL editor in the AWS Management Console, the connection automatically goes through the public endpoint instead of your VPC endpoint. To use cqlsh to test your VPC endpoint connection in this tutorial, complete the setup instructions in Using cqlsh to connect to Amazon Keyspaces.

You are now ready to create a VPC endpoint for Amazon Keyspaces.

Step 3: Create a VPC endpoint for Amazon Keyspaces

In this step, you create a VPC endpoint for Amazon Keyspaces using the AWS CLI. To create the VPC endpoint using the VPC console, you can follow the Create a VPC endpoint instructions in the AWS PrivateLink Guide. When filtering for the Service name, enter Cassandra.
To create a VPC endpoint using the AWS CLI

1. Before you begin, verify that you can communicate with Amazon Keyspaces using its public endpoint.

   aws keyspaces list-tables --keyspace-name 'myKeyspace'

   The output shows a list of Amazon Keyspaces tables that are contained in the specified keyspace. If you don't have any tables, the list is empty.

   {
       "tables": [
           {
               "keyspaceName": "myKeyspace",
               "tableName": "myTable1",
               "resourceArn": "arn:aws:cassandra:us-east-1:111122223333:/keyspace/myKeyspace/table/myTable1"
           },
           {
               "keyspaceName": "myKeyspace",
               "tableName": "myTable2",
               "resourceArn": "arn:aws:cassandra:us-east-1:111122223333:/keyspace/myKeyspace/table/myTable2"
           }
       ]
   }

2. Verify that Amazon Keyspaces is an available service for creating VPC endpoints in the current AWS Region. (The command is shown first, followed by example output.)

   aws ec2 describe-vpc-endpoint-services

   {
       "ServiceNames": [
           "com.amazonaws.us-east-1.cassandra",
           "com.amazonaws.us-east-1.cassandra-fips"
       ]
   }

   In the example output, Amazon Keyspaces is one of the services available, so you can proceed with creating a VPC endpoint for it.

3. Determine your VPC identifier.

   aws ec2 describe-vpcs

   {
       "Vpcs": [
           {
               "VpcId": "vpc-a1234bcd",
               "InstanceTenancy": "default",
               "State": "available",
               "DhcpOptionsId": "dopt-8454b7e1",
               "CidrBlock": "111.31.0.0/16",
               "IsDefault": true
           }
       ]
   }

   In the example output, the VPC ID is vpc-a1234bcd.

4. Use a filter to gather details about the subnets of the VPC.
   aws ec2 describe-subnets --filters "Name=vpc-id,Values=vpc-a1234bcd"

   {
       "Subnets": [
           {
               "AvailabilityZone": "us-east-1a",
               "AvailabilityZoneId": "use2-az1",
               "AvailableIpAddressCount": 4085,
               "CidrBlock": "111.31.0.0/20",
               "DefaultForAz": true,
               "MapPublicIpOnLaunch": true,
               "MapCustomerOwnedIpOnLaunch": false,
               "State": "available",
               "SubnetId": "subnet-920aacf9",
               "VpcId": "vpc-a1234bcd",
               "OwnerId": "111122223333",
               "AssignIpv6AddressOnCreation": false,
               "Ipv6CidrBlockAssociationSet": [],
               "SubnetArn": "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-920aacf9",
               "EnableDns64": false,
               "Ipv6Native": false,
               "PrivateDnsNameOptionsOnLaunch": {
                   "HostnameType": "ip-name",
                   "EnableResourceNameDnsARecord": false,
                   "EnableResourceNameDnsAAAARecord": false
               }
           },
           {
               "AvailabilityZone": "us-east-1c",
               "AvailabilityZoneId": "use2-az3",
               "AvailableIpAddressCount": 4085,
               "CidrBlock": "111.31.32.0/20",
               "DefaultForAz": true,
               "MapPublicIpOnLaunch": true,
               "MapCustomerOwnedIpOnLaunch": false,
               "State": "available",
               "SubnetId": "subnet-4c713600",
               "VpcId": "vpc-a1234bcd",
               "OwnerId": "111122223333",
               "AssignIpv6AddressOnCreation": false,
               "Ipv6CidrBlockAssociationSet": [],
               "SubnetArn": "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-4c713600",
               "EnableDns64": false,
               "Ipv6Native": false,
               "PrivateDnsNameOptionsOnLaunch": {
                   "HostnameType": "ip-name",
                   "EnableResourceNameDnsARecord": false,
                   "EnableResourceNameDnsAAAARecord": false
               }
           },
           {
               "AvailabilityZone": "us-east-1b",
               "AvailabilityZoneId": "use2-az2",
               "AvailableIpAddressCount": 4086,
               "CidrBlock": "111.31.16.0/20",
               "DefaultForAz": true,
               "MapPublicIpOnLaunch": true
           }
       ]
   }

   In the example output, there are two available subnet IDs: subnet-920aacf9 and subnet-4c713600.

5. Create the VPC endpoint. For the --vpc-id parameter, specify the VPC ID from the previous step. For the --subnet-id parameter, specify the subnet IDs from the previous step. Use the --vpc-endpoint-type parameter to define the endpoint as an interface. For more information about the command, see create-vpc-endpoint in the AWS CLI Command Reference.
aws ec2 create-vpc-endpoint --vpc-endpoint-type Interface --vpc-id vpc-a1234bcd --service-name com.amazonaws.us-east-1.cassandra --subnet-ids subnet-920aacf9 subnet-4c713600

{
    "VpcEndpoint": {
        "VpcEndpointId": "vpce-000ab1cdef23456789",
        "VpcEndpointType": "Interface",
        "VpcId": "vpc-a1234bcd",
        "ServiceName": "com.amazonaws.us-east-1.cassandra",
        "State": "pending",
        "RouteTableIds": [],
        "SubnetIds": [
            "subnet-920aacf9",
            "subnet-4c713600"
        ],
        "Groups": [
            {
                "GroupId": "sg-ac1b0e8d",
                "GroupName": "default"
            }
        ],
        "IpAddressType": "ipv4",
        "DnsOptions": {
            "DnsRecordIpType": "ipv4"
        },
        "PrivateDnsEnabled": true,
        "RequesterManaged": false,
        "NetworkInterfaceIds": [
            "eni-043c30c78196ad82e",
            "eni-06ce37e3fd878d9fa"
        ],
        "DnsEntries": [
            {
                "DnsName": "vpce-000ab1cdef23456789-m2b22rtz.cassandra.us-east-1.vpce.amazonaws.com",
                "HostedZoneId": "Z7HUB22UULQXV"
            },
            {
                "DnsName": "vpce-000ab1cdef23456789-m2b22rtz-us-east-1a.cassandra.us-east-1.vpce.amazonaws.com",
                "HostedZoneId": "Z7HUB22UULQXV"
            },
            {
                "DnsName": "vpce-000ab1cdef23456789-m2b22rtz-us-east-1c.cassandra.us-east-1.vpce.amazonaws.com",
                "HostedZoneId": "Z7HUB22UULQXV"
            },
            {
                "DnsName": "vpce-000ab1cdef23456789-m2b22rtz-us-east-1b.cassandra.us-east-1.vpce.amazonaws.com",
                "HostedZoneId": "Z7HUB22UULQXV"
            },
            {
                "DnsName": "vpce-000ab1cdef23456789-m2b22rtz-us-east-1d.cassandra.us-east-1.vpce.amazonaws.com",
                "HostedZoneId": "Z7HUB22UULQXV"
            },
            {
                "DnsName": "cassandra.us-east-1.amazonaws.com",
                "HostedZoneId": "ZONEIDPENDING"
            }
        ],
        "CreationTimestamp": "2023-01-27T16:12:36.834000+00:00",
        "OwnerId": "111122223333"
    }
}
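In the example output, the endpoint's State is still pending. If you script this step, you can wait for the endpoint to become available before you continue. The following sketch uses the AWS SDK for Java v2 for illustration; the endpoint ID is the example value from the output above, and the polling interval is an arbitrary choice.

import software.amazon.awssdk.services.ec2.Ec2Client;

public class WaitForEndpoint {
    public static void main(String[] args) throws InterruptedException {
        try (Ec2Client ec2 = Ec2Client.create()) {
            while (true) {
                // Describe the endpoint created above and read its state.
                String state = ec2
                    .describeVpcEndpoints(r -> r.vpcEndpointIds("vpce-000ab1cdef23456789"))
                    .vpcEndpoints().get(0).stateAsString();
                System.out.println("Endpoint state: " + state);
                if ("available".equalsIgnoreCase(state)) {
                    break;
                }
                Thread.sleep(10_000); // poll every 10 seconds
            }
        }
    }
}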
Step 4: Configure permissions for the VPC endpoint connection

The procedures in this step demonstrate how to configure rules and permissions for using the VPC endpoint with Amazon Keyspaces.

To configure an inbound rule for the new endpoint to allow TCP inbound traffic

1. In the Amazon VPC console, on the left-side panel, choose Endpoints and choose the endpoint you created in the earlier step.
2. Choose Security groups and then choose the security group associated with this endpoint.
3. Choose Inbound rules and then choose Edit inbound rules.
4. Add an inbound rule with Type as CQLSH / CASSANDRA. This automatically sets the Port range to 9142.
5. To save the new inbound rule, choose Save rules.

To configure IAM user permissions

1. Confirm that the IAM user used to connect to Amazon Keyspaces has the appropriate permissions. In AWS Identity and Access Management (IAM), you can use the AWS managed policy AmazonKeyspacesReadOnlyAccess to grant the IAM user read access to Amazon Keyspaces.

a. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
b. On the IAM console dashboard, choose Users, and then choose your IAM user from the list.
c. On the Summary page, choose Add permissions.
d. Choose Attach existing policies directly.
e. From the list of policies, choose AmazonKeyspacesReadOnlyAccess, and then choose Next: Review.
f. Choose Add permissions.

2. Verify that you can access Amazon Keyspaces through the VPC endpoint.

aws keyspaces list-tables --keyspace-name 'myKeyspace'

If you want, you can try some other AWS CLI commands for Amazon Keyspaces. For more information, see the AWS CLI Command Reference.

Note
The minimum permissions required for an IAM user or role to access Amazon Keyspaces are read permissions to the system table, as shown in the following policy. For more information about policy-based permissions, see the section called “Identity-based policy examples”.

{
    "Version":"2012-10-17",
    "Statement":[
        {
            "Effect":"Allow",
            "Action":[
                "cassandra:Select"
            ],
            "Resource":[
                "arn:aws:cassandra:us-east-1:555555555555:/keyspace/system*"
            ]
        }
    ]
}

3. Grant the IAM user read access to the Amazon EC2 instance with the VPC.

When you use Amazon Keyspaces with VPC endpoints, you need to grant the IAM user or role that accesses Amazon Keyspaces read-only permissions to your Amazon EC2 instance and the VPC to gather endpoint and network interface data. Amazon Keyspaces stores this information in the system.peers table and uses it to manage connections.

Note
The managed policies AmazonKeyspacesReadOnlyAccess_v2 and AmazonKeyspacesFullAccess include the required permissions to let Amazon Keyspaces access the Amazon EC2 instance to read information about available interface VPC endpoints.

a. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
b. On the IAM console dashboard, choose Policies.
c. Choose Create policy, and then choose the JSON tab.
d. Copy the following policy and choose Next: Tags.
{
    "Version":"2012-10-17",
    "Statement":[
        {
            "Sid":"ListVPCEndpoints",
            "Effect":"Allow",
            "Action":[
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeVpcEndpoints"
            ],
            "Resource": "*"
        }
    ]
}

e. Choose Next: Review, enter the name keyspacesVPCendpoint for the policy, and choose Create policy.
f. On the IAM console dashboard, choose Users, and then choose your IAM user from the list.
g. On the Summary page, choose Add permissions.
h. Choose Attach existing policies directly.
i. From the list of policies, choose keyspacesVPCendpoint, and then choose Next: Review.
j. Choose Add permissions.

4. To verify that the Amazon Keyspaces system.peers table is getting updated with VPC information, run the following query from your Amazon EC2 instance using cqlsh. If you haven't already installed cqlsh on your Amazon EC2 instance in step 2, follow the instructions in the section called “Using the cqlsh-expansion”.

SELECT peer FROM system.peers;

The output returns nodes with private IP addresses, depending on your VPC and subnet setup in your AWS Region.

peer
---------------
112.11.22.123
112.11.22.124
112.11.22.125

Note
You have to use a cqlsh connection to Amazon Keyspaces to confirm that your VPC endpoint has been configured correctly. If you use your local environment or the Amazon Keyspaces CQL editor in the AWS Management Console, the connection automatically goes through the public endpoint instead of your VPC endpoint. If you see nine IP addresses, these are the entries Amazon Keyspaces automatically writes to the system.peers table for public endpoint connections.
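You can run the same check from application code. The following sketch (illustrative, not part of this tutorial's required steps) assumes a DataStax Java driver 4.x application whose application.conf already points at your VPC endpoint; the class name is a placeholder.

import com.datastax.oss.driver.api.core.CqlSession;

public class PeersCheck {
    public static void main(String[] args) {
        // The session picks up contact points and SSL/auth settings
        // from the application.conf on the classpath.
        try (CqlSession session = CqlSession.builder().build()) {
            session.execute("SELECT peer FROM system.peers")
                   .forEach(row -> System.out.println(row.getInetAddress("peer")));
        }
    }
}

As with the cqlsh check, private IP addresses indicate that traffic is flowing through your VPC endpoint.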
Step 5: Configure monitoring with CloudWatch

This step shows you how to use Amazon CloudWatch to monitor the VPC endpoint connection to Amazon Keyspaces.

AWS PrivateLink publishes data points to CloudWatch about your interface endpoints. You can use metrics to verify that your system is performing as expected. The AWS/PrivateLinkEndpoints namespace in CloudWatch includes the metrics for interface endpoints. For more information, see CloudWatch metrics for AWS PrivateLink in the AWS PrivateLink Guide.

To create a CloudWatch dashboard with VPC endpoint metrics

1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
2. In the navigation pane, choose Dashboards. Then choose Create dashboard. Enter a name for the dashboard and choose Create.
3. Under Add widget, choose Number.
4. Under Metrics, choose AWS/PrivateLinkEndpoints.
5. Choose Endpoint Type, Service Name, VPC Endpoint ID, VPC ID.
6. Select the metrics ActiveConnections and NewConnections, and choose Create Widget.
7. Save the dashboard.

The ActiveConnections metric is defined as the number of concurrent active connections that the endpoint received during the last one-minute period. The NewConnections metric is defined as the number of new connections that were established through the endpoint during the last one-minute period.

For more information about creating dashboards, see Create dashboard in the CloudWatch User Guide.
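If you prefer to check these metrics programmatically instead of through a dashboard, the following sketch uses the AWS SDK for Java v2 to read NewConnections for the last hour. The dimension name mirrors the one listed in the console procedure above, but it is an assumption here; verify it against the metrics in your account before relying on this.

import java.time.Instant;
import java.time.temporal.ChronoUnit;
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.Dimension;
import software.amazon.awssdk.services.cloudwatch.model.GetMetricStatisticsRequest;
import software.amazon.awssdk.services.cloudwatch.model.Statistic;

public class EndpointMetrics {
    public static void main(String[] args) {
        try (CloudWatchClient cw = CloudWatchClient.create()) {
            GetMetricStatisticsRequest request = GetMetricStatisticsRequest.builder()
                .namespace("AWS/PrivateLinkEndpoints")
                .metricName("NewConnections")
                .dimensions(Dimension.builder()
                    .name("VPC Endpoint ID")          // assumed dimension name
                    .value("vpce-000ab1cdef23456789") // example endpoint ID from Step 3
                    .build())
                .startTime(Instant.now().minus(1, ChronoUnit.HOURS))
                .endTime(Instant.now())
                .period(300) // 5-minute buckets
                .statistics(Statistic.SUM)
                .build();
            cw.getMetricStatistics(request).datapoints()
              .forEach(dp -> System.out.println(dp.timestamp() + " -> " + dp.sum()));
        }
    }
}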
Step 6: (Optional) Best practices to configure the connection pool size for your application

In this section, we outline how to determine the ideal connection pool size based on the query throughput requirements of your application.

Amazon Keyspaces allows a maximum of 3,000 CQL queries per second per TCP connection, but there's no limit on the number of connections that a driver can establish with Amazon Keyspaces. However, we recommend that you match the connection pool size to the requirements of your application and consider the available endpoints when you're using Amazon Keyspaces with VPC endpoint connections.

You configure the connection pool size in the client driver. For example, based on a local pool size of 2 and a VPC interface endpoint created across 3 Availability Zones, the driver establishes 6 connections for querying (7 in total, which includes a control connection). Using these 6 connections, you can support a maximum of 18,000 CQL queries per second. If your application needs to support 40,000 CQL queries per second, work backwards from the number of queries that are needed to determine the required connection pool size. To support 40,000 CQL queries per second, you need to configure the local pool size to be at least 5, which supports a minimum of 45,000 CQL queries per second.

You can monitor if you exceed the quota for the maximum number of operations per second, per connection by using the PerConnectionRequestRateExceeded CloudWatch metric in the AWS/Cassandra namespace. The PerConnectionRequestRateExceeded metric shows the number of requests to Amazon Keyspaces that exceed the quota for the per-connection request rate.

The code examples in this step show how to estimate and configure connection pooling when you're using interface VPC endpoints.
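To make the arithmetic above concrete before diving into driver-specific configuration, here is a small, illustrative helper (not part of the tutorial code) that derives the local pool size from a target request rate. It assumes the 3,000 CQL queries per second per connection quota and one endpoint per subnet (Availability Zone).

public final class PoolSizeEstimator {
    private static final int MAX_REQUESTS_PER_CONNECTION = 3_000;

    // targetQps: CQL queries per second the application must sustain.
    // subnetCount: subnets (Availability Zones) the VPC endpoint spans.
    public static int localPoolSize(int targetQps, int subnetCount) {
        int perPoolEntry = MAX_REQUESTS_PER_CONNECTION * subnetCount;
        // Round up so that capacity meets or exceeds the target.
        return (int) Math.ceil((double) targetQps / perPoolEntry);
    }

    public static void main(String[] args) {
        // 40,000 QPS across 3 subnets -> pool size 5 (a 45,000 QPS ceiling),
        // matching the worked example in this section.
        System.out.println(localPoolSize(40_000, 3));
    }
}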
Java

You can configure the number of connections per pool in the Java driver. For a complete example of a Java client driver connection, see the section called “Using a Cassandra Java client driver”.

When the client driver is started, first the control connection is established for administrative tasks, such as for schema and topology changes. Then the additional connections are created.

In the following example, the local pool size driver configuration is specified as 2. If the VPC endpoint is created across 3 subnets within the VPC, this results in 7 NewConnections in CloudWatch for the interface endpoint, as shown in the following formula.

NewConnections = 3 (VPC subnet endpoints created across) * 2 (pool size) + 1 (control connection)

datastax-java-driver {
    basic.contact-points = ["cassandra.us-east-1.amazonaws.com:9142"]
    advanced.auth-provider {
        class = PlainTextAuthProvider
        username = "ServiceUserName"
        password = "ServicePassword"
    }
    basic.load-balancing-policy {
        local-datacenter = "us-east-1"
        slow-replica-avoidance = false
    }
    advanced.ssl-engine-factory {
        class = DefaultSslEngineFactory
        truststore-path = "./src/main/resources/cassandra_truststore.jks"
        truststore-password = "my_password"
        hostname-validation = false
    }
    advanced.connection {
        pool.local.size = 2
    }
}

If the number of active connections doesn't match your configured pool size (aggregation across subnets) + 1 control connection, something is preventing the connections from being created.

Node.js

You can configure the number of connections per pool in the Node.js driver. For a complete example of a Node.js client driver connection, see the section called “Using a Cassandra Node.js client driver”.

For the following code example, the local pool size driver configuration is specified as 1. If the VPC endpoint is created across 4 subnets within the VPC, this results in 5 NewConnections in CloudWatch for the interface endpoint, as shown in the following formula.

NewConnections = 4 (VPC subnet endpoints created across) * 1 (pool size) + 1 (control connection)

const cassandra = require('cassandra-driver');
const fs = require('fs');
const types = cassandra.types;
const auth = new cassandra.auth.PlainTextAuthProvider('ServiceUserName', 'ServicePassword');
const sslOptions1 = {
    ca: [ fs.readFileSync('/home/ec2-user/sf-class2-root.crt', 'utf-8')],
    host: 'cassandra.us-east-1.amazonaws.com',
    rejectUnauthorized: true
};
const client = new cassandra.Client({
    contactPoints: ['cassandra.us-east-1.amazonaws.com'],
    localDataCenter: 'us-east-1',
    pooling: { coreConnectionsPerHost: { [types.distance.local]: 1 } },
    consistency: types.consistencies.localQuorum,
    queryOptions: { isIdempotent: true },
    authProvider: auth,
    sslOptions: sslOptions1,
    protocolOptions: { port: 9142 }
});

Step 7: (Optional) Clean up

If you want to delete the resources that you have created in this tutorial, follow these procedures.

To remove your VPC endpoint for Amazon Keyspaces

1. Log in to your Amazon EC2 instance.
2. Determine the VPC endpoint ID that is used for Amazon Keyspaces. The following command shows VPC endpoint information for all services; look for the entry whose ServiceName is com.amazonaws.us-east-1.cassandra, as in the example output.

aws ec2 describe-vpc-endpoints

{
    "VpcEndpoint": {
        "PolicyDocument": "{\"Version\":\"2008-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"*\",\"Resource\":\"*\"}]}",
        "VpcId": "vpc-0bbc736e",
        "State": "available",
        "ServiceName": "com.amazonaws.us-east-1.cassandra",
        "RouteTableIds": [],
        "VpcEndpointId": "vpce-9b15e2f2",
        "CreationTimestamp": "2017-07-26T22:00:14Z"
    }
}

In the example output, the VPC endpoint ID is vpce-9b15e2f2.

3. Delete the VPC endpoint.

aws ec2 delete-vpc-endpoints --vpc-endpoint-ids vpce-9b15e2f2

{
    "Unsuccessful": []
}

The empty array [] indicates success (there were no unsuccessful requests).

To terminate your Amazon EC2 instance

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Instances.
3. Choose your Amazon EC2 instance.
4. Choose Actions, choose Instance State, and then choose Terminate.
5. In the confirmation window, choose Yes, Terminate.

Tutorial: Integrate with Apache Spark to import or export data

Apache Spark is an open-source engine for large-scale data analytics. Apache Spark enables you to perform analytics on data stored in Amazon Keyspaces more efficiently. You can also use Amazon Keyspaces to provide applications with consistent, single-digit-millisecond read access to analytics data from Spark. The open-source Spark Cassandra Connector simplifies reading and writing data between Amazon Keyspaces and Spark.

Amazon Keyspaces support for the Spark Cassandra Connector streamlines running Cassandra workloads in Spark-based analytics pipelines by using a fully managed and serverless database service. With Amazon Keyspaces, you don't need to worry about Spark competing for the same underlying infrastructure resources as your tables. Amazon Keyspaces tables scale up and down automatically based on your application traffic.

The following tutorial walks you through the steps and best practices required to read and write data to Amazon Keyspaces using the Spark Cassandra Connector. The tutorial demonstrates how to migrate data to Amazon Keyspaces by loading data from a file with the Spark Cassandra Connector and writing it to an Amazon Keyspaces table. Then, the tutorial shows how to read the data back from Amazon Keyspaces using the Spark Cassandra Connector. You would do this to run Cassandra workloads in Spark-based analytics pipelines.

Topics
• Prerequisites for establishing connections to Amazon Keyspaces with the Spark Cassandra Connector
• Step 1: Configure Amazon Keyspaces for integration with the Apache Cassandra Spark Connector
• Step 2: Configure the Apache Cassandra Spark Connector
• Step 3: Create the application configuration file
• Step 4: Prepare the source data and the target table in Amazon Keyspaces
• Step 5: Write and read Amazon Keyspaces data using the Apache Cassandra Spark Connector
• Troubleshooting common errors when using the Spark Cassandra Connector with Amazon Keyspaces
Prerequisites for establishing connections to Amazon Keyspaces with the Spark Cassandra Connector

Before you connect to Amazon Keyspaces with the Spark Cassandra Connector, you need to make sure that you've installed the following. The compatibility of Amazon Keyspaces with the Spark Cassandra Connector has been tested with the following recommended versions:

• Java version 8
• Scala 2.12
• Spark 3.4
• Cassandra Connector 2.5 and higher
• Cassandra driver 4.12

1. To install Scala, follow the instructions at https://www.scala-lang.org/download/scala2.html.
2. To install Spark 3.4.1, follow this example.

curl -o spark-3.4.1-bin-hadoop3.tgz -k https://dlcdn.apache.org/spark/spark-3.4.1/spark-3.4.1-bin-hadoop3.tgz

# now to untar
tar -zxvf spark-3.4.1-bin-hadoop3.tgz

# set this variable.
export SPARK_HOME=$PWD/spark-3.4.1-bin-hadoop3

Step 1: Configure Amazon Keyspaces for integration with the Apache Cassandra Spark Connector

In this step, you confirm that the partitioner for your account is compatible with the Apache Spark Connector and set up the required IAM permissions. The following best practices help you to provision sufficient read/write capacity for the table.

1. Confirm that the Murmur3Partitioner partitioner is the default partitioner for your account. This partitioner is compatible with the Spark Cassandra Connector. For more information on partitioners and how to change them, see the section called “Working with partitioners”. (A programmatic check is sketched after this list.)

2. Set up your IAM permissions for Amazon Keyspaces, using interface VPC endpoints, with Apache Spark.

• Assign read/write access to the user table and read access to the system tables as shown in the IAM policy example listed below.
• Populating the system.peers table with your available interface VPC endpoints is required for clients accessing Amazon Keyspaces with Spark over VPC endpoints.

{
    "Version":"2012-10-17",
    "Statement":[
        {
            "Effect":"Allow",
            "Action":[
                "cassandra:Select",
                "cassandra:Modify"
            ],
            "Resource":[
                "arn:aws:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/table/mytable",
                "arn:aws:cassandra:us-east-1:111122223333:/keyspace/system*"
            ]
        },
        {
            "Sid":"ListVPCEndpoints",
            "Effect":"Allow",
            "Action":[
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeVpcEndpoints"
            ],
            "Resource":"*"
        }
    ]
}

3. Consider the following best practices to configure sufficient read/write throughput capacity for your Amazon Keyspaces table to support the traffic from the Spark Cassandra Connector.

• Start using on-demand capacity to help you test the scenario.
• To optimize the cost of table throughput for production environments, use a rate limiter for traffic from the connector, and configure your table to use provisioned capacity with automatic scaling. For more information, see the section called “Manage throughput capacity with auto scaling”.
• You can use a fixed rate limiter that comes with the Cassandra driver. There are some rate limiters tailored to Amazon Keyspaces in the AWS samples repo.
• For more information about capacity management, see the section called “Configure read/write capacity modes”.
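To verify the partitioner from application code rather than the console, you can query system.local. This is a minimal sketch assuming a Java driver 4.x session configured as in the later steps of this tutorial; the class name is illustrative.

import com.datastax.oss.driver.api.core.CqlSession;

public class PartitionerCheck {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder().build()) {
            // Amazon Keyspaces reports the account-level partitioner here.
            String partitioner = session
                .execute("SELECT partitioner FROM system.local")
                .one()
                .getString("partitioner");
            System.out.println("Partitioner: " + partitioner);
        }
    }
}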
Step 2: Configure the Apache Cassandra Spark Connector

Apache Spark is a general-purpose compute platform that you can configure in different ways. To configure Spark and the Spark Cassandra Connector for integration with Amazon Keyspaces, we recommend that you start with the minimum configuration settings described in the following section, and then increase them later as appropriate for your workload. (A sketch that consolidates the recommended settings in code follows this list.)

• Create Spark partition sizes smaller than 8 MBs.

In Spark, partitions represent an atomic chunk of data that can be run in parallel. When you are writing data to Amazon Keyspaces with the Spark Cassandra Connector, the smaller the Spark partition, the fewer records the task is going to write. If a Spark task encounters multiple errors, it fails after the designated number of retries has been exhausted. To avoid replaying large tasks and reprocessing a lot of data, keep the size of the Spark partition small.

• Use a low concurrent number of writes per executor with a large number of retries.

Amazon Keyspaces returns insufficient capacity errors back to Cassandra drivers as operation timeouts. You can't address timeouts caused by insufficient capacity by changing the configured timeout duration, because the Spark Cassandra Connector attempts to retry requests transparently using the MultipleRetryPolicy. To ensure that retries don't overwhelm the driver's
connection pool, use a low concurrent number of writes per executor with a large number of retries. The following code snippet is an example of this.

spark.cassandra.query.retry.count = 500
spark.cassandra.output.concurrent.writes = 3

• Break down the total throughput and distribute it across multiple Cassandra sessions.

• The Cassandra Spark Connector creates one session for each Spark executor. Think about this session as the unit of scale to determine the required throughput and the number of connections required.
• When defining the number of cores per executor and the number of cores per task, start low and increase as needed.
• Set Spark task failures to allow processing in the event of transient errors. After you become familiar with your application's traffic characteristics and requirements, we recommend setting spark.task.maxFailures to a bounded value.
• For example, the following configuration can handle two concurrent tasks per executor, per session:

spark.executor.instances = configurable -> number of executors for the session.
spark.executor.cores = 2 -> Number of cores per executor.
spark.task.cpus = 1 -> Number of cores per task.
spark.task.maxFailures = -1

• Turn off batching.

• We recommend that you turn off batching to improve random access patterns. The following code snippet is an example of this.

spark.cassandra.output.batch.size.rows = 1 (Default = None)
spark.cassandra.output.batch.grouping.key = none (Default = Partition)
spark.cassandra.output.batch.grouping.buffer.size = 100 (Default = 1000)

• Set SPARK_LOCAL_DIRS to a fast, local disk with enough space.

• By default, Spark saves map output files and resilient distributed datasets (RDDs) to a /tmp folder. Depending on your Spark host's configuration, this can result in "no space left on the device" style errors.
• To set the SPARK_LOCAL_DIRS environment variable to a directory called /example/spark-dir, you can use the following command.

export SPARK_LOCAL_DIRS=/example/spark-dir
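One way to apply the connector settings from this step is when you construct the Spark session in code, instead of passing each value with --conf on spark-submit. The following Java sketch simply consolidates the starting values recommended above; the application name is a placeholder, and the values are starting points rather than tuned settings.

import org.apache.spark.sql.SparkSession;

public class KeyspacesSparkConfig {
    public static SparkSession build() {
        return SparkSession.builder()
            .appName("keyspaces-spark-example") // placeholder name
            // Retry and concurrency settings recommended in this step.
            .config("spark.cassandra.query.retry.count", "500")
            .config("spark.cassandra.output.concurrent.writes", "3")
            // Batching turned off to improve random access patterns.
            .config("spark.cassandra.output.batch.size.rows", "1")
            .config("spark.cassandra.output.batch.grouping.key", "none")
            .config("spark.cassandra.output.batch.grouping.buffer.size", "100")
            .getOrCreate();
    }
}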
Step 3: Create the application configuration file

To use the open-source Spark Cassandra Connector with Amazon Keyspaces, you need to provide an application configuration file that contains the settings required to connect with the DataStax Java driver. You can use either service-specific credentials or the SigV4 plugin to connect.

If you haven't already done so, you need to convert the Starfield digital certificate into a trustStore file. You can follow the detailed steps at the section called “Before you begin” from the Java driver connection tutorial. Take note of the trustStore file path and password, because you need this information when you create the application config file.

Connect with SigV4 authentication

This section shows you an example application.conf file that you can use when connecting with AWS credentials and the SigV4 plugin. If you haven't already done so, you need to generate your IAM access keys (an access key ID and a secret access key) and save them in your AWS config file or as environment variables. For detailed instructions, see the section called “Required credentials for AWS authentication”.

In the following example, replace the file path to your trustStore file, and replace the password.

datastax-java-driver {
    basic.contact-points = ["cassandra.us-east-1.amazonaws.com:9142"]
    basic.load-balancing-policy {
        class = DefaultLoadBalancingPolicy
        local-datacenter = us-east-1
        slow-replica-avoidance = false
    }
    basic.request {
        consistency = LOCAL_QUORUM
    }
    advanced {
        auth-provider = {
            class = software.aws.mcs.auth.SigV4AuthProvider
            aws-region = us-east-1
        }
        ssl-engine-factory {
            class = DefaultSslEngineFactory
            truststore-path = "path_to_file/cassandra_truststore.jks"
            truststore-password = "password"
            hostname-validation = false
        }
    }
    advanced.connection.pool.local.size = 3
}

Update and save this configuration file as /home/user1/application.conf. The following examples use this path.

Connect with service-specific credentials

This section shows you an example application.conf file that you can use when connecting with service-specific credentials. If you haven't already done so, you need to generate service-specific credentials for Amazon Keyspaces. For detailed instructions, see the section called “Create service-specific credentials”.

In the following example, replace username and password with your own credentials. Also, replace the file path to your trustStore file, and replace the password.

datastax-java-driver {
    basic.contact-points = ["cassandra.us-east-1.amazonaws.com:9142"]
    basic.load-balancing-policy {
        class = DefaultLoadBalancingPolicy
        local-datacenter = us-east-1
    }
    basic.request {
        consistency = LOCAL_QUORUM
    }
    advanced {
        auth-provider = {
            class = PlainTextAuthProvider
            username = "username"
            password = "password"
            aws-region = "us-east-1"
        }
        ssl-engine-factory {
            class = DefaultSslEngineFactory
            truststore-path = "path_to_file/cassandra_truststore.jks"
            truststore-password = "password"
            hostname-validation = false
        }
        metadata = {
            schema {
                token-map.enabled = true
            }
        }
    }
}

Update and save this configuration file as /home/user1/application.conf to use with the code example.
Connect with a fixed rate

To force a fixed rate per Spark executor, you can define a request throttler. This request throttler limits the rate of requests per second. The Spark Cassandra Connector deploys a Cassandra session per executor. Using the following formula can help you achieve consistent throughput against a table.

max-requests-per-second * numberOfExecutors = total throughput against a table

You can add this example to the application config file that you created earlier.

datastax-java-driver {
    advanced.throttler {
        class = RateLimitingRequestThrottler
        max-requests-per-second = 3000
        max-queue-size = 30000
        drain-interval = 1 millisecond
    }
}
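As a quick illustration of this formula (not part of the tutorial code), the following helper computes the per-executor max-requests-per-second needed to stay at or under a table-level target.

public final class ThrottleCalculator {
    // tableTargetRps: total requests per second the table should receive.
    // executors: number of Spark executors (one Cassandra session each).
    public static int maxRequestsPerSecondPerExecutor(int tableTargetRps, int executors) {
        return tableTargetRps / executors;
    }

    public static void main(String[] args) {
        // 12,000 requests per second across 4 executors -> 3,000 per executor,
        // the value used in the throttler example above.
        System.out.println(maxRequestsPerSecondPerExecutor(12_000, 4));
    }
}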
Step 4: Prepare the source data and the target table in Amazon Keyspaces

In this step, you create a source file with sample data and an Amazon Keyspaces table.

1. Create the source file. You can choose one of the following options:

• For this tutorial, you use a comma-separated values (CSV) file with the name keyspaces_sample_table.csv as the source file for the data migration. The provided sample file contains a few rows of data for a table with the name book_awards.
• Download the sample CSV file (keyspaces_sample_table.csv) that is contained in the following archive file samplemigration.zip. Unzip the archive and take note of the path to keyspaces_sample_table.csv.
• If you want to follow along with your own CSV file to write data to Amazon Keyspaces, make sure that the data is randomized. Data that is read directly from a database or exported to flat files is typically ordered by the partition and primary key. Importing ordered data to Amazon Keyspaces can cause it to be written to smaller segments of Amazon Keyspaces partitions, which results in an uneven traffic distribution. This can lead to slower performance and higher error rates. In contrast, randomizing data helps to take advantage of the built-in load balancing capabilities of Amazon Keyspaces by distributing traffic across partitions more evenly. There are various tools that you can use for randomizing data. For an example that uses the open-source tool Shuf, see the section called “Step 2: Prepare the data” in the data migration tutorial. The following is an example that shows how to shuffle data as a DataFrame.

import org.apache.spark.sql.functions.rand

val shuffledDF = dataframe.orderBy(rand())

2. Create the target keyspace and table in Amazon Keyspaces.

a. Connect to Amazon Keyspaces using cqlsh, and replace the service endpoint, user name, and password in the following example with your own values.

cqlsh cassandra.us-east-2.amazonaws.com 9142 -u "111122223333" -p "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" --ssl

b. Create a new keyspace with the name catalog as shown in the following example.

CREATE KEYSPACE catalog WITH REPLICATION = {'class': 'SingleRegionStrategy'};

c. After the new keyspace has a status of available, use the following code to create the target table book_awards. To learn more about asynchronous resource creation and how to check if a resource is available, see the section called “Check keyspace creation status”.

CREATE TABLE catalog.book_awards (
    year int,
    award text,
    rank int,
    category text,
    book_title text,
    author text,
    publisher text,
    PRIMARY KEY ((year, award), category, rank)
);

Step 5: Write and read Amazon Keyspaces data using the Apache Cassandra Spark Connector

In this step, you start by loading the data from the sample file into a DataFrame with the Spark Cassandra Connector. Next, you write the data from the DataFrame into your Amazon Keyspaces table. You can also use this part independently, for example, to migrate data into an Amazon Keyspaces table. Finally, you read the data from your table into a DataFrame using the Spark Cassandra Connector. You can also use this part independently, for example, to read data from an Amazon Keyspaces table to perform data analytics with Apache Spark.
1. Start the Spark Shell as shown in the following example. Note that this example is using SigV4 authentication.

./spark-shell --files application.conf --conf spark.cassandra.connection.config.profile.path=application.conf --packages software.aws.mcs:aws-sigv4-auth-cassandra-java-driver-plugin:4.0.5,com.datastax.spark:spark-cassandra-connector_2.12:3.1.0 --conf spark.sql.extensions=com.datastax.spark.connector.CassandraSparkExtensions

2. Import the Spark Cassandra Connector with the following code.

import org.apache.spark.sql.cassandra._

3. To read data from the CSV file and store it in a DataFrame, you can use the following code example.

var df = spark.read.option("header","true").option("inferSchema","true").csv("keyspaces_sample_table.csv")

You can display the result with the following command.

scala> df.show();

The output should look similar to this.

+----------------+----+-----------+----+------------------+--------------------+-------------+
|           award|year|   category|rank|            author|          book_title|    publisher|
+----------------+----+-----------+----+------------------+--------------------+-------------+
|Kwesi Manu Prize|2020|    Fiction|   1|        Akua Mansa|   Where did you go?|SomePublisher|
|Kwesi Manu Prize|2020|    Fiction|   2|       John Stiles|           Yesterday|Example Books|
|Kwesi Manu Prize|2020|    Fiction|   3|        Nikki Wolf|Moving to the Cha...| AnyPublisher|
|            Wolf|2020|Non-Fiction|   1|       Wang Xiulan|    History of Ideas|Example Books|
|            Wolf|2020|Non-Fiction|   2|Ana Carolina Silva|       Science Today|SomePublisher|
|            Wolf|2020|Non-Fiction|   3| Shirley Rodriguez|The Future of Sea...| AnyPublisher|
|     Richard Roe|2020|    Fiction|   1| Alejandro Rosalez|         Long Summer|SomePublisher|
|     Richard Roe|2020|    Fiction|   2|       Arnav Desai|             The Key|Example Books|
|     Richard Roe|2020|    Fiction|   3|     Mateo Jackson|    Inside the Whale| AnyPublisher|
+----------------+----+-----------+----+------------------+--------------------+-------------+

You can confirm the schema of the data in the DataFrame as shown in the following example.

scala> df.printSchema

The output should look like this.

root
|-- award: string (nullable = true)
|-- year: integer (nullable = true)
|-- category: string (nullable = true)
|-- rank: integer (nullable = true)
|-- author: string (nullable = true)
|-- book_title: string (nullable = true)
|-- publisher: string (nullable = true)

4. Use the following command to write the data in the DataFrame to the Amazon Keyspaces table.

df.write.cassandraFormat("book_awards", "catalog").mode("APPEND").save()

5. To confirm that the data was saved, you can read it back to a dataframe, as shown in the following example.

var newDf = spark.read.cassandraFormat("book_awards", "catalog").load()

Then you can show the data that is now contained in the dataframe.

scala> newDf.show()

The output of that command should look like this.
+--------------------+------------------+----------------+-----------+-------------+----+----+
|          book_title|            author|           award|   category|    publisher|rank|year|
+--------------------+------------------+----------------+-----------+-------------+----+----+
|         Long Summer| Alejandro Rosalez|     Richard Roe|    Fiction|SomePublisher|   1|2020|
|    History of Ideas|       Wang Xiulan|            Wolf|Non-Fiction|Example Books|   1|2020|
|   Where did you go?|        Akua Mansa|Kwesi Manu Prize|    Fiction|SomePublisher|   1|2020|
|    Inside the Whale|     Mateo Jackson|     Richard Roe|    Fiction| AnyPublisher|   3|2020|
|           Yesterday|       John Stiles|Kwesi Manu Prize|    Fiction|Example Books|   2|2020|
|Moving to the Cha...|        Nikki Wolf|Kwesi Manu Prize|    Fiction| AnyPublisher|   3|2020|
|The Future of Sea...| Shirley Rodriguez|            Wolf|Non-Fiction| AnyPublisher|   3|2020|
|       Science Today|Ana Carolina Silva|            Wolf|Non-Fiction|SomePublisher|   2|2020|
|             The Key|       Arnav Desai|     Richard Roe|    Fiction|Example Books|   2|2020|
+--------------------+------------------+----------------+-----------+-------------+----+----+

Troubleshooting common errors when using the Spark Cassandra Connector with Amazon Keyspaces

If you're using Amazon Virtual Private Cloud and you connect to Amazon Keyspaces, the most common errors experienced when using the Spark connector are caused by the following configuration issues.

• The IAM user or role used in the VPC lacks the required permissions to access the system.peers table in Amazon Keyspaces. For more information, see the section called “Populating system.peers table entries with interface VPC endpoint information”.
• The IAM user or role lacks the required read/write permissions to the user table and read access to the system tables in Amazon Keyspaces. For more information, see the section called “Step 1: Configure Amazon Keyspaces”.
• The Java driver configuration doesn't disable hostname verification when creating the SSL/TLS connection. For examples, see the section called “Step 2: Configure the driver”.

For detailed connection troubleshooting steps, see the section called “VPC endpoint connection errors”. In addition, you can use Amazon CloudWatch metrics to help you troubleshoot issues with your Spark Cassandra Connector configuration in Amazon Keyspaces. To learn more about using Amazon Keyspaces with CloudWatch, see the section called “Monitoring with CloudWatch”. The following section describes the most useful metrics to observe when you're using the Spark Cassandra Connector.

PerConnectionRequestRateExceeded
Amazon Keyspaces has a quota of 3,000 requests per second, per connection. Each Spark executor establishes a connection with Amazon Keyspaces. Running multiple retries can exhaust your per-connection request rate quota. If you exceed this quota, Amazon Keyspaces emits a PerConnectionRequestRateExceeded metric in CloudWatch.

If you see PerConnectionRequestRateExceeded events present along with other system or user errors, it's likely that Spark is running multiple retries beyond the allotted number of requests per connection. If you see PerConnectionRequestRateExceeded events without other errors, then you might need to increase the number of connections in your driver settings to allow for more throughput, or you might need to increase the number of executors in your Spark job.

StoragePartitionThroughputCapacityExceeded

Amazon Keyspaces has a quota of 1,000 WCUs or WRUs per second and 3,000 RCUs or RRUs per second, per partition. If you're seeing StoragePartitionThroughputCapacityExceeded CloudWatch events, it could indicate that data is not randomized on load. For examples of how to shuffle data, see the section called “Step 4: Prepare the source data and the target table”.

Common errors and warnings

If you're using Amazon Virtual Private Cloud and you connect to Amazon Keyspaces, the Cassandra driver might issue a warning message about the control node itself in the system.peers table. For more information, see the section called “Common errors and warnings”. You can safely ignore this warning.

Tutorial: Connect from a containerized application hosted on Amazon Elastic Kubernetes Service

This tutorial walks you through the steps required to set up an Amazon Elastic Kubernetes Service (Amazon EKS) cluster to host a containerized application that connects to Amazon Keyspaces using SigV4 authentication. Amazon EKS is a managed service that eliminates the need to install, operate, and maintain your own Kubernetes control plane. Kubernetes is an open-source system that automates the management, scaling, and deployment of containerized applications.

The tutorial provides step-by-step guidance to configure, build, and deploy a containerized Java application to Amazon EKS. In the last step you run the application to write data to an Amazon Keyspaces table.

Topics
• Prerequisites for connecting from Amazon EKS to Amazon Keyspaces
• Step 1: Configure the Amazon EKS cluster and set up IAM permissions
• Step 2: Configure the application
• Step 3: Create the application image and upload the Docker file to your Amazon ECR repository
• Step 4: Deploy the application to Amazon EKS and write data to your table
• Step 5: (Optional) Cleanup

Prerequisites for connecting from Amazon EKS to Amazon Keyspaces

Create the following AWS resources before you begin the tutorial.

1. Before you start this tutorial, follow the AWS setup instructions in Accessing Amazon Keyspaces (for Apache Cassandra). These steps include signing up for AWS and creating an AWS Identity and Access Management (IAM) principal with access to Amazon Keyspaces.

2. Create an Amazon Keyspaces keyspace with the name aws and a table with the name user that you can write to from the containerized application running in Amazon EKS later in this tutorial. You can do this either with the AWS CLI or using cqlsh.
AWS CLI

aws keyspaces create-keyspace --keyspace-name 'aws'

To confirm that the keyspace was created, you can use the following command.

aws keyspaces list-keyspaces

To create the table, you can use the following command.

aws keyspaces create-table --keyspace-name 'aws' --table-name 'user' --schema-definition 'allColumns=[{name=username,type=text},{name=fname,type=text},{name=last_update_date,type=timestamp},{name=lname,type=text}],partitionKeys=[{name=username}]'

To confirm that your table was created, you can use the following command.

aws keyspaces list-tables --keyspace-name 'aws'

For more information, see create keyspace and create table in the AWS CLI Command Reference.

cqlsh

CREATE KEYSPACE aws WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'} AND durable_writes = true;

CREATE TABLE aws.user (
    username text PRIMARY KEY,
    fname text,
    last_update_date timestamp,
    lname text
);

To verify that your table was created, you can use the following statement.

SELECT * FROM system_schema.tables WHERE keyspace_name = 'aws';

Your table should be listed in the output of this statement. Note that there can be a delay until the table is created. For more information, see the section called “CREATE TABLE”.
3. Create an Amazon EKS cluster with a Fargate - Linux node type. Fargate is a serverless compute engine that lets you deploy Kubernetes Pods without managing Amazon EC2 instances. To follow this tutorial without having to update the cluster name in all the example commands, create a cluster with the name my-eks-cluster following the instructions at Getting started with Amazon EKS – eksctl in the Amazon EKS User Guide.

When your cluster is created, verify that your nodes and the two default Pods are running and healthy. You can do so with the following command.

kubectl get pods -A -o wide

You should see something similar to this output.

NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE   IP          NODE                                                NOMINATED NODE   READINESS GATES
kube-system   coredns-1234567890-abcde   1/1     Running   0          18m   192.0.2.0   fargate-ip-192-0-2-0.region-code.compute.internal   <none>           <none>
kube-system   coredns-1234567890-12345   1/1     Running   0          18m   192.0.2.1   fargate-ip-192-0-2-1.region-code.compute.internal   <none>           <none>

4. Install Docker. For instructions on how to install Docker on an Amazon EC2 instance, see Install Docker in the Amazon Elastic Container Registry User Guide. Docker is available for many different operating systems, including most modern Linux distributions, like Ubuntu, and even macOS and Windows. For more information about how to install Docker on your particular operating system, go to the Docker installation guide.

5. Create an Amazon ECR repository. Amazon ECR is an AWS managed container image registry service that you can use with your preferred CLI to push, pull, and manage Docker images. For more information about Amazon ECR repositories, see the Amazon Elastic Container Registry User Guide. You can use the following command to create a repository with the name my-ecr-repository.

aws ecr create-repository --repository-name my-ecr-repository

After completing the prerequisite steps, proceed to the section called “Step 1: Configure the Amazon EKS cluster”.

Step 1: Configure the Amazon EKS cluster and set up IAM permissions

Configure the Amazon EKS cluster and create the IAM resources that are required to allow an Amazon EKS service account to connect to your Amazon Keyspaces table

1. Create an OpenID Connect (OIDC) provider for the Amazon EKS cluster. This is needed to use IAM roles for service accounts. For more information about OIDC providers and how to create them, see Creating an IAM OIDC provider for your cluster in the Amazon EKS User Guide.

a. Create an IAM OIDC identity provider for your cluster with the following command. This example assumes that your cluster name is my-eks-cluster. If you have a cluster with a different name, remember to update the name in all future commands.

eksctl utils associate-iam-oidc-provider --cluster my-eks-cluster --approve

b. Confirm that the OIDC identity provider has been registered with IAM with the following command.

aws iam list-open-id-connect-providers --region aws-region

The output should look similar to this. Take note of the OIDC provider's Amazon Resource Name (ARN); you need it in the next step when you create a trust policy for the service account.

{
    "OpenIDConnectProviderList": [
        ..
        {
            "Arn": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.aws-region.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
        }
    ]
}

2. Create a service account for the Amazon EKS cluster.
Service accounts provide an identity for processes that run in a Pod. A Pod is the smallest and simplest Kubernetes object that you can use to deploy a containerized application. Next, create an IAM role that the service account can assume to obtain permissions to resources. You can access any AWS service from a Pod that has been configured to use a service account that can assume an IAM role with access permissions to that service.

a. Create a new namespace for the service account. A namespace helps to isolate cluster resources created for this tutorial. You can create a new namespace using the following command.

kubectl create namespace my-eks-namespace

b. To use a custom namespace, you have to associate it with a Fargate profile. The following code is an example of this.

eksctl create fargateprofile \
    --cluster my-eks-cluster \
    --name my-fargate-profile \
    --namespace my-eks-namespace \
    --labels *=*

c. Create a service account with the name my-eks-serviceaccount in the namespace my-eks-namespace for your Amazon EKS cluster by using the following command.
cat >my-serviceaccount.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-eks-serviceaccount
  namespace: my-eks-namespace
EOF

kubectl apply -f my-serviceaccount.yaml

d. Run the following command to create a trust policy file that instructs the IAM role to trust your service account. This trust relationship is required before a principal can assume a role. You need to make the following edits to the file:

• For the Principal, enter the ARN that IAM returned to the list-open-id-connect-providers command. The ARN contains your account number and Region.
• In the condition statement, replace the AWS Region and the OIDC id.
• Confirm that the service account name and namespace are correct.

You need to attach the trust policy file in the next step when you create the IAM role.

cat >trust-relationship.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.aws-region.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.aws-region.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:my-eks-namespace:my-eks-serviceaccount",
                    "oidc.eks.aws-region.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
                }
            }
        }
    ]
}
EOF

Optional: You can also add multiple entries in the StringEquals or StringLike conditions to allow multiple service accounts or namespaces to assume the role. To allow your service account to assume an IAM role in a different AWS account, see Cross-account IAM permissions in the Amazon EKS User Guide.

3. Create an IAM role with the name my-iam-role for the Amazon EKS service account to assume. Attach the trust policy file created in the last step to the role. The trust policy specifies the service account and OIDC provider that the IAM role can trust.

aws iam create-role --role-name my-iam-role --assume-role-policy-document file://trust-relationship.json --description "EKS service account role"

4. Assign the IAM role permissions to Amazon Keyspaces by attaching an access policy.

a. Attach an access policy to define the actions the IAM role can perform on specific Amazon Keyspaces resources. For this tutorial we use the AWS managed policy AmazonKeyspacesFullAccess, because our application is going to write data to your Amazon Keyspaces table. As a best practice, however, we recommend that you create custom access policies that follow the principle of least privilege. For more information, see the section called “How Amazon Keyspaces works with IAM”.

aws iam attach-role-policy --role-name my-iam-role --policy-arn arn:aws:iam::aws:policy/AmazonKeyspacesFullAccess

Confirm that the policy was successfully attached to the IAM role with the following statement.

aws iam list-attached-role-policies --role-name my-iam-role

The output should look like this.

{
    "AttachedPolicies": [
        {
            "PolicyName": "AmazonKeyspacesFullAccess",
            "PolicyArn": "arn:aws:iam::aws:policy/AmazonKeyspacesFullAccess"
        }
    ]
}

b. Annotate the service account with the Amazon Resource Name (ARN) of the IAM role it can assume. Make sure to update the role ARN with your account ID.
kubectl annotate serviceaccount -n my-eks-namespace my-eks-serviceaccount eks.amazonaws.com/role-arn=arn:aws:iam::111122223333:role/my-iam-role

5. Confirm that the IAM role and the service account are correctly configured.

a. Confirm that the IAM role's trust policy is correctly configured with the following statement.

aws iam get-role --role-name my-iam-role --query Role.AssumeRolePolicyDocument

The output should look similar to this.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.aws-region.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.aws-region.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com",
          "oidc.eks.aws-region.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:my-eks-namespace:my-eks-serviceaccount"
        }
      }
    }
  ]
}

b. Confirm that the Amazon EKS service account is annotated with the IAM role.

kubectl describe serviceaccount my-eks-serviceaccount -n my-eks-namespace

The output should look similar to this.

Name:                my-eks-serviceaccount
Namespace:           my-eks-namespace
Labels:              <none>
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-iam-role
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              <none>
[...]

After you have created the Amazon EKS service account and the IAM role, and configured the required relationships and permissions, proceed to the section called “Step 2: Configure the application”.

Step 2: Configure the application

In this step, you build the application that connects to Amazon Keyspaces using the SigV4 plugin. You can view and download the example Java application from the Amazon Keyspaces example code repo on GitHub. Or you can follow along using your own application, making sure to complete all configuration steps.

Configure your application and add the required dependencies.

1. You can download the example Java application by cloning the GitHub repository using the following command.

git clone https://github.com/aws-samples/amazon-keyspaces-examples.git

2. After downloading the GitHub repo, unzip the downloaded file and navigate to the application.conf file in the resources directory.

a. Application configuration

In this step, you configure the SigV4 authentication plugin. You can use the following example in your application. If you haven't already done so, you need to generate your IAM access keys (an access key ID and a secret access key) and save them in your AWS config file or as environment variables. For detailed instructions,
see the section called “Required credentials for AWS authentication”.

Update the AWS Region and the service endpoint for Amazon Keyspaces as needed. For more service endpoints, see the section called “Service endpoints”. Replace the truststore location, truststore name, and truststore password with your own.

datastax-java-driver {
  basic.contact-points = ["cassandra.aws-region.amazonaws.com:9142"]
  basic.load-balancing-policy.local-datacenter = "aws-region"
  advanced.auth-provider {
    class = software.aws.mcs.auth.SigV4AuthProvider
    aws-region = "aws-region"
  }
  advanced.ssl-engine-factory {
    class = DefaultSslEngineFactory
    truststore-path = "truststore_location/truststore_name.jks"
    truststore-password = "truststore_password"
  }
}

b. Add the STS module dependency. This adds the ability to use a WebIdentityTokenCredentialsProvider that returns the AWS credentials the application needs to provide so that the service account can assume the IAM role. You can do this based on the following example.

<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-sts</artifactId>
  <version>1.11.717</version>
</dependency>

c. Add the SigV4 dependency. This package implements the SigV4 authentication plugin that is needed to authenticate to Amazon Keyspaces.

<dependency>
  <groupId>software.aws.mcs</groupId>
  <artifactId>aws-sigv4-auth-cassandra-java-driver-plugin</artifactId>
  <version>4.0.3</version>
</dependency>

3. Add a logging dependency. Without logs, troubleshooting connection issues is nearly impossible. In this tutorial, we use slf4j as the logging framework and logback.xml to store the log output. We set the logging level to debug to establish the connection. You can use the following example to add the dependency.

<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-api</artifactId>
  <version>2.0.5</version>
</dependency>

You can use the following code snippet to configure the logging.

<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="debug">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>

Note
The debug level is needed to investigate connection failures. After you have successfully connected to Amazon Keyspaces from your application, you can change the logging level to info or warning as needed.

Step 3: Create the application image and upload the Docker file to your Amazon ECR repository

In this step, you compile the example application, build a Docker image, and push the image to your Amazon ECR repository.
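The example repository already contains a Dockerfile you can adapt. If you follow along with your own application instead, a minimal sketch like the following shows the general shape — the base image and the JAR file name are assumptions for illustration, so adjust them to match your own build output.

FROM amazoncorretto:17
# Copy the application JAR produced by the Maven build (file name is hypothetical).
COPY target/my-keyspaces-app-jar-with-dependencies.jar /app/app.jar
# Copy the driver configuration and keep it next to the application.
COPY src/main/resources/application.conf /app/application.conf
WORKDIR /app
CMD ["java", "-jar", "app.jar"]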
Build your application, build a Docker image, and push it to Amazon Elastic Container Registry

1. Set environment variables for the build that define your AWS Region. Replace the Regions in the examples with your own.

export CASSANDRA_HOST=cassandra.aws-region.amazonaws.com:9142
export CASSANDRA_DC=aws-region

2. Compile your application with Apache Maven version 3.6.3 or higher using the following command.

mvn clean install

This creates a JAR file with all dependencies included in the target directory.

3. Retrieve your ECR repository URI that's needed for the next step with the following command. Make sure to update the Region to the one you've been using.

aws ecr describe-repositories --region aws-region

The output should look like the following example.

{
  "repositories": [
    {
      "repositoryArn": "arn:aws:ecr:aws-region:111122223333:repository/my-ecr-repository",
      "registryId": "111122223333",
      "repositoryName": "my-ecr-repository",
      "repositoryUri": "111122223333.dkr.ecr.aws-region.amazonaws.com/my-ecr-repository",
      "createdAt": "2023-11-02T03:46:34+00:00",
      "imageTagMutability": "MUTABLE",
      "imageScanningConfiguration": {
        "scanOnPush": false
      },
      "encryptionConfiguration": {
        "encryptionType": "AES256"
      }
    }
  ]
}

4. From the application's root directory, build the Docker image using the repository URI from the last step. Modify the Dockerfile as needed. In the build command, make sure to replace your account ID and set the AWS Region to the Region where the Amazon ECR repository my-ecr-repository is located.

docker build -t 111122223333.dkr.ecr.aws-region.amazonaws.com/my-ecr-repository:latest .

5. Retrieve an authentication token to push the Docker image to Amazon ECR. You can do so with the following command.

aws ecr get-login-password --region aws-region | docker login --username AWS --password-stdin 111122223333.dkr.ecr.aws-region.amazonaws.com

6. First, check for existing images in your Amazon ECR repository. You can use the following command.

aws ecr describe-images --repository-name my-ecr-repository --region aws-region

Then, push the Docker image to the repo. You can use the following command.

docker push 111122223333.dkr.ecr.aws-region.amazonaws.com/my-ecr-repository:latest

Step 4: Deploy the application to Amazon EKS and write data to your table

In this step of the tutorial, you configure the Amazon EKS deployment for your application, and confirm that the application is running and can connect to Amazon Keyspaces. To deploy an application to Amazon EKS, you need to configure all relevant settings in a file called deployment.yaml. This file is then used
by Amazon EKS to deploy the application. The metadata in the file should contain the following information:

• Application name – the name of the application. For this tutorial, we use my-keyspaces-app.
• Kubernetes namespace – the namespace of the Amazon EKS cluster. For this tutorial, we use my-eks-namespace.
• Amazon EKS service account name – the name of the Amazon EKS service account. For this tutorial, we use my-eks-serviceaccount.
• Image name – the name of the application image. For this tutorial, we use my-keyspaces-app.
• Image URI – the Docker image URI from Amazon ECR.
• AWS account ID – your AWS account ID.
• IAM role ARN – the ARN of the IAM role created for the service account to assume. For this tutorial, we use my-iam-role.
• AWS Region of the Amazon EKS cluster – the AWS Region you created your Amazon EKS cluster in.

In this step, you deploy and run the application that connects to Amazon Keyspaces and writes data to the table.

1. Configure the deployment.yaml file. You need to replace the following values:

• name
• namespace
• serviceAccountName
• image
• AWS_ROLE_ARN value
• The AWS Region in CASSANDRA_HOST
• AWS_REGION

You can use the following file as an example.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-keyspaces-app
  namespace: my-eks-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-keyspaces-app
  template:
    metadata:
      labels:
        app: my-keyspaces-app
    spec:
      serviceAccountName: my-eks-serviceaccount
      containers:
      - name: my-keyspaces-app
        image: 111122223333.dkr.ecr.aws-region.amazonaws.com/my-ecr-repository:latest
        ports:
        - containerPort: 8080
        env:
        - name: CASSANDRA_HOST
          value: "cassandra.aws-region.amazonaws.com:9142"
        - name: CASSANDRA_DC
          value: "aws-region"
        - name: AWS_WEB_IDENTITY_TOKEN_FILE
          value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
        - name: AWS_ROLE_ARN
          value: "arn:aws:iam::111122223333:role/my-iam-role"
        - name: AWS_REGION
          value: "aws-region"

2. Deploy deployment.yaml.

kubectl apply -f deployment.yaml

The output should look like this.

deployment.apps/my-keyspaces-app created

3. Check the status of the Pod in your namespace of the Amazon EKS cluster.

kubectl get pods -n my-eks-namespace

The output should look similar to this example.

NAME                                READY   STATUS    RESTARTS   AGE
my-keyspaces-app-123abcde4f-g5hij   1/1     Running   0          75s

For more details, you can use the following command.
kubectl describe pod my-keyspaces-app-123abcde4f-g5hij -n my-eks-namespace

Name:                 my-keyspaces-app-123abcde4f-g5hij
Namespace:            my-eks-namespace
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      my-eks-serviceaccount
Node:                 fargate-ip-192-168-102-209.ec2.internal/192.168.102.209
Start Time:           Thu, 23 Nov 2023 12:15:43 +0000
Labels:               app=my-keyspaces-app
                      eks.amazonaws.com/fargate-profile=my-fargate-profile
                      pod-template-hash=6c56fccc56
Annotations:          CapacityProvisioned: 0.25vCPU 0.5GB
                      Logging: LoggingDisabled: LOGGING_CONFIGMAP_NOT_FOUND
Status:               Running
IP:                   192.168.102.209
IPs:
  IP:  192.168.102.209
Controlled By:  ReplicaSet/my-keyspaces-app-6c56fccc56
Containers:
  my-keyspaces-app:
    Container ID:   containerd://41ff7811d33ae4bc398755800abcdc132335d51d74f218ba81da0700a6f8c67b
    Image:          111122223333.dkr.ecr.aws-region.amazonaws.com/my_eks_repository:latest
    Image ID:       111122223333.dkr.ecr.aws-region.amazonaws.com/my_eks_repository@sha256:fd3c6430fc5251661efce99741c72c1b4b03061474940200d0524b84a951439c
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 23 Nov 2023 12:15:19 +0000
      Finished:     Thu, 23 Nov 2023 12:16:17 +0000
    Ready:          True
    Restart Count:  1
    Environment:
      CASSANDRA_HOST:               cassandra.aws-region.amazonaws.com:9142
      CASSANDRA_DC:                 aws-region
      AWS_WEB_IDENTITY_TOKEN_FILE:  /var/run/secrets/eks.amazonaws.com/serviceaccount/token
      AWS_ROLE_ARN:                 arn:aws:iam::111122223333:role/my-iam-role
      AWS_REGION:                   aws-region
      AWS_STS_REGIONAL_ENDPOINTS:   regional
    Mounts:
      /var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fssbf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  aws-iam-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  86400
  kube-api-access-fssbf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason           Age                From               Message
  ----     ------           ----               ----               -------
  Warning  LoggingDisabled  2m13s              fargate-scheduler  Disabled logging because aws-logging configmap was not found. configmap "aws-logging" not found
  Normal   Scheduled        89s                fargate-scheduler  Successfully assigned my-eks-namespace/my-keyspaces-app-6c56fccc56-mgs2m to fargate-ip-192-168-102-209.ec2.internal
  Normal   Pulled           75s                kubelet            Successfully pulled image "111122223333.dkr.ecr.aws-region.amazonaws.com/my_eks_repository:latest" in 13.027s (13.027s including waiting)
  Normal   Pulling          54s (x2 over 88s)  kubelet            Pulling image "111122223333.dkr.ecr.aws-region.amazonaws.com/my_eks_repository:latest"
  Normal   Created          54s (x2 over 75s)  kubelet            Created container my-keyspaces-app
  Normal   Pulled           54s                kubelet            Successfully pulled image "111122223333.dkr.ecr.aws-region.amazonaws.com/my_eks_repository:latest" in 222ms (222ms including waiting)
  Normal   Started          53s (x2 over 75s)  kubelet            Started container my-keyspaces-app

4. Check the Pod's logs to confirm that your application is running and can connect to your Amazon Keyspaces table.
You can do so with the following command. Make sure to replace the Pod name with the name from your own output.

kubectl logs -f my-keyspaces-app-123abcde4f-g5hij -n my-eks-namespace

You should
be able to see application log entries confirming the connection to Amazon Keyspaces, as in the following example.

22:47:20.553 [s0-admin-0] DEBUG c.d.o.d.i.c.metadata.MetadataManager - [s0] Adding initial contact points [Node(endPoint=cassandra.aws-region.amazonaws.com/1.222.333.44:9142, hostId=null, hashCode=e750d92)]
22:47:20.562 [s0-admin-1] DEBUG c.d.o.d.i.c.c.ControlConnection - [s0] Initializing with event types [SCHEMA_CHANGE, STATUS_CHANGE, TOPOLOGY_CHANGE]
22:47:20.564 [s0-admin-1] DEBUG c.d.o.d.i.core.context.EventBus - [s0] Registering com.datastax.oss.driver.internal.core.metadata.LoadBalancingPolicyWrapper$$Lambda$812/0x0000000801105e88@769afb95 for class com.datastax.oss.driver.internal.core.metadata.NodeStateEvent
22:47:20.566 [s0-admin-1] DEBUG c.d.o.d.i.c.c.ControlConnection - [s0] Trying to establish a connection to Node(endPoint=cassandra.us-east-1.amazonaws.com/1.222.333.44:9142, hostId=null, hashCode=e750d92)

5. Run the following CQL query on your Amazon Keyspaces table to confirm that one row of data has been written to your table:

SELECT * from aws.user;

You should see the following output:

 fname  | lname | username | last_update_date
--------+-------+----------+-----------------------------
 random | k     | test     | 2023-12-07 13:58:31.57+0000

Step 5: (Optional) Cleanup

Follow these steps to remove all the resources created in this tutorial.

Remove the resources created in this tutorial

1. Delete your deployment. You can use the following command to do so.

kubectl delete deployment my-keyspaces-app -n my-eks-namespace

2. Delete the Amazon EKS cluster and all Pods contained in it. This also deletes related resources like the service account and OIDC identity provider. You can use the following command to do so.

eksctl delete cluster --name my-eks-cluster --region aws-region

3. Delete the IAM role used for the Amazon EKS service account with access permissions to Amazon Keyspaces. First, you have to remove the managed policy that is attached to the role.

aws iam detach-role-policy --role-name my-iam-role --policy-arn arn:aws:iam::aws:policy/AmazonKeyspacesFullAccess

Then you can delete the role using the following command.

aws iam delete-role --role-name my-iam-role

For more information, see Deleting an IAM role (AWS CLI) in the IAM User Guide.

4. Delete the Amazon ECR repository including all the images stored in it. You can do so using the following command.
aws ecr delete-repository \
 --repository-name my-ecr-repository \
 --force \
 --region aws-region

Note that the --force flag is required to delete a repository that contains images. To delete your image first, you can do so using the following command.

aws ecr batch-delete-image \
 --repository-name my-ecr-repository \
 --image-ids imageTag=latest \
 --region aws-region

For more information, see Delete an image in the Amazon Elastic Container Registry User Guide.

5. Delete the Amazon Keyspaces keyspace and table. Deleting the keyspace automatically deletes all tables in that keyspace. You can use one of the following options to do so.

AWS CLI

aws keyspaces delete-keyspace --keyspace-name 'aws'

To confirm that the keyspace was deleted, you can use the following command.

aws keyspaces list-keyspaces

To delete the table first, you can use the following command.

aws keyspaces delete-table --keyspace-name 'aws' --table-name 'user'

To confirm that your table was deleted, you can use the following command.

aws keyspaces list-tables --keyspace-name 'aws'

For more information, see delete keyspace and delete table in the AWS CLI Command Reference.

cqlsh

DROP KEYSPACE IF EXISTS "aws";

To verify that your keyspace was deleted, you can use the following statement.

SELECT * FROM system_schema.keyspaces;

Your keyspace should not be listed in the output of this statement. Note that there can be a delay until the keyspace is deleted. For more information, see the section called “DROP KEYSPACE”.

To delete the table first, you can use the following command.

DROP TABLE "aws"."user";

To confirm that your table was deleted, you can use the following command.

SELECT * FROM system_schema.tables WHERE keyspace_name = 'aws';

Your table should not be listed in the output of this statement. Note that there can be a delay until the table is deleted. For more information, see the section called “DROP TABLE”.

Tutorial: Export an Amazon Keyspaces table to Amazon S3 using AWS Glue

This tutorial shows you how to export an Amazon Keyspaces table to an Amazon S3 bucket using AWS Glue. For this tutorial, many manual steps are automated using shell scripts available in the Amazon Keyspaces GitHub repo. Using this process, you can export Amazon Keyspaces data to
Amazon S3 without having to set up a Spark cluster.

Topics
• Prerequisites for exporting data from Amazon Keyspaces to Amazon S3
• Step 1: Create the Amazon S3 bucket, download the required tools, and configure the environment
• Step 2: Configure the AWS Glue job that exports the Amazon Keyspaces table
• Step 3: Run the AWS Glue job to export the Amazon Keyspaces table to the Amazon S3 bucket from the AWS CLI
• Step 4: (Optional) Create a trigger to schedule the export job
• Step 5: (Optional) Cleanup

Prerequisites for exporting data from Amazon Keyspaces to Amazon S3

Confirm the following prerequisites and create the Amazon Keyspaces resources before you begin with the tutorial.

1. Before you start this tutorial, follow the AWS setup instructions in Accessing Amazon Keyspaces (for Apache Cassandra). These steps include signing up for AWS and creating an AWS Identity and Access Management (IAM) principal with access to Amazon Keyspaces.

2. The scripts in this tutorial use your credentials and default AWS Region stored in a known location. For more information, see the section called “Manage access keys”. The following example shows how to store the required values as environment variables for the default user.

$ export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
$ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
$ export AWS_DEFAULT_REGION=aws-region

3. To run the scripts in this tutorial, you need the following software and tools installed on your machine:

• Java
• Apache Maven
• Git
• AWS CLI

This tutorial was tested with AWS CLI 2, Java 17.0.13, and Apache Maven 3.8.7.

4. You need an Amazon Keyspaces table with sample data to export later in this tutorial. You can use your own Amazon Keyspaces table or create a sample table following the steps in the Getting started tutorial.

a. To install the cqlsh-expansion, follow the steps at the section called “Using the cqlsh-expansion”.

b. Confirm that the Murmur3Partitioner partitioner is the default partitioner for your account. This partitioner is compatible with the Apache Spark Cassandra Connector and with AWS Glue. For more information on partitioners, see the section called “Working with partitioners”. To check and, if needed, change the partitioner of your account, you can use the following statements.

SELECT partitioner FROM system.local;

UPDATE system.local set partitioner='org.apache.cassandra.dht.Murmur3Partitioner' where key='local';

c. To create an Amazon Keyspaces keyspace, follow the steps at the section called “Create a keyspace”.

d. To create the Amazon Keyspaces table, follow the steps at the section called “Create a table”.

e. To load sample data into the table to export to Amazon S3, follow the steps at the section called “Create”. A sample statement is shown following this list.
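For example, if you created the book_awards table from the getting started tutorial, a CQL statement like the following loads one row that the export job can pick up. The values shown are placeholders for illustration, not data from the tutorial.

INSERT INTO catalog.book_awards (award, year, category, rank, book_title, author, publisher)
VALUES ('Wolf Award', 2023, 'Non-Fiction', 1, 'Example Book Title', 'Example Author', 'Example Publisher');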
After completing the prerequisite steps, proceed to the section called “Step 1: Create the Amazon S3 bucket, download tools, and configure the environment”.

Step 1: Create the Amazon S3 bucket, download the required tools, and configure the environment

In this step, you download the external tools and create and configure the AWS resources required to automate the export of an Amazon Keyspaces table to an Amazon S3 bucket using an AWS Glue job. To perform all these tasks in an efficient way, we run a shell script with the name setup-connector.sh available on GitHub. The script setup-connector.sh automates the following steps.

1. Creates an Amazon S3 bucket using AWS CloudFormation. This bucket stores the downloaded JAR and configuration files, as well as the exported table data.
2. Creates an IAM role using AWS CloudFormation. AWS Glue jobs use this role to access Amazon Keyspaces and Amazon S3.
3. Downloads the Apache Spark Cassandra Connector and uploads it to the Amazon S3 bucket.
4. Downloads the SigV4 Authentication plugin and uploads it to the Amazon S3 bucket.
5. Downloads the Apache Spark Extensions and uploads them to the Amazon S3 bucket.
6. Downloads the Keyspaces Retry Policy from GitHub, compiles the code using Maven, and uploads the output to the Amazon S3 bucket.
7. Uploads the keyspaces-application.conf file to the Amazon S3 bucket.

Use the setup-connector.sh shell script to automate the setup and configuration steps.

1. Copy the files from the aws-glue repository on GitHub to your local machine. This directory contains the shell script as well as other required files.

2. Run the shell script setup-connector.sh. You can specify the following three optional parameters.

1. SETUP_STACKNAME – This is the name of the AWS CloudFormation stack used to create the AWS resources.
2. S3_BUCKET_NAME – This is the name of the Amazon S3 bucket.
3. GLUE_SERVICE_ROLE_NAME – This is the name of the IAM service role that AWS Glue uses to run jobs that connect to Amazon Keyspaces and Amazon S3.
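Depending on how you copied the repository, the script may not be marked executable. If running it fails with a permission error, you can make it executable first — a standard shell step, not specific to this tutorial.

chmod +x setup-connector.sh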
You can use the following command to run the shell script and provide the three parameters with the following values.

./setup-connector.sh cfn-setup s3-keyspaces iam-export-role

To confirm that your bucket was created, you can use the following AWS CLI command.

aws s3 ls s3://s3-keyspaces

The output of the command should look like this.

PRE conf/
PRE jars/

To confirm that the IAM role was created and to review the details, you can use the following AWS CLI statement.

aws iam get-role --role-name "iam-export-role"

{
  "Role": {
    "Path": "/",
    "RoleName": "iam-export-role",
    "RoleId": "AKIAIOSFODNN7EXAMPLE",
    "Arn": "arn:aws:iam::111122223333:role/iam-export-role",
    "CreateDate": "2025-01-28T16:09:03+00:00",
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "glue.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    },
    "Description": "AWS Glue service role to import and export data from Amazon Keyspaces",
    "MaxSessionDuration": 3600,
    "RoleLastUsed": {
      "LastUsedDate": "2025-01-29T12:03:54+00:00",
      "Region": "us-east-1"
    }
  }
}

If the AWS CloudFormation stack process fails, you can review the detailed error information about the failed stack in the AWS CloudFormation console.

After the Amazon S3 bucket containing all scripts and tools has been created and the IAM role is configured, proceed to the section called “Step 2: Configure the AWS Glue job”.

Step 2: Configure the AWS Glue job that exports the Amazon Keyspaces table

In the second step of the tutorial, you use the script setup-export.sh available on GitHub to create and configure the AWS Glue job that connects to Amazon Keyspaces using the SigV4 plugin and then exports the specified table to the Amazon S3 bucket created in the previous step. Using the script allows you to export data from Amazon Keyspaces without setting up an Apache Spark cluster.

Create an AWS Glue job to export an Amazon Keyspaces table to an Amazon S3 bucket.

• In this step, you run the setup-export.sh shell script located in the export-to-s3/ directory to use AWS CloudFormation to create and configure the AWS Glue export job.
The script takes the following parameters: PARENT_STACK_NAME, EXPORT_STACK_NAME, KEYSPACE_NAME, TABLE_NAME, S3_URI, FORMAT.

• PARENT_STACK_NAME – The name of the AWS CloudFormation stack created in the previous step.
• EXPORT_STACK_NAME – The name of the AWS CloudFormation stack that creates the AWS Glue export job.
• KEYSPACE_NAME and TABLE_NAME – The fully qualified name of the keyspace and table to be exported. For this tutorial, we use catalog.book_awards, but you can replace this with your own fully qualified table name.
• S3_URI – The optional URI of the Amazon S3 bucket. The default is the Amazon S3 bucket from the parent stack.
• FORMAT – The optional data format. The default value is parquet. For this tutorial, to make data load and transformation easier, we use the default.

You can use the following command as an example.

./setup-export.sh cfn-setup cfn-glue catalog book_awards

To confirm that the job has been created, you can use the following statement.

aws glue list-jobs

The output of the statement should look similar to this.

{
  "JobNames": [
    "AmazonKeyspacesExportToS3-cfn-setup-cfn-glue"
  ]
}

To see the details of the job, you can use the following command.

aws glue get-job --job-name AmazonKeyspacesExportToS3-cfn-setup-cfn-glue

The output of the command shows all the details of the job. This includes the default arguments that you can override when running the job.

{
  "Job": {
    "Name": "AmazonKeyspacesExportToS3-cfn-setup-cfn-glue",
    "JobMode": "SCRIPT",
    "JobRunQueuingEnabled": false,
    "Description": "export to s3",
    "Role": "iam-export-role",
    "CreatedOn": "2025-01-30T15:53:30.765000+00:00",
    "LastModifiedOn": "2025-01-30T15:53:30.765000+00:00",
    "ExecutionProperty": {
      "MaxConcurrentRuns": 1
    },
    "Command": {
      "Name": "glueetl",
      "ScriptLocation": "s3://s3-keyspaces/scripts/cfn-setup-cfn-glue-export.scala",
      "PythonVersion": "3"
    },
    "DefaultArguments": {
      "--write-shuffle-spills-to-s3": "true",
      "--S3_URI": "s3://s3-keyspaces",
      "--TempDir": "s3://s3-keyspaces/shuffle-space/export-sample/",
      "--extra-jars": "s3://s3-keyspaces/jars/spark-cassandra-connector-assembly_2.12-3.1.0.jar,s3://s3-keyspaces/jars/aws-sigv4-auth-cassandra-java-driver-plugin-4.0.9-shaded.jar,s3://s3-keyspaces/jars/spark-extension_2.12-2.8.0-3.4.jar,s3://s3-keyspaces/jars/amazon-keyspaces-helpers-1.0-SNAPSHOT.jar",
      "--class": "GlueApp",
      "--user-jars-first": "true",
      "--enable-metrics": "true",
      "--enable-spark-ui": "true",
      "--KEYSPACE_NAME": "catalog",
      "--spark-event-logs-path": "s3://s3-keyspaces/spark-logs/",
      "--enable-continuous-cloudwatch-log": "true",
      "--write-shuffle-files-to-s3": "true",
      "--FORMAT": "parquet",
      "--TABLE_NAME": "book_awards",
      "--job-language": "scala",
      "--extra-files": "s3://s3-keyspaces/conf/keyspaces-application.conf",
      "--DRIVER_CONF": "keyspaces-application.conf"
    },
    "MaxRetries": 0,
    "AllocatedCapacity": 4,
    "Timeout": 2880,
    "MaxCapacity": 4.0,
    "WorkerType": "G.2X",
    "NumberOfWorkers": 2,
    "GlueVersion": "3.0"
  }
}

If the AWS CloudFormation stack process fails, you can review the errors for the failed stack in the AWS CloudFormation console. You can review the details of the export job in the AWS Glue console by choosing ETL jobs on the left-side menu.

After you have confirmed the details of the AWS Glue export job, proceed to the section called “Step 3: Run the export AWS Glue job from the AWS CLI” to run the job to export the data from your Amazon Keyspaces table.

Step 3: Run the AWS Glue job to export the Amazon Keyspaces table to the Amazon S3 bucket from the AWS CLI

In this step, you use the AWS CLI to run the AWS Glue job created in the previous step to export an Amazon Keyspaces table to your bucket in Amazon S3.

Run the export job from the AWS CLI

1. In the following example, the AWS CLI command runs the job created in the previous step.

aws glue start-job-run --job-name AmazonKeyspacesExportToS3-cfn-setup-cfn-glue

• You can override any of the AWS Glue job parameters including the default arguments in the AWS CLI command. To override any default arguments of the job, for example keyspace or table name, you can pass them as arguments. For a full list of arguments, see start-job-run in the AWS Glue Command Line Reference.
The following command runs the AWS Glue export job, but overrides the number of AWS Glue workers, the worker type, and the table name.

aws glue start-job-run --job-name AmazonKeyspacesExportToS3-cfn-setup-cfn-glue \
 --number-of-workers 8 --worker-type G.2X \
 --arguments '{"--TABLE_NAME":"my_table"}'

2. Confirm that your table has been exported to your Amazon S3 bucket. Based on the size of the table, this can take some time. When the export job is finished, you can see the following folders in the bucket using the example command.

aws s3 ls s3://s3-keyspaces

The output shows the following structure in your bucket.

PRE conf/
PRE export/
PRE jars/
PRE scripts/
PRE spark-logs/

Your files will be located in the following folder structure under export; the date/time values will show your own values.

\------- export
 \----- keyspace_name
  \----- table_name
   \----- snapshot
    \----- year=2025
     \----- month=01
      \----- day=02
       \----- hour=09
        \----- minute=22
         \--- YOUR DATA HERE

To schedule the AWS Glue job that you just ran manually, proceed to the section called “Step 4: (Optional) Schedule the export job”.

Step 4: (Optional) Create a trigger to schedule the export job

To run the export job created in the previous step on a regular basis, you can create a scheduled trigger. For more information, see AWS Glue triggers in the AWS Glue Developer Guide.

Schedule an AWS Glue job

1. The following AWS CLI command is an example of a simple trigger with the name KeyspacesExportWeeklyTrigger that runs the AWS Glue job with the name AmazonKeyspacesExportToS3-cfn-setup-cfn-glue once per week on Monday at 12:00 UTC.

aws glue create-trigger \
 --name KeyspacesExportWeeklyTrigger \
 --type SCHEDULED \
 --schedule "cron(0 12 ? * MON *)" \
 --start-on-creation \
 --actions '[{ "JobName": "AmazonKeyspacesExportToS3-cfn-setup-cfn-glue" }]'

• To override any of the default settings of the scheduled job, you can pass them as arguments. In this example, we override the keyspace name, the table name, the number of workers, and the worker type by passing them in the Arguments map. Note that the argument keys must match the job's default arguments, so the keyspace and table overrides use the uppercase keys --KEYSPACE_NAME and --TABLE_NAME. The following command is an example of this.

aws glue create-trigger \
 --name KeyspacesExportWeeklyTrigger \
 --type SCHEDULED \
 --schedule "cron(0 12 ? * MON *)" \
 --start-on-creation \
 --actions '[{ "JobName": "AmazonKeyspacesExportToS3-cfn-setup-cfn-glue", "Arguments": { "--number-of-workers": "8", "--worker-type": "G.2X", "--KEYSPACE_NAME": "my_keyspace", "--TABLE_NAME": "my_table" } }]'
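Trigger names must be unique, so if you already created the simple trigger from the first example, the second call fails until you remove the existing trigger. You can delete it with the following command and then re-create it with the overrides.

aws glue delete-trigger --name KeyspacesExportWeeklyTrigger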
2. To confirm that the trigger has been created, use the following command.

aws glue list-triggers

The output of the command should look similar to this.

{
  "TriggerNames": [
    "KeyspacesExportWeeklyTrigger"
  ]
}

To clean up the AWS resources created in this tutorial, proceed to the section called “Step 5: (Optional) Cleanup”.

Step 5: (Optional) Cleanup

Follow these steps to remove all the AWS resources created in this tutorial.

Remove the resources created in this tutorial

1. Delete the second AWS CloudFormation stack created in this tutorial. This removes the AWS Glue job and trigger created in this tutorial. You can use the following command to do this.

aws cloudformation delete-stack --stack-name cfn-glue

2. Delete the data stored in the Amazon S3 bucket so that the bucket can be removed. You can use the following command to do so.

aws s3 rm s3://s3-keyspaces --recursive

3. Delete the first stack created in this tutorial. This deletes the IAM role and associated permissions created in this tutorial. You can use the following command as an example.

aws cloudformation delete-stack --stack-name cfn-setup

4. Delete the Amazon Keyspaces keyspace and table. Deleting the keyspace automatically deletes all tables in that keyspace. For this tutorial, that is the catalog keyspace with the book_awards table. You can use one of the following options to do so.

AWS CLI

aws keyspaces delete-keyspace --keyspace-name 'catalog'

To confirm that the keyspace was deleted, you can use the following command.

aws keyspaces list-keyspaces

To delete the table first, you can use the following command.

aws keyspaces delete-table --keyspace-name 'catalog' --table-name 'book_awards'

To confirm that your table was deleted, you can use the following command.

aws keyspaces list-tables --keyspace-name 'catalog'

For more information, see delete keyspace and delete table in the AWS CLI Command Reference.

cqlsh

DROP KEYSPACE IF EXISTS "catalog";

To verify that your keyspace was deleted, you can use the following statement.

SELECT * FROM system_schema.keyspaces;

Your keyspace should not be listed in the output of this statement. Note that there can be a delay until the keyspace is deleted. For more information, see the section called “DROP KEYSPACE”.

To delete the table first, you can use the following command.

DROP TABLE "catalog"."book_awards";

To confirm that your table was deleted, you can use the following command.

SELECT * FROM system_schema.tables WHERE keyspace_name = 'catalog';

Your table should not be listed in the output of this statement. Note that there can be a delay until the table is deleted. For more information, see the section called “DROP TABLE”.
Managing serverless resources in Amazon Keyspaces (for Apache Cassandra)

Amazon Keyspaces (for Apache Cassandra) is serverless. Instead of deploying, managing, and maintaining storage and compute resources for your workload through nodes in a cluster, Amazon Keyspaces allocates storage and read/write throughput resources directly to tables.

Amazon Keyspaces provisions storage automatically based on the data stored in your tables. It scales storage up and down as you write, update, and delete data, and you pay only for the storage you use. Data is replicated across multiple Availability Zones for high availability. Amazon Keyspaces monitors the size of your tables continuously to determine your storage charges. For more information about how Amazon Keyspaces calculates the billable size of the data, see the section called “Estimate row size”.

This chapter covers key aspects of resource management in Amazon Keyspaces.

• Estimate row size – To estimate the encoded size of rows in Amazon Keyspaces, consider factors like partition key metadata, clustering column metadata, column identifiers, data types, and row metadata. This encoded row size is used for billing, quota management, and provisioned throughput capacity planning.
• Estimate capacity consumption – This section covers examples of how to estimate read and write capacity consumption for common scenarios like range queries, limit queries, table scans, lightweight transactions, static columns, and multi-Region tables. You can use Amazon CloudWatch to monitor actual capacity utilization. For more information about monitoring with CloudWatch, see the section called “Monitoring with CloudWatch”.
• Configure read/write capacity modes – You can choose between two capacity modes for processing reads and writes
on your tables:

• On-demand mode (default) – Pay per request for read and write throughput. Amazon Keyspaces can instantly scale capacity up to any previously reached traffic level.
• Provisioned mode – Specify the required number of read and write capacity units in advance. This mode helps maintain predictable throughput performance.

• Manage throughput capacity with automatic scaling – For provisioned tables, you can enable automatic scaling to adjust throughput capacity automatically based on actual application traffic. Amazon Keyspaces uses target tracking to increase or decrease provisioned capacity, keeping utilization at your specified target.
• Use burst capacity effectively – Amazon Keyspaces provides burst capacity by reserving a portion of unused throughput for handling spikes in traffic. This flexibility allows occasional bursts of activity beyond your provisioned throughput.

To troubleshoot capacity errors, see the section called “Serverless capacity errors”.

Topics
• Estimate row size in Amazon Keyspaces
• Estimate capacity consumption of read and write throughput in Amazon Keyspaces
• Configure read/write capacity modes in Amazon Keyspaces
• Manage throughput capacity automatically with Amazon Keyspaces auto scaling
• Use burst capacity effectively in Amazon Keyspaces

Estimate row size in Amazon Keyspaces

Amazon Keyspaces provides fully managed storage that offers single-digit millisecond read and write performance and stores data durably across multiple AWS Availability Zones. Amazon Keyspaces attaches metadata to all rows and primary key columns to support efficient data access and high availability.

This topic provides details about how to estimate the encoded size of rows in Amazon Keyspaces. The encoded row size is used when calculating your bill and quota use. You can also use the encoded row size when estimating provisioned throughput capacity requirements for tables. To calculate the encoded size of rows in Amazon Keyspaces, you can use the following guidelines.

Topics
• Estimate the encoded size of columns
• Estimate the encoded size of data values based on data type
• Consider the impact of Amazon Keyspaces features on row size
• Choose the right formula to calculate the encoded size of a row
• Row size calculation example

Estimate the encoded size of columns

This section shows how to estimate the encoded size of columns in Amazon Keyspaces.

• Regular columns – For regular columns, which are columns that aren't primary keys, clustering columns, or STATIC columns, use the raw size of the cell data based on the data type and add the required metadata.
The data types and some key differences in how Amazon Keyspaces stores data type values and metadata are listed in the next section. • Partition key columns – Partition keys can contain up to 2048 bytes of data. Each key column in the partition key requires up to 3 bytes of metadata. When calculating the size of your row, you should assume each partition key column uses the full 3 bytes of metadata. • Clustering columns – Clustering columns can store up to 850 bytes of data. In addition to the size of the data value, each clustering column requires up to 20% of the data value size for metadata. When calculating the size of your row, you should add 1 byte of metadata for each 5 bytes of clustering column data value. Note To support efficient querying and built-in indexing, Amazon Keyspaces stores the data value of each partition key and clustering key column twice. • Column names – The space required for each column name is stored using a column identifier and added to each data value stored in the column. The storage value of the column identifier depends on the overall number of columns in your table: • 1–62 columns: 1 byte • 63–124 columns: 2 bytes • 125–186 columns: 3 bytes For each additional 62 columns add 1 byte. Note that in Amazon Keyspaces, up to 225 regular columns can be modified with a single INSERT or UPDATE statement. For more information, see the section called “Amazon Keyspaces service quotas”. Estimate the encoded size of data values based on data type This section shows how to estimate the encoded size of different data types in Amazon |
Keyspaces.

• String types – Cassandra ASCII, TEXT, and VARCHAR string data types are all stored in Amazon Keyspaces using Unicode with UTF-8 binary encoding. The size of a string in Amazon Keyspaces equals the number of UTF-8 encoded bytes.
• Numeric types – Cassandra INT, BIGINT, SMALLINT, and TINYINT data types are stored in Amazon Keyspaces as data values with variable length, with up to 38 significant digits. Leading and trailing zeroes are trimmed. The size of any of these data types is approximately 1 byte per two significant digits + 1 byte.
• Blob type – A BLOB in Amazon Keyspaces is stored with the value's raw byte length.
• Boolean type – The size of a Boolean value or a Null value is 1 byte.
• Collection types – A column that stores collection data types like LIST or MAP requires 3 bytes of metadata, regardless of its contents. The size of a LIST or MAP is (column id) + sum (size of nested elements) + (3 bytes). The size of an empty LIST or MAP is (column id) + (3 bytes). Each individual LIST or MAP element also requires 1 byte of metadata.
• User-defined types – A user-defined type (UDT) requires 3 bytes for metadata, regardless of its contents. For each UDT element, Amazon Keyspaces requires an additional 1 byte of metadata. To calculate the encoded size of a UDT, start with the field name and the field value for the fields of a UDT:

• field name – Each field name of the top-level UDT is stored using an identifier. The storage value of the identifier depends on the overall number of fields in your top-level UDT, and can vary between 1 and 3 bytes:
• 1–62 fields: 1 byte
• 63–124 fields: 2 bytes
• 125–max fields: 3 bytes

• field value – The bytes required to store the field values of the top-level UDT depend on the data type stored:
• Scalar data type – The bytes required for storage are the same as for the same data type stored in a regular column.
• Frozen UDT – For each frozen nested UDT, the nested UDT has the same size as it would have in the CQL binary protocol. For a nested UDT, 4 bytes are stored for each field (including empty fields) and the value of the stored field is the CQL binary protocol serialization format of the field value.
• Frozen collections:
• LIST and SET – For a nested frozen LIST or SET, 4 bytes are stored for each element of the collection plus the CQL binary protocol serialization format of the collection's value.
• MAP – For a nested frozen MAP, each key-value pair has the following storage requirements:
• For each key allocate 4 bytes, then add the CQL binary protocol serialization format of the key.
• For each value allocate 4 bytes, then add the CQL binary protocol serialization format of the value.
• FROZEN keyword – For frozen collections nested within frozen collections, Amazon Keyspaces doesn't require any additional bytes for metadata.
• STATIC keyword – STATIC column data doesn't count towards the maximum row size of 1 MB. To calculate the data size of static columns, see the section called “Calculate static column size per logical partition”.

Consider the impact of Amazon Keyspaces features on row size

This section shows how features in Amazon Keyspaces impact the encoded size of a row.

• Client-side timestamps – Client-side timestamps are stored for every column in each row when the feature is turned on. These timestamps take up approximately 20–40 bytes (depending on your data), and contribute to the storage and throughput cost for the row. For more information about client-side timestamps, see the section called “Client-side timestamps”.
• Time to Live (TTL) – TTL metadata takes up approximately 8 bytes for a row when the feature is turned on. Additionally, TTL metadata is stored for every column of each row.
The TTL metadata takes up approximately 8 bytes for each column storing a scalar data type or a frozen collection. If the column stores a collection data type that's not frozen, TTL requires approximately 8 additional bytes of metadata for each element of the collection. For a column that stores a collection data type when TTL is enabled, you can use the following formula.

total encoded size of column = (column id) + sum (nested elements + collection metadata (1 byte) + TTL metadata (8 bytes)) + collection column metadata (3 bytes)

TTL metadata contributes to the storage and throughput cost for the row. For more information about TTL, see the section called “Expire data with Time to Live”.

Choose the right formula to calculate the encoded size of a row

This section shows the different formulas that you can use to estimate either the storage or the capacity throughput requirements for a row of data in Amazon Keyspaces. The total encoded size of a row of data can be estimated with one of the following formulas, depending on your goal:

• Throughput capacity – To estimate the encoded size of a row to assess the required read/write request units (RRUs/WRUs) or read/write capacity units (RCUs/WCUs):

total encoded size of row = partition key columns + clustering columns + regular columns

• Storage size – To estimate the encoded size of a row to predict the BillableTableSizeInBytes, add the required metadata for the storage of the row:

total encoded size of row = partition key columns + clustering columns + regular columns + row metadata (100 bytes)

Important
All column metadata, for example column ids, partition key metadata, and clustering column metadata, as well as client-side timestamps, TTL, and row metadata, count towards the maximum row size of 1 MB.

Row size calculation example

Consider the following example of a table where all columns are of type integer. The table has two partition key columns, two clustering columns, and one regular column. Because this table has five columns, the space required for the column name identifier is 1 byte.

CREATE TABLE mykeyspace.mytable(pk_col1 int, pk_col2 int, ck_col1 int, ck_col2 int, reg_col1 int, primary key((pk_col1, pk_col2),ck_col1, ck_col2));

In this example, we calculate the size of data when we write a row to the table as shown in the following statement:

INSERT INTO mykeyspace.mytable (pk_col1, pk_col2, ck_col1, ck_col2, reg_col1) values(1,2,3,4,5);

To estimate the total bytes required by this write operation, you can use the following steps.

1.
Choose the right formula to calculate the encoded size of a row

This section shows the different formulas that you can use to estimate either the storage or the capacity throughput requirements for a row of data in Amazon Keyspaces. The total encoded size of a row of data can be estimated with one of the following formulas, depending on your goal:

• Throughput capacity – To estimate the encoded size of a row to assess the required read/write request units (RRUs/WRUs) or read/write capacity units (RCUs/WCUs):

total encoded size of row = partition key columns + clustering columns + regular columns

• Storage size – To estimate the encoded size of a row to predict the BillableTableSizeInBytes, add the required metadata for the storage of the row:

total encoded size of row = partition key columns + clustering columns + regular columns + row metadata (100 bytes)

Important
All column metadata, for example column ids, partition key metadata, and clustering column metadata, as well as client-side timestamps, TTL, and row metadata count towards the maximum row size of 1 MB.

Row size calculation example

Consider the following example of a table where all columns are of type integer. The table has two partition key columns, two clustering columns, and one regular column. Because this table has five columns, the space required for the column name identifier is 1 byte.

CREATE TABLE mykeyspace.mytable(pk_col1 int, pk_col2 int, ck_col1 int, ck_col2 int, reg_col1 int, primary key((pk_col1, pk_col2),ck_col1, ck_col2));

In this example, we calculate the size of data when we write a row to the table as shown in the following statement:

INSERT INTO mykeyspace.mytable (pk_col1, pk_col2, ck_col1, ck_col2, reg_col1) values(1,2,3,4,5);

To estimate the total bytes required by this write operation, you can use the following steps.

1. Calculate the size of a partition key column by adding the bytes for the data type stored in the column and the metadata bytes. Repeat this for all partition key columns.
a. Calculate the size of the first column of the partition key (pk_col1):
(2 bytes for the integer data type) x 2 + 1 byte for the column id + 3 bytes for partition key metadata = 8 bytes
b. Calculate the size of the second column of the partition key (pk_col2):
(2 bytes for the integer data type) x 2 + 1 byte for the column id + 3 bytes for partition key metadata = 8 bytes
c. Add both columns to get the total estimated size of the partition key columns:
8 bytes + 8 bytes = 16 bytes for the partition key columns
2. Calculate the size of the clustering columns by adding the bytes for the data type stored in the column and the metadata bytes. Repeat this for all clustering columns.
a. Calculate the size of the first clustering column (ck_col1):
(2 bytes for the integer data type) x 2 + 20% of the data value (1 byte) for clustering column metadata + 1 byte for the column id = 6 bytes
b. Calculate the size of the second clustering column (ck_col2):
(2 bytes for the integer data type) x 2 + 20% of the data value (1 byte) for clustering column metadata + 1 byte for the column id = 6 bytes
c. Add both columns to get the total estimated size of the clustering columns:
6 bytes + 6 bytes = 12 bytes for the clustering columns
3. Add the size of the regular columns. In this example we only have one column that stores a single digit integer, which requires 2 bytes with 1 byte for the column id.
4. Finally, to get the total encoded row size, add up the bytes for all columns. To estimate the billable size for storage, add the additional 100 bytes for row metadata:
16 bytes for the partition key columns + 12 bytes for clustering columns + 3 bytes for the regular column + 100 bytes for row metadata = 131 bytes.

To learn how to monitor serverless resources with Amazon CloudWatch, see the section called “Monitoring with CloudWatch”.
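The following Python sketch reproduces this worked example. The per-column subtotals mirror the steps above for this five-column table of integers; for your own tables, substitute the sizes you derive from your schema and data.

# Sketch: the row size example above as Python
PK_COLS = [8, 8]    # each: integer data + column id + 3 bytes partition key metadata
CK_COLS = [6, 6]    # each: integer data + ~20% clustering column metadata + column id
REG_COLS = [3]      # one single digit integer (2 bytes) + column id
ROW_METADATA = 100  # storage size only; not part of the throughput formula

throughput_size = sum(PK_COLS) + sum(CK_COLS) + sum(REG_COLS)  # 31 bytes
storage_size = throughput_size + ROW_METADATA                  # 131 bytes
print(throughput_size, storage_size)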
Estimate capacity consumption of read and write throughput in Amazon Keyspaces

When you read or write data in Amazon Keyspaces, the amount of read/write request units (RRUs/WRUs) or read/write capacity units (RCUs/WCUs) your query consumes depends on the total amount of data Amazon Keyspaces has to process to run the query. In some cases, the data returned to the client can be a subset of the data that Amazon Keyspaces had to read to process the query. For conditional writes, Amazon Keyspaces consumes write capacity even if the conditional check fails.

To estimate the total amount of data being processed for a request, you have to consider the encoded size of a row and the total number of rows. This topic covers some examples of common scenarios and access patterns to show how Amazon Keyspaces processes queries and how that affects capacity consumption. You can follow the examples to estimate the capacity requirements of your tables and use Amazon CloudWatch to observe the read and write capacity consumption for these use cases. For information on how to calculate the encoded size of rows in Amazon Keyspaces, see the section called “Estimate row size”.

Topics
• Estimate the capacity consumption of range queries in Amazon Keyspaces
• Estimate the read capacity consumption of limit queries
• Estimate the read capacity consumption of table scans
• Estimate capacity consumption of lightweight transactions in Amazon Keyspaces
• Estimate capacity consumption for static columns in Amazon Keyspaces
• Estimate and provision capacity for a multi-Region table in Amazon Keyspaces
• Estimate read and write capacity consumption with Amazon CloudWatch in Amazon Keyspaces

Estimate the capacity consumption of range queries in Amazon Keyspaces

To look at the read capacity consumption of a range query, we use the following example table, which uses on-demand capacity mode.

pk1 | pk2 | pk3 | ck1 | ck2 | ck3 | value
-----+-----+-----+-----+-----+-----+-------
  a |   b |   1 |   a |   b |  50 | <any value that results in a row size larger than 4KB>
  a |   b |   1 |   a |   b |  60 | value_1
  a |   b |   1 |   a |   b |  70 | <any value that results in a row size larger than 4KB>

Now run the following query on this table.

SELECT * FROM amazon_keyspaces.example_table_1 WHERE pk1='a' AND pk2='b' AND pk3=1 AND ck1='a' AND ck2='b' AND ck3 > 50 AND ck3 < 70;

You receive the following result set from the query, and the read operation performed by Amazon Keyspaces consumes 2 RRUs in LOCAL_QUORUM consistency mode.

pk1 | pk2 | pk3 | ck1 | ck2 | ck3 | value
-----+-----+-----+-----+-----+-----+-------
  a |   b |   1 |   a |   b |  60 | value_1

Amazon Keyspaces consumes 2 RRUs to evaluate the rows with the values ck3=60 and ck3=70 to process the query. However, Amazon Keyspaces only returns the row where the WHERE condition specified in the query is true, which is the row with the value ck3=60. To evaluate the range specified in the query, Amazon Keyspaces reads the row matching the upper bound of the range, in this case ck3 = 70, but doesn't return that row in the result. The read capacity consumption is based on the data read when processing the query, not on the data returned.

Estimate the read capacity consumption of limit queries

When processing a query that uses the LIMIT clause, Amazon Keyspaces reads rows up to the maximum page size when trying to match the condition specified in the query. If Amazon Keyspaces can't find sufficient matching data that meets the LIMIT value on the first page, one or more paginated calls could be needed. To continue reads on the next page, you can use a pagination token. The default page size is 1 MB. To consume less read capacity when using LIMIT clauses, you can reduce the page size. For more information about pagination, see the section called “Paginate results”.

For an example, let's look at the following query.

SELECT * FROM my_table WHERE partition_key=1234 LIMIT 1;

If you don't set the page size, Amazon Keyspaces reads 1 MB of data even though it returns only one row to you. To have Amazon Keyspaces read only one row, you can set the page size to 1 for this query. In this case, Amazon Keyspaces reads only one row, provided you don't have expired rows based on Time to Live settings or client-side timestamps. The PAGE SIZE parameter determines how many rows Amazon Keyspaces scans from disk for each request, not how many rows Amazon Keyspaces returns to the client. Amazon Keyspaces applies the filters you provide, for example an inequality on non-key columns or a LIMIT, after it scans the data on disk. If you don't explicitly set the PAGE SIZE, Amazon Keyspaces reads up to 1 MB of data before applying filters. For example, if you're using LIMIT 1 without specifying the PAGE SIZE, Amazon Keyspaces could read thousands of rows from disk before applying the limit clause and returning only a single row. To avoid over-reading, reduce the PAGE SIZE, which reduces the number of rows Amazon Keyspaces scans for each fetch. For example, if you define LIMIT 5 in your query, set the PAGE SIZE to a value between 5 and 10 so that Amazon Keyspaces only scans 5 to 10 rows on each paginated call. You can modify this number to reduce the number of fetches. For limits that are larger than the page size, Amazon Keyspaces maintains the total result count with pagination state. In the case of a LIMIT of 10,000 rows, Amazon Keyspaces can fetch these results in two pages of 5,000 rows each. The 1 MB limit is the upper bound for any page size set.

Estimate the read capacity consumption of table scans

Queries that result in full table scans, for example queries using the ALLOW FILTERING option, are another example of queries that process more reads than what they return as results. The read capacity consumption is based on the data read, not the data returned. For the table scan example, we use the following example table in on-demand capacity mode.
pk | ck | value
----+----+---------
 pk | 10 | <any value that results in a row size larger than 4KB>
 pk | 20 | value_1
 pk | 30 | <any value that results in a row size larger than 4KB>

Amazon Keyspaces creates a table in on-demand capacity mode with four partitions by default. In this example table, all the data is stored in one partition and the remaining three partitions are empty.

Now run the following query on the table.

SELECT * from amazon_keyspaces.example_table_2;

This query results in a table scan operation where Amazon Keyspaces scans all four partitions of the table and consumes 6 RRUs in LOCAL_QUORUM consistency mode. First, Amazon Keyspaces consumes 3 RRUs for reading the three rows with pk='pk'. Then, Amazon Keyspaces consumes the additional 3 RRUs for scanning the three empty partitions of the table. Because this query results in a table scan, Amazon Keyspaces scans all the partitions in the table, including partitions without data.
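As a rough check of the arithmetic above, the following Python sketch mirrors this simplified accounting: each row read and each empty partition scanned costs one LOCAL_QUORUM RRU here. It's illustrative only; rows larger than 4 KB consume additional RRUs (one per 4 KB at LOCAL_QUORUM), and LOCAL_ONE reads cost half.

# Sketch: simplified RRU accounting for the table scan example above
def scan_rrus(rows_read, empty_partitions, local_one=False):
    # One 4 KB read unit per row and per empty partition in this example
    rrus = rows_read + empty_partitions
    return rrus / 2 if local_one else rrus

print(scan_rrus(rows_read=3, empty_partitions=3))  # 6 RRUs, as in the example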
Estimate capacity consumption of lightweight transactions in Amazon Keyspaces

Lightweight transactions (LWT) allow you to perform conditional write operations against your table data. Conditional update operations are useful when inserting, updating, and deleting records based on conditions that evaluate the current state.

In Amazon Keyspaces, all write operations require LOCAL_QUORUM consistency, and there is no additional charge for using LWTs. The difference for LWTs is that when an LWT condition check results in FALSE, Amazon Keyspaces consumes write capacity units (WCUs) or write request units (WRUs). The number of WCUs/WRUs consumed depends on the size of the row. For example, if the row size is 2 KB, the failed conditional write consumes two WCUs/WRUs. If the row doesn't currently exist in the table, the operation consumes one WCU/WRU. To determine the number of requests that resulted in condition check failures, you can monitor the ConditionalCheckFailed metric in CloudWatch.

Estimate LWT costs for tables with Time to Live (TTL)

LWTs can require additional read capacity units (RCUs) or read request units (RRUs) for tables configured with TTL that don't use client-side timestamps. When a condition check using the IF EXISTS or IF NOT EXISTS keywords results in FALSE, the following capacity units are consumed:

• RCUs/RRUs – If the row exists, the RCUs/RRUs consumed are based on the size of the existing row.
• RCUs/RRUs – If the row doesn't exist, a single RCU/RRU is consumed.

If the evaluated condition results in a successful write operation, WCUs/WRUs are consumed based on the size of the new row.
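A short Python sketch of the failed-condition accounting described above; the function name is illustrative. Row sizes are in KB, and a missing row costs a single write unit.

import math

def failed_lwt_write_units(existing_row_kb=None):
    # 1 WCU/WRU per 1 KB of the existing row; 1 if the row doesn't exist
    if existing_row_kb is None:
        return 1
    return math.ceil(existing_row_kb)

print(failed_lwt_write_units(2))  # 2 WCUs/WRUs for a 2 KB row
print(failed_lwt_write_units())   # 1 WCU/WRU when the row doesn't exist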
Estimate capacity consumption for static columns in Amazon Keyspaces

In an Amazon Keyspaces table with clustering columns, you can use the STATIC keyword to create a static column. The value stored in a static column is shared between all rows in a logical partition. When you update the value of this column, Amazon Keyspaces applies the change automatically to all rows in the partition.

This section describes how to calculate the encoded size of data when you're writing to static columns. This process is handled separately from the process that writes data to the nonstatic columns of a row. In addition to size quotas for static data, read and write operations on static columns also affect metering and throughput capacity for tables independently. For functional differences with Apache Cassandra when using static columns and paginated range read results, see the section called “Pagination”.

Topics
• Calculate the static column size per logical partition in Amazon Keyspaces
• Estimate capacity throughput requirements for read/write operations on static data in Amazon Keyspaces

Calculate the static column size per logical partition in Amazon Keyspaces

This section provides details about how to estimate the encoded size of static columns in Amazon Keyspaces. The encoded size is used when you're calculating your bill and quota use. You should also use the encoded size when you calculate provisioned throughput capacity requirements for tables. To calculate the encoded size of static columns in Amazon Keyspaces, you can use the following guidelines.

• Partition keys can contain up to 2048 bytes of data. Each key column in the partition key requires up to 3 bytes of metadata. These metadata bytes count towards your static data size quota of 1 MB per partition. When calculating the size of your static data, you should assume that each partition key column uses the full 3 bytes of metadata.
• Use the raw size of the static column data values based on the data type. For more information about data types, see the section called “Data types”.
• Add 104 bytes to the size of the static data for metadata.
• Clustering columns and regular, nonprimary key columns do not count towards the size of static data. To learn how to estimate the size of nonstatic data within rows, see the section called “Estimate row size”.

The total encoded size of a static column is based on the following formula:

partition key columns + static columns + metadata = total encoded size of static data
Consider the following example of a table where all columns are of type integer. The table has two partition key columns, two clustering columns, one regular column, and one static column.

CREATE TABLE mykeyspace.mytable(pk_col1 int, pk_col2 int, ck_col1 int, ck_col2 int, reg_col1 int, static_col1 int static, primary key((pk_col1, pk_col2),ck_col1, ck_col2));

In this example, we calculate the size of static data of the following statement:

INSERT INTO mykeyspace.mytable (pk_col1, pk_col2, static_col1) values(1,2,6);

To estimate the total bytes required by this write operation, you can use the following steps.

1. Calculate the size of a partition key column by adding the bytes for the data type stored in the column and the metadata bytes. Repeat this for all partition key columns.
a. Calculate the size of the first column of the partition key (pk_col1):
4 bytes for the integer data type + 3 bytes for partition key metadata = 7 bytes
b. Calculate the size of the second column of the partition key (pk_col2):
4 bytes for the integer data type + 3 bytes for partition key metadata = 7 bytes
c. Add both columns to get the total estimated size of the partition key columns:
7 bytes + 7 bytes = 14 bytes for the partition key columns
2. Add the size of the static columns. In this example, we only have one static column that stores an integer, which requires 4 bytes.
3. Finally, to get the total encoded size of the static column data, add up the bytes for the primary key columns and static columns, and add the additional 104 bytes for metadata:
14 bytes for the partition key columns + 4 bytes for the static column + 104 bytes for metadata = 122 bytes.

You can also update static and nonstatic data with the same statement. To estimate the total size of the write operation, you must first calculate the size of the nonstatic data update. Then calculate the size of the row update as shown in the example at the section called “Estimate row size”, and add the results. In this case, you can write a total of 2 MB: 1 MB is the maximum row size quota, and 1 MB is the quota for the maximum static data size per logical partition.

To calculate the total size of an update of static and nonstatic data in the same statement, you can use the following formula:

(partition key columns + static columns + metadata = total encoded size of static data) + (partition key columns + clustering columns + regular columns + row metadata = total encoded size of row) = total encoded size of data written

Consider the following example of a table where all columns are of type integer. The table has two partition key columns, two clustering columns, one regular column, and one static column.
CREATE TABLE mykeyspace.mytable(pk_col1 int, pk_col2 int, ck_col1 int, ck_col2 int, reg_col1 int, static_col1 int static, primary key((pk_col1, pk_col2),ck_col1, ck_col2));

In this example, we calculate the size of data when we write a row to the table, as shown in the following statement:

INSERT INTO mykeyspace.mytable (pk_col1, pk_col2, ck_col1, ck_col2, reg_col1, static_col1) values(2,3,4,5,6,7);

To estimate the total bytes required by this write operation, you can use the following steps.

1. Calculate the total encoded size of static data as shown earlier. In this example, it's 122 bytes.
2. Add the total encoded size of the row based on the update of nonstatic data, following the steps at the section called “Estimate row size”. In this example, the total size of the row update is 134 bytes.

122 bytes for static data + 134 bytes for nonstatic data = 256 bytes.
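The two worked examples translate directly to the following Python sketch. The subtotals mirror the steps in this section; treat them as placeholders for your own schema.

# Sketch: static data size (122 bytes) and a mixed static/nonstatic write (256 bytes)
PK_COLS = [7, 7]       # each: 4 bytes integer data + 3 bytes partition key metadata
STATIC_COLS = [4]      # one static integer column
STATIC_METADATA = 104  # fixed metadata added to static data

static_size = sum(PK_COLS) + sum(STATIC_COLS) + STATIC_METADATA  # 122 bytes

NONSTATIC_ROW_SIZE = 134  # from the section called "Estimate row size"
print(static_size, static_size + NONSTATIC_ROW_SIZE)  # 122 256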
Estimate capacity throughput requirements for read/write operations on static data in Amazon Keyspaces

Static data is associated with logical partitions in Cassandra, not individual rows. Logical partitions in Amazon Keyspaces can be virtually unbounded in size by spanning across multiple physical storage partitions. As a result, Amazon Keyspaces meters write operations on static and nonstatic data separately. Furthermore, writes that include both static and nonstatic data require additional underlying operations to provide data consistency.

If you perform a mixed write operation of both static and nonstatic data, this results in two separate write operations: one for nonstatic and one for static data. This applies to both on-demand and provisioned read/write capacity modes.

The following example provides details about how to estimate the required read capacity units (RCUs) and write capacity units (WCUs) when you're calculating provisioned throughput capacity requirements for tables in Amazon Keyspaces that have static columns. You can estimate how much capacity your table needs to process writes that include both static and nonstatic data by using the following formula:

2 x WCUs required for nonstatic data + 2 x WCUs required for static data

For example, if your application writes 27 KB of data per second and each write includes 25.5 KB of nonstatic data and 1.5 KB of static data, then your table requires 56 WCUs (2 x 26 WCUs + 2 x 2 WCUs).
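A minimal sketch of the formula above in Python; throughputs are in KB per second, and one WCU covers 1 KB of writes per second.

import math

def mixed_write_wcus(nonstatic_kb_per_sec, static_kb_per_sec):
    # Each mixed write is metered as two operations, so both parts are doubled
    return 2 * math.ceil(nonstatic_kb_per_sec) + 2 * math.ceil(static_kb_per_sec)

print(mixed_write_wcus(25.5, 1.5))  # 2 x 26 + 2 x 2 = 56 WCUs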
Amazon Keyspaces meters the reads of static and nonstatic data the same as reads of multiple rows. As a result, the price of reading static and nonstatic data in the same operation is based on the aggregate size of the data processed to perform the read.

To learn how to monitor serverless resources with Amazon CloudWatch, see the section called “Monitoring with CloudWatch”.

Estimate and provision capacity for a multi-Region table in Amazon Keyspaces

You can configure the throughput capacity of a multi-Region table in one of two ways:

• On-demand capacity mode, measured in write request units (WRUs)
• Provisioned capacity mode with auto scaling, measured in write capacity units (WCUs)

You can use provisioned capacity mode with auto scaling or on-demand capacity mode to help ensure that a multi-Region table has sufficient capacity to perform replicated writes to all AWS Regions.

Note
Changing the capacity mode of the table in one of the Regions changes the capacity mode for all replicas.

By default, Amazon Keyspaces uses on-demand mode for multi-Region tables. With on-demand mode, you don't need to specify how much read and write throughput you expect your application to perform. Amazon Keyspaces instantly accommodates your workloads as they ramp up or down to any previously reached traffic level. If a workload's traffic level hits a new peak, Amazon Keyspaces adapts rapidly to accommodate the workload.

If you choose provisioned capacity mode for a table, you have to configure the number of read capacity units (RCUs) and write capacity units (WCUs) per second that your application requires.

To plan a multi-Region table's throughput capacity needs, you should first estimate the number of WCUs per second needed for each Region. Then you add the writes from all Regions that your table is replicated in, and use the sum to provision capacity for each Region. This is required because every write that is performed in one Region must also be repeated in each replica Region. If the table doesn't have enough capacity to handle the writes from all Regions, capacity exceptions will occur. In addition, inter-Regional replication wait times will rise.

For example, if you have a multi-Region table where you expect 5 writes per second in US East (N. Virginia), 10 writes per second in US East (Ohio), and 5 writes per second in Europe (Ireland), you should expect the table to consume 20 WCUs in each Region: US East (N. Virginia), US East (Ohio), and Europe (Ireland). That means that in this example, you need to provision 20 WCUs for each of the table's replicas. You can monitor your table's capacity consumption using Amazon CloudWatch. For more information, see the section called “Monitoring with CloudWatch”.

Each write is billed as 1 WCU, so you would see a total of 60 WCUs billed in this example. For more information about pricing, see Amazon Keyspaces (for Apache Cassandra) pricing. For more information about provisioned capacity with Amazon Keyspaces auto scaling, see the section called “Manage throughput capacity with auto scaling”.

Note
If a table is running in provisioned capacity mode with auto scaling, the provisioned write capacity is allowed to float within those auto scaling settings for each Region.
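The planning rule above reduces to a simple sum, shown in the following Python sketch. The Region names and write rates are the illustrative values from this example.

# Sketch: per-Region write capacity for a multi-Region table. Every write is
# replicated to each Region, so each replica must absorb the global sum.
writes_per_second = {
    "us-east-1": 5,   # US East (N. Virginia)
    "us-east-2": 10,  # US East (Ohio)
    "eu-west-1": 5,   # Europe (Ireland)
}

per_region_wcus = sum(writes_per_second.values())             # 20 WCUs per replica
total_billed_wcus = per_region_wcus * len(writes_per_second)  # 60 WCUs overall
print(per_region_wcus, total_billed_wcus)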
Estimate read and write capacity consumption with Amazon CloudWatch in Amazon Keyspaces

To estimate and monitor read and write capacity consumption, you can use a CloudWatch dashboard. For more information about available metrics for Amazon Keyspaces, see the section called “Metrics and dimensions”. To monitor the read and write capacity units consumed by a specific statement with CloudWatch, you can follow these steps.

1. Create a new table with sample data.
2. Configure an Amazon Keyspaces CloudWatch dashboard for the table. To get started, you can use a dashboard template available on GitHub.
3. Run the CQL statement, for example using the ALLOW FILTERING option, and check the read capacity units consumed for the full table scan in the dashboard.

Configure read/write capacity modes in Amazon Keyspaces

Amazon Keyspaces has two read/write capacity modes for processing reads and writes on your tables:

• On-demand (default)
• Provisioned

The read/write capacity mode that you choose controls how you are charged for read and write throughput and how table throughput capacity is managed.

Topics
• Configure on-demand capacity mode
• Configure provisioned throughput capacity mode
• View the capacity mode of a table in Amazon Keyspaces
• Change capacity mode
• Pre-warm a new table for on-demand capacity mode in Amazon Keyspaces
• Pre-warm an existing table for on-demand capacity mode in Amazon Keyspaces

Configure on-demand capacity mode

Amazon Keyspaces (for Apache Cassandra) on-demand capacity mode is a flexible billing option capable of serving thousands of requests per second without capacity planning. This option offers pay-per-request pricing for read and write requests so that you pay only for what you use.

When you choose on-demand mode, Amazon Keyspaces can scale the throughput capacity for your table up to any previously reached traffic level instantly, and then back down when application traffic decreases. If a workload's traffic level hits a new peak, the service adapts rapidly to increase throughput capacity for your table. You can enable on-demand capacity mode for both new and existing tables. On-demand mode is a good option if any of the following is true:

• You create new tables with unknown workloads.
• You have unpredictable application traffic.
• You prefer the ease of paying for only what you use.

To get started with on-demand mode, you can create a new table or update an existing table to use on-demand capacity mode using the console or with a few lines of Cassandra Query Language (CQL) code. For more information, see the section called “Tables”.
Topics
• Read request units and write request units
• Peak traffic and scaling properties
• Initial throughput for on-demand capacity mode

Read request units and write request units

With on-demand capacity mode tables, you don't need to specify in advance how much read and write throughput you expect your application to use. Amazon Keyspaces charges you for the reads and writes that you perform on your tables in terms of read request units (RRUs) and write request units (WRUs).

• One RRU represents one LOCAL_QUORUM read request, or two LOCAL_ONE read requests, for a row up to 4 KB in size. If you need to read a row that is larger than 4 KB, the read operation uses additional RRUs. The total number of RRUs required depends on the row size, and whether you want to use LOCAL_QUORUM or LOCAL_ONE read consistency. For example, reading an 8 KB row requires 2 RRUs using LOCAL_QUORUM read consistency, and 1 RRU if you choose LOCAL_ONE read consistency.
• One WRU represents one write for a row up to 1 KB in size. All writes use LOCAL_QUORUM consistency, and there is no additional charge for using lightweight transactions (LWTs). If you need to write a row that is larger than 1 KB, the write operation uses additional WRUs. The total number of WRUs required depends on the row size. For example, if your row size is 2 KB, you require 2 WRUs to perform one write request.

For information about supported consistency levels, see the section called “Supported Cassandra consistency levels”.
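The per-request definitions above can be sketched in Python as follows; row sizes are in KB, and the function names are illustrative.

import math

def read_request_units(row_kb, consistency="LOCAL_QUORUM"):
    units = math.ceil(row_kb / 4)  # one RRU per 4 KB at LOCAL_QUORUM
    return units / 2 if consistency == "LOCAL_ONE" else units

def write_request_units(row_kb):
    return math.ceil(row_kb)       # one WRU per 1 KB, always LOCAL_QUORUM

print(read_request_units(8))               # 2 RRUs
print(read_request_units(8, "LOCAL_ONE"))  # 1 RRU
print(write_request_units(2))              # 2 WRUs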
Peak traffic and scaling properties

Amazon Keyspaces tables that use on-demand capacity mode automatically adapt to your application's traffic volume. On-demand capacity mode instantly accommodates up to double the previous peak traffic on a table. For example, your application's traffic pattern might vary between 5,000 and 10,000 LOCAL_QUORUM reads per second, where 10,000 reads per second is the previous traffic peak. With this pattern, on-demand capacity mode instantly accommodates sustained traffic of up to 20,000 reads per second. If your application sustains traffic of 20,000 reads per second, that peak becomes your new previous peak, enabling subsequent traffic to reach up to 40,000 reads per second.

If you need more than double your previous peak on a table, Amazon Keyspaces automatically allocates more capacity as your traffic volume increases. This helps ensure that your table has enough throughput capacity to process the additional requests. However, you might observe insufficient throughput capacity errors if you exceed double your previous peak within 30 minutes. For example, suppose that your application's traffic pattern varies between 5,000 and 10,000 strongly consistent reads per second, where 20,000 reads per second is the previously reached traffic peak. In this case, the service recommends that you space your traffic growth over at least 30 minutes before driving up to 40,000 reads per second.

To learn how to estimate the read and write capacity consumption of a table, see the section called “Estimate capacity consumption”. To learn more about default quotas for your account and how to increase them, see Quotas.

Initial throughput for on-demand capacity mode

If you create a new table with on-demand capacity mode enabled or switch an existing table to on-demand capacity mode for the first time, the table has the following previous peak settings, even though it hasn't served traffic previously using on-demand capacity mode:

• Newly created table with on-demand capacity mode: The previous peak is 2,000 WRUs and 6,000 RRUs. You can drive up to double the previous peak immediately. Doing this enables newly created on-demand tables to serve up to 4,000 WRUs and 12,000 RRUs.
• Existing table switched to on-demand capacity mode: The previous peak is half the previous WCUs and RCUs provisioned for the table or the settings for a newly created table with on-demand capacity mode, whichever is higher.

Configure provisioned throughput capacity mode

If you choose provisioned throughput capacity mode, you specify the number of reads and writes per second that are required for your application.
This helps you manage your Amazon Keyspaces usage to stay at or below a defined request rate to maintain predictability. To learn more about automatic scaling for provisioned throughput, see the section called “Manage throughput capacity with auto scaling”.

Provisioned throughput capacity mode is a good option if any of the following is true:

• You have predictable application traffic.
• You run applications whose traffic is consistent or ramps up gradually.
• You can forecast capacity requirements.

Read capacity units and write capacity units

For provisioned throughput capacity mode tables, you specify throughput capacity in terms of read capacity units (RCUs) and write capacity units (WCUs):

• One RCU represents one LOCAL_QUORUM read per second, or two LOCAL_ONE reads per second, for a row up to 4 KB in size. If you need to read a row that is larger than 4 KB, the read operation uses additional RCUs. The total number of RCUs required depends on the row size, and whether you want LOCAL_QUORUM or LOCAL_ONE reads. For example, if your row size is 8 KB, you require 2 RCUs to sustain one LOCAL_QUORUM read per second, and 1 RCU if you choose LOCAL_ONE reads.
• One WCU represents one write per second for a row up to 1 KB in size. All writes use LOCAL_QUORUM consistency, and there is no additional charge for using lightweight transactions (LWTs). If you need to write a row that is larger than 1 KB, the write operation uses additional WCUs. The total number of WCUs required depends on the row size. For example, if your row size is 2 KB, you require 2 WCUs to sustain one write request per second.

For more information about
how to estimate read and write capacity consumption of a table, see the section called “Estimate capacity consumption”.

If your application reads or writes larger rows (up to the Amazon Keyspaces maximum row size of 1 MB), it consumes more capacity units. To learn more about how to estimate the row size, see the section called “Estimate row size”. For example, suppose that you create a provisioned table with 6 RCUs and 6 WCUs. With these settings, your application could do the following:

• Perform LOCAL_QUORUM reads of up to 24 KB per second (4 KB × 6 RCUs).
• Perform LOCAL_ONE reads of up to 48 KB per second (twice as much read throughput).
• Write up to 6 KB per second (1 KB × 6 WCUs).
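A minimal sketch of the throughput math behind these bullets; the 6 RCU / 6 WCU values are the example's, not recommendations.

rcus, wcus = 6, 6

local_quorum_read_kb = 4 * rcus  # 24 KB of LOCAL_QUORUM reads per second
local_one_read_kb = 8 * rcus     # 48 KB of LOCAL_ONE reads per second
write_kb = 1 * wcus              # 6 KB of writes per second
print(local_quorum_read_kb, local_one_read_kb, write_kb)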
Provisioned throughput is the maximum amount of throughput capacity an application can consume from a table. If your application exceeds your provisioned throughput capacity, you might observe insufficient capacity errors. For example, a read request that doesn't have enough throughput capacity fails with a Read_Timeout exception and is posted to the ReadThrottleEvents metric. A write request that doesn't have enough throughput capacity fails with a Write_Timeout exception and is posted to the WriteThrottleEvents metric. You can use Amazon CloudWatch to monitor your provisioned and actual throughput metrics and insufficient capacity events. For more information about these metrics, see the section called “Metrics and dimensions”.

Note
Repeated errors due to insufficient capacity can lead to client-side driver specific exceptions. For example, the DataStax Java driver fails with a NoHostAvailableException.

To change the throughput capacity settings for tables, you can use the AWS Management Console or the ALTER TABLE statement using CQL. For more information, see the section called “ALTER TABLE”. To learn more about default quotas for your account and how to increase them, see Quotas.

View the capacity mode of a table in Amazon Keyspaces

You can query the system table in the Amazon Keyspaces system keyspace to review capacity mode information about a table. You can also see whether a table is using on-demand or provisioned throughput capacity mode. If the table is configured with provisioned throughput capacity mode, you can see the throughput capacity provisioned for the table. You can also use the AWS CLI to view the capacity mode of a table. To change the provisioned throughput of a table, see the section called “Change capacity mode”.

Cassandra Query Language (CQL)

Example

SELECT * from system_schema_mcs.tables where keyspace_name = 'mykeyspace' and table_name = 'mytable';

A table configured with on-demand capacity mode returns the following.

{
  "capacity_mode":{
    "last_update_to_pay_per_request_timestamp":"1579551547603",
    "throughput_mode":"PAY_PER_REQUEST"
  }
}

A table configured with provisioned throughput capacity mode returns the following.

{
  "capacity_mode":{
    "last_update_to_pay_per_request_timestamp":"1579048006000",
    "read_capacity_units":"5000",
    "throughput_mode":"PROVISIONED",
    "write_capacity_units":"6000"
  }
}

The last_update_to_pay_per_request_timestamp value is measured in milliseconds.

CLI

View a table's throughput capacity mode using the AWS CLI

aws keyspaces get-table --keyspace-name myKeyspace --table-name myTable

The output of the command can look similar to this for a table in provisioned capacity mode.

"capacitySpecification": {
  "throughputMode": "PROVISIONED",
  "readCapacityUnits": 4000,
  "writeCapacityUnits": 2000
}

The output for a table in on-demand mode looks like this.

"capacitySpecification": {
  "throughputMode": "PAY_PER_REQUEST",
  "lastUpdateToPayPerRequestTimestamp": "2024-10-03T10:48:19.092000+00:00"
}
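You can also read the same capacity information programmatically. The following Python sketch uses the AWS SDK for Python (Boto3); it assumes your credentials and Region are configured, and the keyspace and table names are placeholders.

import boto3

# Sketch: read a table's capacity mode with Boto3
client = boto3.client("keyspaces")

response = client.get_table(keyspaceName="mykeyspace", tableName="mytable")
spec = response["capacitySpecification"]

print(spec["throughputMode"])  # PAY_PER_REQUEST or PROVISIONED
if spec["throughputMode"] == "PROVISIONED":
    print(spec["readCapacityUnits"], spec["writeCapacityUnits"])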
Change capacity mode

When you switch a table from provisioned capacity mode to on-demand capacity mode, Amazon Keyspaces makes several changes to the structure of your table and partitions. This process can take several minutes. During the switching period, your table delivers throughput that is consistent with the previously provisioned WCU and RCU amounts. When you switch from on-demand capacity mode back to provisioned capacity mode, your table delivers throughput that is consistent with the previous peak reached when the table was set to on-demand capacity mode.

The following waiting periods apply when you switch capacity modes:

• You can switch a newly created table in on-demand mode to provisioned capacity mode at any time. However, you can only switch it back to on-demand mode 24 hours after the table's creation timestamp.
• You can switch an existing table in on-demand mode to provisioned capacity mode at any time. However, you can switch capacity modes from provisioned to on-demand only once in a 24-hour period.

Cassandra Query Language (CQL)

Change a table's throughput capacity mode using CQL

1. To change a table's capacity mode to PROVISIONED, you have to configure the read capacity and write capacity units based on your workload's expected peak values. The following statement is an example of this. You can also run this statement to adjust the read capacity or the write capacity units of the table.

ALTER TABLE catalog.book_awards WITH CUSTOM_PROPERTIES={'capacity_mode': {'throughput_mode': 'PROVISIONED', 'read_capacity_units': 6000, 'write_capacity_units': 3000}};

To configure provisioned capacity mode with auto scaling, see the section called “Configure automatic scaling on an existing table”.

2. To change the capacity mode of a table to on-demand mode, set the throughput mode to PAY_PER_REQUEST. The following statement is an example of this.

ALTER TABLE catalog.book_awards WITH CUSTOM_PROPERTIES={'capacity_mode': {'throughput_mode': 'PAY_PER_REQUEST'}};

3. You can use the following statement to confirm the table's capacity mode.

SELECT * from system_schema_mcs.tables where keyspace_name = 'catalog' and table_name = 'book_awards';

A table configured with on-demand capacity mode returns the following.

{
  "capacity_mode":{
    "last_update_to_pay_per_request_timestamp":"1727952499092",
    "throughput_mode":"PAY_PER_REQUEST"
  }
}

The last_update_to_pay_per_request_timestamp value is measured in milliseconds.

CLI

Change a table's throughput capacity mode using the AWS CLI

1. To change the table's capacity mode to PROVISIONED, you have to configure the read capacity and write capacity units based on the expected peak values of your workload. The following command is an example of this. You can also run this command to adjust the read capacity or the write capacity units of the table.

aws keyspaces update-table --keyspace-name catalog --table-name book_awards \
  --capacity-specification throughputMode=PROVISIONED,readCapacityUnits=6000,writeCapacityUnits=3000

To configure provisioned capacity mode with auto scaling, see the section called “Configure automatic scaling on an existing table”.

2. To change the capacity mode of a table to on-demand mode, you set the throughput mode to PAY_PER_REQUEST. The following command is an example of this.
aws keyspaces update-table --keyspace-name catalog --table-name book_awards \
  --capacity-specification throughputMode=PAY_PER_REQUEST

3. You can use the following command to review the capacity mode that's configured for a table.

aws keyspaces get-table --keyspace-name catalog --table-name book_awards

The output for a table in on-demand mode looks like this.

"capacitySpecification": {
  "throughputMode": "PAY_PER_REQUEST",
  "lastUpdateToPayPerRequestTimestamp": "2024-10-03T10:48:19.092000+00:00"
}

Pre-warm a new table for on-demand capacity mode in Amazon Keyspaces

Amazon Keyspaces automatically scales storage partitions based on throughput, but for new tables or new throughput peaks, it can take longer to allocate the required storage partitions. To ensure that tables in on-demand and provisioned capacity mode have enough storage partitions to support a sudden higher throughput, you can pre-warm a new or existing table. A common scenario for pre-warming a new table is when you're migrating data from another database, which may require loading terabytes of data in a short period of time.

For on-demand tables, Amazon Keyspaces automatically allocates more capacity as your traffic volume increases. New on-demand tables can sustain up to 4,000 writes per second and 12,000 strongly consistent reads or 24,000 eventually consistent reads per second. An on-demand table grows traffic based on previously recorded throughput over time. If you anticipate a spike in peak capacity that exceeds the settings for new tables, you can pre-warm the table to the peak capacity of the expected spike.

To pre-warm a new table for on-demand capacity mode in Amazon Keyspaces, you can follow these steps. To pre-warm an existing table, see the section called “Pre-warm an existing table for on-demand capacity”. Before you get started, review your account and table quotas for provisioned mode and adjust them as needed.

Console

How to pre-warm a new table for on-demand capacity mode

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Tables, and then choose Create table.
3. On the Create table
page in the Table details section, select a keyspace and provide a name for the new table.
4. In the Columns section, create the schema for your table.
5. In the Primary key section, define the primary key of the table and select optional clustering columns.
6. In the Table settings section, choose Customize settings.
7. Continue to Read/write capacity settings.
8. For Capacity mode, choose Provisioned.
9. In the Read capacity section, deselect Scale automatically. Set the table's Provisioned capacity units to the expected peak value.
10. In the Write capacity section, choose the same settings as defined in the previous step for read capacity, or configure capacity values manually.
11. Choose Create table. Your table is created with the specified capacity settings.
12. When the table's status turns to Active, you can switch the table to On-demand capacity mode.

Cassandra Query Language (CQL)

Pre-warm a new table for on-demand mode using CQL

1. Create a new table in provisioned mode and specify the expected peak capacity for reads and writes for the new table. The following statement is an example of this.

CREATE TABLE catalog.book_awards (
  year int,
  award text,
  rank int,
  category text,
  book_title text,
  author text,
  publisher text,
  PRIMARY KEY ((year, award), category, rank))
WITH CUSTOM_PROPERTIES = {
  'capacity_mode': {
    'throughput_mode': 'PROVISIONED',
    'read_capacity_units': 18000,
    'write_capacity_units': 6000
  }
};

2. Confirm the status of the table. You can use the following statement.

SELECT keyspace_name, table_name, status FROM system_schema_mcs.tables WHERE keyspace_name = 'catalog' AND table_name = 'book_awards';

keyspace_name | table_name  | status
---------------+-------------+--------
      catalog  | book_awards | ACTIVE

(1 rows)

3. When the table's status is ACTIVE, you can change the capacity mode of the table to on-demand mode by setting the throughput mode to PAY_PER_REQUEST. The following statement is an example of this.

ALTER TABLE catalog.book_awards WITH CUSTOM_PROPERTIES={'capacity_mode': {'throughput_mode': 'PAY_PER_REQUEST'}};

4. You can use the following statement to confirm that the table is now in on-demand mode and see the table's status.

SELECT * from system_schema_mcs.tables where keyspace_name = 'catalog' and table_name = 'book_awards';

CLI

Pre-warm a new table for on-demand capacity mode using the AWS CLI

1. Create a new table in provisioned mode and specify the expected peak capacity values for reads and writes for the new table. The following command is an example of this.
aws keyspaces create-table --keyspace-name catalog --table-name book_awards \
  --schema-definition 'allColumns=[{name=pk,type=int},{name=ck,type=int}],partitionKeys=[{name=pk},{name=ck}]' \
  --capacity-specification throughputMode=PROVISIONED,readCapacityUnits=18000,writeCapacityUnits=6000

2. Confirm the status of the table. You can use the following command.

aws keyspaces get-table --keyspace-name catalog --table-name book_awards

3. When the table is active and the capacity has been provisioned, you can change the table to on-demand mode. The following is an example of this.

aws keyspaces update-table --keyspace-name catalog --table-name book_awards \
  --capacity-specification throughputMode=PAY_PER_REQUEST

4. You can use the following command to confirm that the table is now in on-demand mode and see the table's status.

aws keyspaces get-table --keyspace-name catalog --table-name book_awards

When the table is active in on-demand capacity mode, it's prepared to handle a similar throughput capacity as before in provisioned capacity mode.

Pre-warm an existing table for on-demand capacity mode in Amazon Keyspaces

Amazon Keyspaces automatically scales storage partitions based on throughput, but for new tables or new throughput peaks, it can take longer to allocate the required storage partitions. To ensure that tables in on-demand and provisioned capacity mode have enough storage partitions to support a sudden higher throughput, you can pre-warm a new or existing table.

If you anticipate a spike in peak capacity for your table that is twice as high as the previous peak within the same 30 minutes, you can pre-warm the table to the peak capacity of the expected spike.

To pre-warm an existing on-demand table in Amazon Keyspaces, you can follow these steps. To pre-warm a new table, see the section called “Pre-warm a new table for on-demand capacity”.

Before you get started, review your account and table quotas for provisioned mode
and adjust them as needed. Next, review the required waiting periods between changing capacity modes. Note that you'll incur costs for the provisioned capacity until the table is back in on-demand mode.

Console

How to pre-warm an existing table in on-demand mode

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. Choose the table that you want to work with, and go to the Capacity tab.
3. In the Capacity settings section, choose Edit.
4. Under Capacity mode, change the table to Provisioned capacity mode.
5. In the Read capacity section, deselect Scale automatically. Set the table's Provisioned capacity units to the expected peak value.
6. In the Write capacity section, choose the same settings as defined in the previous step for read capacity, or configure capacity values manually.
7. When the provisioned capacity settings are defined, choose Save. After you save changes, the table's status shows as Updating... until the capacity is provisioned. Note that for large tables, the pre-warming process can take some time, because the data needs to be divided across partitions. During this time, you can continue to access the table and expect the previously configured peak capacity to be available.
8. When the table's status turns to Active, you can switch the table back to On-demand capacity mode.

Cassandra Query Language (CQL)

Pre-warm an existing table for on-demand mode using CQL

1. Change the table's capacity mode to PROVISIONED and configure the read capacity and write capacity based on your expected peak values.

ALTER TABLE catalog.book_awards WITH CUSTOM_PROPERTIES={'capacity_mode': {'throughput_mode': 'PROVISIONED', 'read_capacity_units': 18000, 'write_capacity_units': 6000}};

2. Confirm that the table is active. The following statement is an example.

SELECT * from system_schema_mcs.tables where keyspace_name = 'catalog' and table_name = 'book_awards';

3. When the table's status is ACTIVE, you can change the capacity mode of the table to on-demand mode by setting the throughput mode to PAY_PER_REQUEST. The following statement is an example of this.

ALTER TABLE catalog.book_awards WITH CUSTOM_PROPERTIES={'capacity_mode': {'throughput_mode': 'PAY_PER_REQUEST'}};

4. You can use the following statement to confirm that the table is now in on-demand mode and see the table's status.

SELECT * from system_schema_mcs.tables where keyspace_name = 'catalog' and table_name = 'book_awards';

CLI

Pre-warm an existing table for on-demand mode using the AWS CLI

1.
Change the table's capacity mode to PROVISIONED and configure the read capacity and write capacity based on your expected peak values. The following command is an example of this.

aws keyspaces update-table --keyspace-name catalog --table-name book_awards \
  --capacity-specification throughputMode=PROVISIONED,readCapacityUnits=18000,writeCapacityUnits=6000

2. Confirm that the status of the table is active and that the capacity has been provisioned. You can use the following command.

aws keyspaces get-table --keyspace-name catalog --table-name book_awards

3. When the table's status is ACTIVE and the capacity has been provisioned, you can change the capacity mode of the table to on-demand mode by setting the throughput mode to PAY_PER_REQUEST. The following command is an example of this.

aws keyspaces update-table --keyspace-name catalog --table-name book_awards \
  --capacity-specification throughputMode=PAY_PER_REQUEST

4. You can use the following command to confirm that the table is now in on-demand mode and see the table's status.

aws keyspaces get-table --keyspace-name catalog --table-name book_awards

When the table is active in on-demand capacity mode, it's prepared to handle a similar throughput capacity as before in provisioned capacity mode.
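The same pre-warm sequence can be scripted. The following Python sketch uses Boto3 to provision the expected peak, poll until the table is active, and switch back to on-demand mode; capacity values and names are placeholders, error handling is omitted, and the switch back is subject to the waiting periods noted earlier.

import time
import boto3

client = boto3.client("keyspaces")
KS, TBL = "catalog", "book_awards"

def wait_until_active():
    # Poll the table status until Amazon Keyspaces reports ACTIVE
    while client.get_table(keyspaceName=KS, tableName=TBL)["status"] != "ACTIVE":
        time.sleep(10)

# Step 1: provision to the expected peak to allocate storage partitions
client.update_table(
    keyspaceName=KS, tableName=TBL,
    capacitySpecification={
        "throughputMode": "PROVISIONED",
        "readCapacityUnits": 18000,
        "writeCapacityUnits": 6000,
    },
)
wait_until_active()

# Step 2: switch back to on-demand mode once the capacity is provisioned
client.update_table(
    keyspaceName=KS, tableName=TBL,
    capacitySpecification={"throughputMode": "PAY_PER_REQUEST"},
)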
unhappy customers. These kinds of workloads often require manual intervention to scale database resources up or down in response to varying usage levels. Amazon Keyspaces (for Apache Cassandra) helps you provision throughput capacity efficiently for variable workloads by adjusting throughput capacity automatically in response to actual application traffic. Amazon Keyspaces uses the Application Auto Scaling service to increase and decrease a table's read and write capacity on your behalf. For more information about Application Auto Scaling, see the Application Auto Scaling User Guide.

Note
To get started with Amazon Keyspaces automatic scaling quickly, see the section called “Configure and update auto scaling policies”.

How Amazon Keyspaces automatic scaling works
The following diagram provides a high-level overview of how Amazon Keyspaces automatic scaling manages throughput capacity for a table.

To enable automatic scaling for a table, you create a scaling policy. The scaling policy specifies whether you want to scale read capacity or write capacity (or both), and the minimum and maximum provisioned capacity unit settings for the table. The scaling policy also defines a target utilization. Target utilization is the ratio of consumed capacity units to provisioned capacity units at a point in time, expressed as a percentage. Automatic scaling uses a target tracking algorithm to adjust the provisioned throughput of the table upward or downward in response to actual workloads. It does this so that the actual capacity utilization remains at or near your target utilization.

You can set the automatic scaling target utilization values between 20 and 90 percent for your read and write capacity. The default target utilization rate is 70 percent. You can set the target utilization to be a lower percentage if your traffic changes quickly and you want capacity to begin scaling up sooner. You can also set the target utilization rate to a higher rate if your application traffic changes more slowly and you want to reduce the cost of throughput. For more information about scaling policies, see Target tracking scaling policies for Application Auto Scaling in the Application Auto Scaling User Guide.

When you create a scaling policy, Amazon Keyspaces creates two pairs of Amazon CloudWatch alarms on your behalf. Each pair represents your upper and lower boundaries for provisioned and consumed throughput settings. These CloudWatch alarms are triggered when the table's actual utilization deviates from your target utilization for a sustained period of time. To learn more about Amazon CloudWatch, see the Amazon CloudWatch User Guide.
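If you want to inspect the alarms that were created on your behalf, you can list them with the AWS CLI. The following is a minimal sketch; it assumes the default naming that Application Auto Scaling target tracking uses for its alarms (names that contain the table's resource ID, such as keyspace/mykeyspace/table/mytable), which may vary, and mykeyspace and mytable are placeholder names.

aws cloudwatch describe-alarms \
    --query 'MetricAlarms[?contains(AlarmName, `table/mytable`)].[AlarmName, MetricName, Threshold]' \
    --output table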
When one of the CloudWatch alarms is triggered, Amazon Simple Notification Service (Amazon SNS) sends you a notification (if you have enabled it). The CloudWatch alarm then invokes Application Auto Scaling to evaluate your scaling policy. This in turn issues an Alter Table request to Amazon Keyspaces to adjust the table's provisioned capacity upward or downward as appropriate. To learn more about Amazon SNS notifications, see Setting up Amazon SNS notifications. Amazon Keyspaces processes the Alter Table request by increasing (or decreasing) the table's provisioned throughput capacity so that it approaches your target utilization.

Note
Amazon Keyspaces auto scaling modifies provisioned throughput settings only when the actual workload stays elevated (or depressed) for a sustained period of several minutes. The target tracking algorithm seeks to keep the target utilization at or near your chosen value over the long term. Sudden, short-duration spikes of activity are accommodated by the table's built-in burst capacity.

How auto scaling works for multi-Region tables
To ensure that there's always enough read and write capacity for all table replicas in all AWS Regions of a multi-Region table in provisioned capacity mode, we recommend that you configure Amazon Keyspaces auto scaling. When you use a multi-Region table in provisioned mode with auto scaling, you can't disable auto scaling for a single table replica. But you can adjust the table's read auto scaling settings for different Regions. For example, you can specify different read capacity and read auto scaling settings for each Region that the table is replicated in. The read auto scaling settings that you configure for a table replica in a specified Region
overwrite the general auto scaling settings of the table. The write capacity, however, has to remain synchronized across all table replicas to ensure that there's enough capacity to replicate writes in all Regions. Amazon Keyspaces auto scaling independently updates the provisioned capacity of the table in each AWS Region based on the usage in that Region. As a result, the provisioned capacity in each Region for a multi-Region table might be different when auto scaling is active.

You can configure the auto scaling settings of a multi-Region table and its replicas using the Amazon Keyspaces console, API, AWS CLI, or CQL. For more information on how to create and update auto scaling settings for multi-Region tables, see the section called “Update provisioned capacity and auto scaling settings for a multi-Region table”.

Note
If you use auto scaling for multi-Region tables, you must always use Amazon Keyspaces API operations to configure auto scaling settings. If you use Application Auto Scaling API operations directly to configure auto scaling settings, you don't have the ability to specify the AWS Regions of the multi-Region table. This can result in unsupported configurations.

Usage notes
Before you begin using Amazon Keyspaces automatic scaling, you should be aware of the following:
• Amazon Keyspaces automatic scaling can increase read capacity or write capacity as often as necessary, in accordance with your scaling policy. All Amazon Keyspaces quotas remain in effect, as described in Quotas.
• Amazon Keyspaces automatic scaling doesn't prevent you from manually modifying provisioned throughput settings. These manual adjustments don't affect any existing CloudWatch alarms that are attached to the scaling policy.
• If you use the console to create a table with provisioned throughput capacity, Amazon Keyspaces automatic scaling is enabled by default. You can modify your automatic scaling settings at any time. For more information, see the section called “Turn off Amazon Keyspaces auto scaling for a table”.
• If you're using AWS CloudFormation to create scaling policies, you should manage the scaling policies from AWS CloudFormation so that the stack is in sync with the stack template. If you change scaling policies from Amazon Keyspaces, they will get overwritten with the original values from the AWS CloudFormation stack template when the stack is reset.
• If you use CloudTrail to monitor Amazon Keyspaces automatic scaling, you might see alerts for calls made by Application Auto Scaling as part of its configuration validation process. You can filter out these alerts by using the invokedBy field, which contains application-autoscaling.amazonaws.com for these validation checks.
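As an illustration of that last point, the following is a minimal sketch of how you might surface only the calls made by Application Auto Scaling so that you can recognize and disregard them. It assumes a Linux shell with jq installed; the exact fields present in each record can vary by event type.

aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventSource,AttributeValue=cassandra.amazonaws.com \
    --max-results 50 \
    --query 'Events[].CloudTrailEvent' --output json \
| jq -r '.[] | fromjson
    | select(.userIdentity.invokedBy == "application-autoscaling.amazonaws.com")
    | .eventName'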
Configure and update Amazon Keyspaces automatic scaling policies
You can use the console, CQL, or the AWS Command Line Interface (AWS CLI) to configure Amazon Keyspaces automatic scaling for new and existing tables. You can also modify automatic scaling settings or disable automatic scaling. For more advanced features like setting scale-in and scale-out cooldown times, we recommend that you use CQL or the AWS CLI to manage Amazon Keyspaces scaling policies.

Topics
• Configure permissions for Amazon Keyspaces automatic scaling
• Create a new table with automatic scaling
• Configure automatic scaling on an existing table
• View your table's Amazon Keyspaces auto scaling configuration
• Turn off Amazon Keyspaces auto scaling for a table
• View auto scaling activity for an Amazon Keyspaces table in Amazon CloudWatch

Configure permissions for Amazon Keyspaces automatic scaling
To get started, confirm that the principal has the appropriate permissions to create and manage automatic scaling settings. In AWS Identity and Access Management (IAM), the AWS managed policy AmazonKeyspacesFullAccess is required to manage Amazon Keyspaces scaling policies.

Important
application-autoscaling:* permissions are required to disable automatic scaling on a table. You must turn off auto scaling for a table before you can delete it.

To set up an IAM user or role for Amazon Keyspaces console access and Amazon Keyspaces automatic scaling, add the following policy.

To attach the AmazonKeyspacesFullAccess policy
1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
2. On the IAM console dashboard, choose Users, and then choose your IAM user or role from the list.
3. On the Summary page, choose Add permissions.
4. Choose Attach existing policies directly.
5. From the list of policies, choose AmazonKeyspacesFullAccess, and then choose Next: Review.
6. Choose Add permissions.

Create a new table with automatic scaling
When you create a new Amazon Keyspaces table, you can automatically enable auto scaling for the table's write or read capacity. This allows Amazon Keyspaces to contact Application Auto Scaling on your behalf to register the table as a scalable target and adjust the provisioned write or read capacity. For more information on how to create a multi-Region table and configure different auto scaling settings for table replicas, see the section called “Create a multi-Region table in provisioned mode”.

Note
Amazon Keyspaces automatic scaling requires the presence of a service-linked role (AWSServiceRoleForApplicationAutoScaling_CassandraTable) that performs automatic scaling actions on your behalf. This role is created automatically for you. For more information, see the section called “Using service-linked roles”.

Console
Create a new table with automatic scaling enabled using the console
1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Tables, and then choose Create table.
3. On the Create table page in the Table details section, select a keyspace and provide a name for the new table.
4. In the Columns section, create the schema for your table.
5. In the Primary key section, define the primary key of the table and select optional clustering columns.
6. In the Table settings section, choose Customize settings.
7. Continue to Read/write capacity settings.
8. For Capacity mode, choose Provisioned.
9. In the Read capacity section, confirm that Scale automatically is selected. In this step, you select the minimum and maximum read capacity units for the table, as well as the target utilization.
• Minimum capacity units – Enter the value for the minimum level of throughput that the table should always be ready to support. The value must be between 1 and the maximum throughput per second quota for your account (40,000 by default).
• Maximum capacity units – Enter the maximum amount of throughput you want to provision for the table. The value must be between 1 and the maximum throughput per second quota for your account (40,000 by default).
• Target utilization – Enter a target utilization rate between 20% and 90%.
When traffic exceeds the defined target utilization rate, capacity is automatically scaled up. When traffic falls below the defined target, it is automatically scaled down again.

Note
To learn more about default quotas for your account and how to increase them, see Quotas.

10. In the Write capacity section, choose the same settings as defined in the previous step for read capacity, or configure capacity values manually.
11. Choose Create table. Your table is created with the specified automatic scaling parameters.

Cassandra Query Language (CQL)
Create a new table with Amazon Keyspaces automatic scaling using CQL
To configure auto scaling settings for a table programmatically, you use the AUTOSCALING_SETTINGS statement that contains the parameters for Amazon Keyspaces auto scaling. The parameters define the conditions that direct Amazon Keyspaces to adjust your table's provisioned throughput, and what additional optional actions to take. In this example, you define the auto scaling settings for mytable.

The policy contains the following elements:
• AUTOSCALING_SETTINGS – Specifies if Amazon Keyspaces is allowed to adjust throughput capacity on your behalf. The following values are required:
• provisioned_write_capacity_autoscaling_update: minimum_units and maximum_units
• provisioned_read_capacity_autoscaling_update: minimum_units and maximum_units
• scaling_policy – Amazon Keyspaces supports the target tracking policy. To define the target tracking policy, you configure the following parameters.
• target_value – Amazon Keyspaces auto scaling ensures that the ratio of consumed capacity to provisioned capacity stays at or near this value. You define target_value as a percentage.
• disableScaleIn: (Optional) A boolean that specifies if scale-in is disabled or enabled for the table. This parameter is disabled by default. To turn on scale-in, set the boolean value to FALSE. This means that capacity is automatically scaled down for a table on your behalf.
• scale_out_cooldown – A scale-out activity increases the provisioned throughput of your table. To add a cooldown period for scale-out activities, specify a value, in seconds, for scale_out_cooldown. If you don't specify a value, the default value is 0. For more information about target tracking and cooldown periods, see Target Tracking Scaling Policies in the Application Auto Scaling User Guide.
• scale_in_cooldown – A scale-in activity decreases the provisioned throughput of your table. To add a cooldown period for scale-in activities, specify a value, in seconds, for scale_in_cooldown. If you don't specify a value, the default value is 0. For more information about target tracking and cooldown periods, see Target Tracking Scaling Policies in the Application Auto Scaling User Guide.

Note
To further understand how target_value works, suppose that you have a table with a provisioned throughput setting of 200 write capacity units. You decide to create a scaling policy for this table, with a target_value of 70 percent. Now suppose that you begin driving write traffic to the table so that the actual write throughput is 150 capacity units. The consumed-to-provisioned ratio is now (150 / 200), or 75 percent. This ratio exceeds your target, so auto scaling increases the provisioned write capacity to 215 so that the ratio is (150 / 215), or 69.77 percent—as close to your target_value as possible, but not exceeding it.

For mytable, you set target_value for both read and write capacity to 50 percent. Amazon Keyspaces auto scaling adjusts the table's provisioned throughput within the range of 5–10 capacity units so that the consumed-to-provisioned ratio remains at or near 50 percent. For read capacity, you set the values for scale_out_cooldown and scale_in_cooldown to 60 seconds. You can use the following statement to create a new Amazon Keyspaces table with auto scaling enabled.
CREATE TABLE mykeyspace.mytable(pk int, ck int, PRIMARY KEY (pk, ck))
WITH CUSTOM_PROPERTIES = {
    'capacity_mode': {
        'throughput_mode': 'PROVISIONED',
        'read_capacity_units': 1,
        'write_capacity_units': 1
    }
} AND AUTOSCALING_SETTINGS = {
    'provisioned_write_capacity_autoscaling_update': {
        'maximum_units': 10,
        'minimum_units': 5,
        'scaling_policy': {
            'target_tracking_scaling_policy_configuration': {
                'target_value': 50
            }
        }
    },
    'provisioned_read_capacity_autoscaling_update': {
        'maximum_units': 10,
        'minimum_units': 5,
        'scaling_policy': {
            'target_tracking_scaling_policy_configuration': {
                'target_value': 50,
                'scale_in_cooldown': 60,
                'scale_out_cooldown': 60
            }
        }
    }
};

CLI
Create a new table with Amazon Keyspaces automatic scaling using the AWS CLI
To configure auto scaling settings for a table programmatically, you use the autoScalingSpecification action that defines the parameters for Amazon Keyspaces auto scaling. The parameters define the conditions that direct Amazon Keyspaces to adjust your table's provisioned throughput, and what additional optional actions to take. In this example, you define the auto scaling settings for mytable.

The policy contains the following elements:
• autoScalingSpecification – Specifies if Amazon Keyspaces is allowed to adjust capacity throughput on your behalf. You can enable auto scaling for read and for write capacity separately. Then you must specify the following parameters for autoScalingSpecification:
• writeCapacityAutoScaling – The maximum and minimum write capacity units.
• readCapacityAutoScaling – The maximum and minimum read capacity units.
• scalingPolicy – Amazon Keyspaces supports the target tracking policy. To define the target tracking policy, you configure the following parameters.
• targetValue – Amazon Keyspaces auto scaling ensures that the ratio of consumed capacity to provisioned capacity stays at or near this value. You define targetValue as a percentage.
• disableScaleIn: (Optional) A boolean that specifies if scale-in is disabled or enabled for the table. This parameter is disabled by default. To turn on scale-in, set the boolean value to FALSE. This means that capacity is automatically scaled down for a table on your behalf.
• scaleOutCooldown – A scale-out activity increases the provisioned throughput of your table. To add a cooldown period for scale-out activities, specify a value, in seconds, for ScaleOutCooldown. The default value is 0. For more information about target tracking and cooldown periods, see Target Tracking Scaling Policies in the Application Auto Scaling User Guide.
• scaleInCooldown – A scale-in activity decreases the provisioned throughput of your table. To add a cooldown period for scale-in activities, specify a value, in seconds, for ScaleInCooldown. The default value is 0. For more information about target tracking and cooldown periods, see Target Tracking Scaling Policies in the Application Auto Scaling User Guide.

Note
To further understand how TargetValue works, suppose that you have a table with a provisioned throughput setting of 200 write capacity units. You decide to create a scaling policy for this table, with a TargetValue of 70 percent. Now suppose that you begin driving write traffic to the table so that the actual write throughput is 150 capacity units. The consumed-to-provisioned ratio is now (150 / 200), or 75 percent. This ratio exceeds your target, so auto scaling increases the provisioned write capacity to 215 so that the ratio is (150 / 215), or 69.77 percent—as close to your TargetValue as possible, but not exceeding it.

For mytable, you set TargetValue for both read and write capacity to 50 percent. Amazon Keyspaces auto scaling adjusts the table's provisioned throughput within the range of 5–10 capacity units so that the consumed-to-provisioned ratio remains at or near 50 percent. For read capacity, you set the values for ScaleOutCooldown and ScaleInCooldown to 60 seconds.

When creating tables with complex auto scaling settings, it's helpful to load the auto scaling settings from a JSON file. For the following example, you can download the example JSON file from auto-scaling.zip and extract auto-scaling.json, taking note of the path to the file. In this example, the JSON file is located in the current directory. For different file path options, see How to load parameters from a file.

aws keyspaces create-table --keyspace-name mykeyspace --table-name mytable \
    --schema-definition 'allColumns=[{name=pk,type=int},{name=ck,type=int}],partitionKeys=[{name=pk},{name=ck}]' \
    --capacity-specification throughputMode=PROVISIONED,readCapacityUnits=1,writeCapacityUnits=1 \
    --auto-scaling-specification file://auto-scaling.json

Configure automatic scaling on an existing table
You can update an existing Amazon Keyspaces table to turn on auto scaling for the table's write or read capacity. If you're updating a table that is currently in on-demand capacity mode, then you first have to change the table's capacity mode to provisioned capacity mode. For more information on how to update auto scaling settings for a multi-Region table, see the section called “Update provisioned capacity and auto scaling settings for a multi-Region table”.
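The create-table example above loads its settings from auto-scaling.json. For reference, here is a minimal sketch of what such a file could contain. The field names mirror the autoScalingSpecification structure that get-table-auto-scaling-settings returns later in this section; the actual file in auto-scaling.zip may differ in its values.

{
    "writeCapacityAutoScaling": {
        "autoScalingDisabled": false,
        "minimumUnits": 5,
        "maximumUnits": 10,
        "scalingPolicy": {
            "targetTrackingScalingPolicyConfiguration": {
                "disableScaleIn": false,
                "scaleInCooldown": 0,
                "scaleOutCooldown": 0,
                "targetValue": 50
            }
        }
    },
    "readCapacityAutoScaling": {
        "autoScalingDisabled": false,
        "minimumUnits": 5,
        "maximumUnits": 10,
        "scalingPolicy": {
            "targetTrackingScalingPolicyConfiguration": {
                "disableScaleIn": false,
                "scaleInCooldown": 60,
                "scaleOutCooldown": 60,
                "targetValue": 50
            }
        }
    }
}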
Note
Amazon Keyspaces automatic scaling requires the presence of a service-linked role (AWSServiceRoleForApplicationAutoScaling_CassandraTable) that performs automatic scaling actions on your behalf. This role is created automatically for you. For more information, see the section called “Using service-linked roles”.

Console
Configure Amazon Keyspaces automatic scaling for an existing table
1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. Choose the table that you want to work with, and go to the Capacity tab.
3. In the Capacity settings section, choose Edit.
4. Under Capacity mode, make sure that the table is using Provisioned capacity mode.
5. Select Scale automatically and see step 6 in the section called “Create a new table with automatic scaling” to edit read and write capacity.
6. When the automatic scaling settings are defined, choose Save.

Cassandra Query Language (CQL)
Configure an existing table with Amazon Keyspaces automatic scaling using CQL
You can use the ALTER TABLE statement for an existing Amazon Keyspaces table to configure auto scaling for the table's write or read capacity. If you're updating a table that is currently in on-demand capacity mode, you have to set capacity_mode to provisioned. If your table is already in provisioned capacity mode, this field can be omitted. In the following example, the statement updates the table mytable, which is in on-demand capacity mode. The statement changes the capacity mode of the table to provisioned mode with auto scaling enabled. The write capacity is configured within the range of 5–10 capacity units with a target value of 50%. The read capacity is also configured within the range of 5–10 capacity units with
a target value of 50%. For read capacity, you set the values for scale_out_cooldown and scale_in_cooldown to 60 seconds.

ALTER TABLE mykeyspace.mytable
WITH CUSTOM_PROPERTIES = {
    'capacity_mode': {
        'throughput_mode': 'PROVISIONED',
        'read_capacity_units': 1,
        'write_capacity_units': 1
    }
} AND AUTOSCALING_SETTINGS = {
    'provisioned_write_capacity_autoscaling_update': {
        'maximum_units': 10,
        'minimum_units': 5,
        'scaling_policy': {
            'target_tracking_scaling_policy_configuration': {
                'target_value': 50
            }
        }
    },
    'provisioned_read_capacity_autoscaling_update': {
        'maximum_units': 10,
        'minimum_units': 5,
        'scaling_policy': {
            'target_tracking_scaling_policy_configuration': {
                'target_value': 50,
                'scale_in_cooldown': 60,
                'scale_out_cooldown': 60
            }
        }
    }
};

CLI
Configure an existing table with Amazon Keyspaces automatic scaling using the AWS CLI
For an existing Amazon Keyspaces table, you can turn on auto scaling for the table's write or read capacity using the UpdateTable operation. You can use the following command to turn on Amazon Keyspaces auto scaling for an existing table. The auto scaling settings for the table are loaded from a JSON file. For the following example, you can download the example JSON file from auto-scaling.zip and extract auto-scaling.json, taking note of the path to the file. In this example, the JSON file is located in the current directory. For different file path options, see How to load parameters from a file. For more information about the auto scaling settings used in the following example, see the section called “Create a new table with automatic scaling”.

aws keyspaces update-table --keyspace-name mykeyspace --table-name mytable \
    --capacity-specification throughputMode=PROVISIONED,readCapacityUnits=1,writeCapacityUnits=1 \
    --auto-scaling-specification file://auto-scaling.json

View your table's Amazon Keyspaces auto scaling configuration
You can use the console, CQL, or the AWS CLI to view and update the Amazon Keyspaces automatic scaling settings of a table.

Console
View automatic scaling settings using the console
1. Choose the table you want to view and go to the Capacity tab.
2. In the Capacity settings section, choose Edit. You can now modify the settings in the Read capacity or Write capacity sections. For more information about these settings, see the section called “Create a new table with automatic scaling”.

Cassandra Query Language (CQL)
View your table's Amazon Keyspaces automatic scaling policy using CQL
To view details of the auto scaling configuration of a table, use the following command.
SELECT * FROM system_schema_mcs.autoscaling WHERE keyspace_name = 'mykeyspace' AND table_name = 'mytable';

The output for this command looks like this.

 keyspace_name | table_name | provisioned_read_capacity_autoscaling_update | provisioned_write_capacity_autoscaling_update
---------------+------------+----------------------------------------------+-----------------------------------------------
 mykeyspace | mytable | {'minimum_units': 5, 'maximum_units': 10, 'scaling_policy': {'target_tracking_scaling_policy_configuration': {'scale_out_cooldown': 60, 'disable_scale_in': false, 'target_value': 50, 'scale_in_cooldown': 60}}} | {'minimum_units': 5, 'maximum_units': 10, 'scaling_policy': {'target_tracking_scaling_policy_configuration': {'scale_out_cooldown': 0, 'disable_scale_in': false, 'target_value': 50, 'scale_in_cooldown': 0}}}

CLI
View your table's Amazon Keyspaces automatic scaling policy using the AWS CLI
To view the auto scaling configuration of a table, you can use the get-table-auto-scaling-settings operation. The following CLI command is an example of this.

aws keyspaces get-table-auto-scaling-settings --keyspace-name mykeyspace --table-name mytable

The output for this command looks like this.

{
    "keyspaceName": "mykeyspace",
    "tableName": "mytable",
    "resourceArn": "arn:aws:cassandra:us-east-1:5555-5555-5555:/keyspace/mykeyspace/table/mytable",
    "autoScalingSpecification": {
        "writeCapacityAutoScaling": {
            "autoScalingDisabled": false,
            "minimumUnits": 5,
            "maximumUnits": 10,
            "scalingPolicy": {
                "targetTrackingScalingPolicyConfiguration": {
                    "disableScaleIn": false,
                    "scaleInCooldown": 0,
                    "scaleOutCooldown": 0,
                    "targetValue": 50.0
                }
            }
        },
        "readCapacityAutoScaling": {
            "autoScalingDisabled": false,
            "minimumUnits": 5,
            "maximumUnits": 10,
            "scalingPolicy": {
                "targetTrackingScalingPolicyConfiguration": {
                    "disableScaleIn": false,
                    "scaleInCooldown": 60,
                    "scaleOutCooldown": 60,
                    "targetValue": 50.0
                }
            }
        }
    }
}

Turn off Amazon Keyspaces auto scaling for a table
You can turn off Amazon Keyspaces auto scaling for your table at any time. If you no longer need to scale your table's read or write capacity, you should consider turning off auto scaling so that Amazon Keyspaces doesn't continue modifying your table's read or write capacity settings. You can update the table using the console, CQL, or the AWS CLI. Turning off auto scaling also deletes the CloudWatch alarms that were created on your behalf. To delete the service-linked role used by Application Auto Scaling to access your Amazon Keyspaces table, follow the steps in the section called “Deleting a service-linked role for Amazon Keyspaces”.

Note
To delete the
service-linked role that Application Auto Scaling uses, you must disable automatic scaling on all tables in the account across all AWS Regions.

Console
Turn off Amazon Keyspaces automatic scaling for your table using the console
1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. Choose the table you want to update and go to the Capacity tab.
3. In the Capacity settings section, choose Edit.
4. To disable Amazon Keyspaces automatic scaling, clear the Scale automatically check box. Disabling automatic scaling deregisters the table as a scalable target with Application Auto Scaling.

Cassandra Query Language (CQL)
Turn off Amazon Keyspaces automatic scaling for your table using CQL
The following statement turns off auto scaling for write capacity of the table mytable.

ALTER TABLE mykeyspace.mytable
WITH AUTOSCALING_SETTINGS = {
    'provisioned_write_capacity_autoscaling_update': {
        'autoscaling_disabled': true
    }
};

CLI
Turn off Amazon Keyspaces automatic scaling for your table using the AWS CLI
The following command turns off auto scaling for the table's read capacity. It also deletes the CloudWatch alarms that were created on your behalf.

aws keyspaces update-table --keyspace-name mykeyspace --table-name mytable \
    --auto-scaling-specification readCapacityAutoScaling={autoScalingDisabled=true}

View auto scaling activity for an Amazon Keyspaces table in Amazon CloudWatch
You can monitor how Amazon Keyspaces automatic scaling uses resources by using Amazon CloudWatch, which generates metrics about your usage and performance. Follow the steps in the Application Auto Scaling User Guide to create a CloudWatch dashboard.

Use burst capacity effectively in Amazon Keyspaces
Amazon Keyspaces provides some flexibility in your per-partition throughput provisioning by providing burst capacity. Whenever you're not fully using a partition's throughput, Amazon Keyspaces reserves a portion of that unused capacity for later bursts of throughput to handle usage spikes. Amazon Keyspaces currently retains up to 5 minutes (300 seconds) of unused read and write capacity. During an occasional burst of read or write activity, these extra capacity units can be consumed quickly—even faster than the per-second provisioned throughput capacity that you've defined for your table. Amazon Keyspaces can also consume burst capacity for background maintenance and other tasks without prior notice. Note that these burst capacity details might change in the future.
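To make the arithmetic concrete, consider an illustrative example (the numbers are hypothetical and assume the full unused capacity is retained): a partition provisioned with 100 read capacity units that sits completely idle accrues unused capacity at 100 RCUs per second. Because Amazon Keyspaces retains up to 300 seconds of unused capacity, the partition can bank at most

    100 RCUs × 300 seconds = 30,000 read units

of burst capacity, which a sudden spike can then consume on top of the provisioned per-second throughput.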
Working with Amazon Keyspaces (for Apache Cassandra) features
This chapter provides details about working with Amazon Keyspaces and various database features, for example backup and restore, Time to Live, and multi-Region replication.
• Time to Live – Amazon Keyspaces expires data from tables automatically based on the Time to Live value you set. Learn how to configure TTL and how to use it in your tables.
• PITR – Protect your Amazon Keyspaces tables from accidental write or delete operations by creating continuous backups of your table data. Learn how to configure PITR on your tables and how to restore a table to a specific point in time or how to restore a table that has been accidentally deleted.
• Working with multi-Region tables – Multi-Region tables in Amazon Keyspaces must have write throughput capacity configured in either on-demand or provisioned capacity mode with auto scaling. Plan the throughput capacity needs by estimating the required write capacity units (WCUs) for each Region, and provision the sum of writes from all Regions to ensure sufficient capacity for replicated writes.
• Static columns – Amazon Keyspaces handles static columns differently from regular columns. This section covers calculating the encoded size of static columns, metering read/write operations on static data, and guidelines for working with static columns.
• Queries and pagination – Amazon Keyspaces supports advanced querying capabilities like using the IN operator with SELECT statements, ordering results with ORDER BY, and automatic pagination of large result sets. This section explains how Amazon Keyspaces processes these queries and provides examples.
• Partitioners – Amazon Keyspaces provides three partitioners: Murmur3Partitioner (default), RandomPartitioner, and DefaultPartitioner. You can change the partitioner per Region at the account level using the AWS Management Console or Cassandra Query Language (CQL).
• Client-side timestamps – Client-side timestamps are Cassandra-compatible timestamps that Amazon Keyspaces persists for each cell in your table. Use client-side timestamps for conflict resolution and to let your client application determine the order of writes.
• User-defined types (UDTs) – With UDTs you can define data structures in your applications that represent real-world data hierarchies.
• Tagging resources – You can label Amazon Keyspaces resources like keyspaces and tables using tags. Tags help categorize resources, enable cost tracking, and let you configure access control based on tags. This section covers tagging restrictions, operations, and best practices for Amazon Keyspaces.
• AWS CloudFormation templates – AWS CloudFormation helps you model and set up your Amazon Keyspaces keyspaces and tables so that you can spend less time creating and managing your resources and infrastructure.

Topics
• System keyspaces in Amazon Keyspaces
• User-defined types (UDTs) in Amazon Keyspaces
• Working with CQL queries in Amazon Keyspaces
• Working with partitioners in Amazon Keyspaces
• Client-side timestamps in Amazon Keyspaces
• Multi-Region replication for Amazon Keyspaces (for Apache Cassandra)
• Backup and restore data with point-in-time recovery for Amazon Keyspaces
• Expire data with Time to Live (TTL) for Amazon Keyspaces (for Apache Cassandra)
• Using this service with an AWS SDK
• Working with tags and labels for Amazon Keyspaces resources
• Create Amazon Keyspaces resources with AWS CloudFormation
• Using NoSQL Workbench with Amazon Keyspaces (for Apache Cassandra)

System keyspaces in Amazon Keyspaces
This section provides details about working with system keyspaces in Amazon Keyspaces (for Apache Cassandra). Amazon Keyspaces uses four system keyspaces:
• system
• system_schema
• system_schema_mcs
• system_multiregion_info
The following sections provide details about the system keyspaces and the system tables that are supported in Amazon Keyspaces.

system
This is a Cassandra keyspace. Amazon Keyspaces uses the following tables.

Table: local
Columns: key, bootstrapped, broadcast_address, cluster_name, cql_version, data_center, gossip_generation, host_id, listen_address, native_protocol_version, partitioner, rack, release_version, rpc_address, schema_version, thrift_version, tokens, truncated_at
Comments: Information about the local keyspace.

Table: peers
Columns: peer, data_center, host_id, preferred_ip, rack, release_version, rpc_address, schema_version, tokens
Comments: Query this table to see the available endpoints. For example, if you're connecting through a public endpoint, you see a list of nine available IP addresses. If you're connecting through a FIPS endpoint, you see a list of three IP addresses.
If you're connecting through an AWS PrivateLink VPC endpoint, you see the list of IP addresses that you have configured. For more information, see the section called “Populating system.peers table entries with interface VPC endpoint information”.

Table: size_estimates
Columns: keyspace_name, table_name, range_start, range_end, mean_partition_size, partitions_count
Comments: This table defines the total size and number of partitions for each token range for every table. This is needed for the Apache Cassandra Spark Connector, which uses the estimated partition size to distribute the work.

Table: prepared_statements
Columns: prepared_id, logged_keyspace, query_string
Comments: This table contains information about saved queries.

system_schema
This is a Cassandra keyspace. Amazon Keyspaces uses the following tables.

Table: keyspaces
Columns: keyspace_name, durable_writes, replication
Comments: Information about a specific keyspace.

Table: tables
Columns: keyspace_name, table_name, bloom_filter_fp_chance, caching, comment, compaction, compression, crc_check_chance, dclocal_read_repair_chance, default_time_to_live, extensions, flags, gc_grace_seconds, id, max_index_interval, memtable_flush_period_in_ms, min_index_interval, read_repair_chance, speculative_retry
Comments: Information about a specific table.

Table: types
Columns: keyspace_name, type_name, field_names, field_types
Comments: Information about a specific user-defined type (UDT).

Table: columns
Columns: keyspace_name, table_name, column_name, clustering_order, column_name_bytes, kind, position, type
Comments: Information about a specific column.
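As a quick illustration of how these schema tables are typically used, the following CQL statement (a sketch that uses a hypothetical keyspace name) lists some of the table metadata that system_schema exposes for one keyspace:

SELECT keyspace_name, table_name, default_time_to_live
FROM system_schema.tables
WHERE keyspace_name = 'mykeyspace';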
system_schema_mcs
This is an Amazon Keyspaces keyspace that stores information about AWS or Amazon Keyspaces specific settings.

Table: keyspaces
Columns: keyspace_name, durable_writes, replication
Comments: Query this table to find out programmatically if a keyspace has been created. For more information, see the section called “Check keyspace creation status”.

Table: tables
Columns: keyspace_name, table_name, creation_time, speculative_retry, cdc, gc_grace_seconds, crc_check_chance, min_index_interval, bloom_filter_fp_chance, flags, custom_properties, dclocal_read_repair_chance, caching, default_time_to_live, read_repair_chance, max_index_interval, extensions, compaction, comment, id, compression, memtable_flush_period_in_ms, status
Comments: Query this table to find out the status of a specific table. For more information, see the section called “Check table creation status”. You can also query this table to list settings that are specific to Amazon Keyspaces and are stored as custom_properties. For example:
• capacity_mode
• client_side_timestamps
• encryption_specification
• point_in_time_recovery
• ttl

Table: tables_history
Columns: keyspace_name, table_name, event_time, creation_time, custom_properties, event
Comments: Query this table to learn about schema changes for a specific table.

Table: columns
Columns: keyspace_name, table_name, column_name, clustering_order, column_name_bytes, kind, position, type
Comments: This table is identical to the Cassandra table in the system_schema keyspace.

Table: tags
Columns: resource_id, keyspace_name, resource_name, resource_type, tags
Comments: Query this table to find out if a keyspace has tags. For more information, see the section called “View table tags”.

Table: types
Columns: keyspace_name, type_name, field_names, field_types, max_nesting_depth, last_modified_timestamp, status, direct_referring_tables, direct_parent_types
Comments: Query this table to find out information about user-defined types (UDTs). For example you can query this table to list all UDTs for a given keyspace. For more information, see the section called “User-defined types (UDTs)”.

Table: autoscaling
Columns: keyspace_name, table_name, provisioned_read_capacity_autoscaling_update, provisioned_write_capacity_autoscaling_update
Comments: Query this table to get the auto scaling settings of a provisioned table. Note that these settings won't be available until the table is active. To query this table, you have to specify keyspace_name and table_name in the WHERE clause. For more information, see the section called “View your table's Amazon Keyspaces auto scaling configuration”.
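For example, a minimal status check against system_schema_mcs.tables (the keyspace and table names here are placeholders) follows the same pattern as the pre-warming examples earlier in this guide:

SELECT keyspace_name, table_name, status
FROM system_schema_mcs.tables
WHERE keyspace_name = 'mykeyspace' AND table_name = 'mytable';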
system_multiregion_info
This is an Amazon Keyspaces keyspace that stores information about multi-Region replication.

Table: tables
Columns: keyspace_name, table_name, region, status
Comments: This table contains information about multi-Region tables—for example, the AWS Regions that the table is replicated in and the table's status. You can also query this table to list settings that are specific to Amazon Keyspaces that are stored as custom_properties. For example:
• capacity_mode
To query this table, you have to specify keyspace_name and table_name in the WHERE clause. For more information, see the section called “Create a multi-Region keyspace”.

Table: keyspaces
Columns: keyspace_name, region, status, tables_replication_progress
Comments: This table contains information about the progress of an ALTER KEYSPACE operation that adds a replica to a keyspace—for example, how many tables have already been created in the new Region, and how many tables are still in progress. For an example, see the section called “Check replication progress”.

Table: autoscaling
Columns: keyspace_name, table_name, provisioned_read_capacity_autoscaling_update, provisioned_write_capacity_autoscaling_update, region
Comments: Query this table to get the auto scaling settings of a multi-Region provisioned table. Note that these settings won't be available until the table is active. To query this table, you have to specify keyspace_name and table_name in the WHERE clause. For more information, see the section called “Update provisioned capacity and auto scaling settings for a multi-Region table”.

Table: types
Columns: keyspace_name, type_name, field_names, field_types, max_nesting_depth, last_modified_timestamp, status, direct_referring_tables, direct_parent_types, region
Comments: Query this table to find out information about user-defined types (UDTs) in multi-Region keyspaces. For example, you can query this table to list all table replicas and their respective AWS Regions that use UDTs for a given keyspace. For more information, see the section called “User-defined types (UDTs)”.

User-defined types (UDTs) in Amazon Keyspaces
A user-defined type (UDT) is a grouping of fields and data types that you can use to define a single column in Amazon Keyspaces. Valid data types for UDTs are all supported Cassandra data types, including collections and other UDTs that you've already created in the same keyspace. For more information about supported Cassandra data types, see the section called “Cassandra data type support”. You can use user-defined types (UDTs) in Amazon Keyspaces to organize data in a more efficient way. For example,
you can create UDTs with nested collections, which allows you to implement more complex data modeling in your applications. You can also use the frozen keyword for defining UDTs. UDTs are bound to a keyspace and available to all tables and UDTs in the same keyspace. You can create UDTs in single-Region and multi-Region keyspaces. You can create new tables or alter existing tables and add new columns that use a UDT. To create a UDT with a nested UDT, the nested UDT has to be frozen.

To review how many UDTs are supported per keyspace, supported levels of nesting, and other default values and quotas related to UDTs, see the section called “Quotas and default values for user-defined types (UDTs) in Amazon Keyspaces”. For information about how to calculate the encoded size of UDTs, see the section called “Estimate the encoded size of data values based on data type”. For more information about CQL syntax, see the section called “Types”. To learn more about UDTs and point-in-time restore, see the section called “PITR and UDTs”.

Topics
• Configure permissions to work with user-defined types (UDTs) in Amazon Keyspaces
• Create a user-defined type (UDT) in Amazon Keyspaces
• View user-defined types (UDTs) in Amazon Keyspaces
• Delete a user-defined type (UDT) in Amazon Keyspaces

Configure permissions to work with user-defined types (UDTs) in Amazon Keyspaces
Like tables, UDTs are bound to a specific keyspace. But unlike tables, you can't define permissions directly for UDTs. UDTs are not considered resources in AWS and they have no unique identifiers in the format of an Amazon Resource Name (ARN). Instead, to give an IAM principal permissions to perform specific actions on a UDT, you have to define permissions for the keyspace that the UDT is bound to. To work with UDTs in multi-Region keyspaces, additional permissions are required. To be able to create, view, or delete UDTs, the principal, for example an IAM user or role, needs the same permissions that are required to perform the same action on the keyspace that the UDT is bound to. For more information about AWS Identity and Access Management, see the section called “AWS Identity and Access Management”.

Permissions to create a UDT
To create a UDT in a single-Region keyspace, the principal needs Create permissions for the keyspace. The following IAM policy is an example of this.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "cassandra:Create",
            "Resource": [
                "arn:aws:cassandra:aws-region:111122223333:/keyspace/my_keyspace/"
            ]
        }
    ]
}

To create a UDT in a multi-Region keyspace, in addition to Create permissions the principal also needs permissions for the action CreateMultiRegionResource for the specified keyspace. The following IAM policy is an example of this.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cassandra:Create",
                "cassandra:CreateMultiRegionResource"
            ],
            "Resource": [
                "arn:aws:cassandra:aws-region:111122223333:/keyspace/my_keyspace/"
            ]
        }
    ]
}

Permissions to view a UDT
To view or list UDTs in a single-Region keyspace, the principal needs read permissions for the system keyspace. For more information, see the section called “system_schema_mcs”. The following IAM policy is an example of this.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "cassandra:Select",
            "Resource": [
                "arn:aws:cassandra:aws-region:111122223333:/keyspace/system*"
            ]
        }
    ]
}

To view or list UDTs for a multi-Region keyspace, the principal needs permissions for the actions SELECT and SelectMultiRegionResource for the system keyspace. For more information, see the section called “system_multiregion_info”. The following IAM policy is an example of this.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["cassandra:Select", "cassandra:SelectMultiRegionResource"],
            "Resource": [
                "arn:aws:cassandra:aws-region:111122223333:/keyspace/system*"
            ]
        }
    ]
}

Permissions to delete a UDT
To delete a UDT from a single-Region keyspace, the principal needs permissions for the Drop action for the specified keyspace. The following IAM policy is an example of this.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "cassandra:Drop",
            "Resource": [
                "arn:aws:cassandra:aws-region:111122223333:/keyspace/my_keyspace/"
            ]
        }
    ]
}

To delete a UDT from a multi-Region keyspace, the principal needs permissions for the Drop action and for the DropMultiRegionResource action for the specified keyspace. The following IAM policy is an example of this.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cassandra:Drop",
                "cassandra:DropMultiRegionResource"
            ],
            "Resource": [
                "arn:aws:cassandra:aws-region:111122223333:/keyspace/my_keyspace/"
            ]
        }
    ]
}

Create a user-defined type (UDT) in Amazon Keyspaces
To create a UDT in a single-Region keyspace, you can use the CREATE TYPE statement in CQL, the create-type command with the AWS CLI, or the console. UDT names must contain 48 characters or less, must begin with an alphabetic character, and can only contain alpha-numeric characters and underscores. Amazon Keyspaces converts upper case characters automatically into lower case characters. Alternatively, you can declare a UDT name in double quotes. When declaring a UDT name inside double quotes, Amazon Keyspaces preserves upper casing and allows special characters. You can also use double quotes as part of the name when you create the UDT, but you must escape each double quote character with an additional double quote character.

The following table shows examples of allowed UDT names. The first column shows how to enter the name when you create the type, the second column shows how Amazon Keyspaces formats the name internally. Amazon Keyspaces expects the formatted name for operations like GetType.

Entered name: MY_UDT
Formatted name: my_udt
Note: Without double-quotes, Amazon Keyspaces converts all upper-case characters to lower-case.

Entered name: "MY_UDT"
Formatted name: MY_UDT
Note: With double-quotes, Amazon Keyspaces respects the upper-case characters, and removes the double-quotes from the formatted name.

Entered name: "1234"
Formatted name: 1234
Note: With double-quotes, the name can begin with a number, and Amazon Keyspaces removes the double-quotes from the formatted name.

Entered name: "Special_Ch@r@cters<>!!"
Formatted name: Special_Ch@r@cters<>!!
Note: With double-quotes, the name can contain special characters, and Amazon Keyspaces removes the double-quotes from the formatted name.

Entered name: "nested""""quotes"
Formatted name: nested""quotes
Note: Amazon Keyspaces removes the outer double-quotes and the escape double-quotes from the formatted name.

Console
Create a user-defined type (UDT) with the Amazon Keyspaces console
1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Keyspaces, and then choose a keyspace from the list.
3. Choose the UDTs tab.
4. Choose Create UDT.
5. Under UDT details, enter the name for the UDT. Under UDT fields you define the schema of the UDT.
6. To finish, choose Create UDT.

Cassandra Query Language (CQL)
Create a user-defined type (UDT) with CQL
In this example we create a new version of the book awards table used in the section called “Create a table”.
Console

Create a user-defined type (UDT) with the Amazon Keyspaces console

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Keyspaces, and then choose a keyspace from the list.
3. Choose the UDTs tab.
4. Choose Create UDT.
5. Under UDT details, enter the name for the UDT. Under UDT fields, define the schema of the UDT.
6. To finish, choose Create UDT.

Cassandra Query Language (CQL)

Create a user-defined type (UDT) with CQL

In this example, we create a new version of the book awards table used in the section called “Create a table”. In this table, we store all awards an author receives for a given book. We create two UDTs that are nested and contain information about the book that received an award.

1. Create a keyspace with the name catalog.

   CREATE KEYSPACE catalog WITH REPLICATION = {'class': 'SingleRegionStrategy'};

2. Create the first type. This type stores BISAC codes, which are used to define the genre of books. A BISAC code consists of an alphanumeric code and up to four subject matter areas.

   CREATE TYPE catalog.bisac (
       bisac_code text,
       subject1 text,
       subject2 text,
       subject3 text,
       subject4 text
   );

3. Create a second type for book awards that uses the first UDT. The nested UDT has to be frozen.

   CREATE TYPE catalog.book (
       award_title text,
       book_title text,
       publication_date date,
       page_count int,
       ISBN text,
       genre FROZEN <bisac>
   );

4. Create a table with a column for the author's name and a list column for the book awards. Note that the UDT used in the list has to be frozen.

   CREATE TABLE catalog.authors (
       author_name text PRIMARY KEY,
       awards list <FROZEN <book>>
   );

5. In this step, we insert one row of data into the new table.

   CONSISTENCY LOCAL_QUORUM;
   INSERT INTO catalog.authors (author_name, awards)
   VALUES (
       'John Stiles',
       [{ award_title: 'Wolf', book_title: 'Yesterday',
          publication_date: '2020-10-10', page_count: 345, ISBN: '026204630X',
          genre: { bisac_code: 'FIC014090', subject1: 'FICTION', subject2: 'Historical',
                   subject3: '20th Century', subject4: 'Post-World War II'}
        },
        { award_title: 'Richard Roe', book_title: 'Who ate the cake?',
          publication_date: '2019-05-13', page_count: 193, ISBN: '9780262046305',
          genre: { bisac_code: 'FIC022130', subject1: 'FICTION', subject2: 'Mystery & Detective',
                   subject3: 'Cozy', subject4: 'Culinary'}
        }]
   );
6. In the last step, we read the data from the table.

   SELECT * FROM catalog.authors;

   The output of the command should look like this.

   author_name | awards
   -------------+-------------------------------------------------------------------------
   John Stiles | [{award_title: 'Wolf', book_title: 'Yesterday', publication_date: 2020-10-10, page_count: 345, isbn: '026204630X', genre: {bisac_code: 'FIC014090', subject1: 'FICTION', subject2: 'Historical', subject3: '20th Century', subject4: 'Post-World War II'}}, {award_title: 'Richard Roe', book_title: 'Who ate the cake?', publication_date: 2019-05-13, page_count: 193, isbn: '9780262046305', genre: {bisac_code: 'FIC022130', subject1: 'FICTION', subject2: 'Mystery & Detective', subject3: 'Cozy', subject4: 'Culinary'}}]

   (1 rows)

For more information about CQL syntax, see the section called “CREATE TYPE”.

CLI

Create a user-defined type (UDT) with the AWS CLI

1. To create a type, you can use the following syntax.

   aws keyspaces create-type \
       --keyspace-name 'my_keyspace' \
       --type-name 'my_udt' \
       --field-definitions '[{"name" : "field1", "type" : "int"}, {"name" : "field2", "type" : "text"}]'

2. The output of that command looks similar to this example. Note that typeName returns the formatted name of the UDT.

   {
       "keyspaceArn": "arn:aws:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/",
       "typeName": "my_udt"
   }
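Because UDTs can be nested, a field definition can reference a previously created type. The following command is a sketch of this, assuming a type named bisac already exists in my_keyspace; the type name book and the field names are illustrative, and the nested type is declared as frozen, as required for nesting.

aws keyspaces create-type \
    --keyspace-name 'my_keyspace' \
    --type-name 'book' \
    --field-definitions '[{"name" : "book_title", "type" : "text"}, {"name" : "genre", "type" : "frozen<bisac>"}]'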
View user-defined types (UDTs) in Amazon Keyspaces

To view or list all UDTs in a single-Region keyspace, you can query the table system_schema_mcs.types in the system keyspace using a statement in CQL, use the get-type and list-types commands with the AWS CLI, or use the console. For any of these options, the IAM principal needs read permissions for the system keyspace. For more information, see the section called “Configure permissions”.

Console

View user-defined types (UDTs) with the Amazon Keyspaces console

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Keyspaces, and then choose a keyspace from the list.
3. Choose the UDTs tab to review the list of all UDTs in the keyspace.
4. To review one UDT in detail, choose a UDT from the list.
5. On the Schema tab, you can review the schema of the UDT. On the Used in tab, you can see whether this UDT is used in tables or other UDTs. Note that you can only delete UDTs that are not in use by either tables or other UDTs.

Cassandra Query Language (CQL)

View the user-defined types (UDTs) of a single-Region keyspace with CQL

1. To see the types that are available in a given keyspace, you can use the following statement.

   SELECT type_name FROM system_schema_mcs.types WHERE keyspace_name = 'my_keyspace';

2. To view the details about a specific type, you can use the following statement.

   SELECT keyspace_name, type_name, field_names, field_types, max_nesting_depth,
          last_modified_timestamp, status, direct_referring_tables, direct_parent_types
   FROM system_schema_mcs.types
   WHERE keyspace_name = 'my_keyspace' AND type_name = 'my_udt';

3. You can list all UDTs that exist in the account using DESC TYPES.

   DESC TYPES;

   Keyspace my_keyspace
   ---------------------------
   my_udt1 my_udt2

   Keyspace my_keyspace2
   ---------------------------
   my_udt1

4. You can list all UDTs in the currently selected keyspace using DESC TYPES.

   USE my_keyspace;
   DESC TYPES;

   my_udt1 my_udt2

5. To list all UDTs in a multi-Region keyspace, you can query the system table types in the system_multiregion_info keyspace. The following query is an example of this.

   SELECT keyspace_name, type_name, region, status FROM system_multiregion_info.types
   WHERE keyspace_name = 'my_keyspace' AND type_name = 'my_udt';

   The output of this command looks similar to this.

   keyspace_name | type_name | region         | status
   my_keyspace   | my_udt    | us-east-1      | ACTIVE
   my_keyspace   | my_udt    | ap-southeast-1 | ACTIVE
   my_keyspace   | my_udt    | eu-west-1      | ACTIVE

CLI

View user-defined types (UDTs) with the AWS CLI

1. To list the types available in a keyspace, you can use the list-types command.

   aws keyspaces list-types --keyspace-name 'my_keyspace'

   The output of that command looks similar to this example.

   {
       "types": [
           "my_udt",
           "parent_udt"
       ]
   }

2. To view the details about a given type, you can use the get-type command.

   aws keyspaces get-type --type-name 'my_udt' --keyspace-name 'my_keyspace'
   The output of this command looks similar to this example.

   {
       "keyspaceName": "my_keyspace",
       "typeName": "my_udt",
       "fieldDefinitions": [
           {
               "name": "a",
               "type": "int"
           },
           {
               "name": "b",
               "type": "text"
           }
       ],
       "lastModifiedTimestamp": 1721328225776,
       "maxNestingDepth": 3,
       "status": "ACTIVE",
       "directReferringTables": [],
       "directParentTypes": [
           "parent_udt"
       ],
       "keyspaceArn": "arn:aws:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/"
   }

Delete a user-defined type (UDT) in Amazon Keyspaces

To delete a UDT from a keyspace, you can use the DROP TYPE statement in CQL, the delete-type command with the AWS CLI, or the console.

Console

Delete a user-defined type (UDT) with the Amazon Keyspaces console

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Keyspaces, and then choose a keyspace from the list.
3. Choose the UDTs tab.
4. Choose the UDT that you want to delete. On the Used in tab, you can confirm that the type you want to delete isn't currently used by a table or another UDT.
5. Choose Delete above the Summary section.
6. Enter Delete in the dialog box that appears, and choose Delete UDT.

Cassandra Query Language (CQL)

Delete a user-defined type (UDT) with CQL

To delete a type, you can use the following statement.

   DROP TYPE my_keyspace.my_udt;

For more information about CQL syntax, see the section called “DROP TYPE”.

CLI

Delete a user-defined type (UDT) with the AWS CLI

1. To delete a type, you can use the following command.

   aws keyspaces delete-type --keyspace-name 'my_keyspace' --type-name 'my_udt'

2. The output of the command looks similar to this example.

   {
       "keyspaceArn": "arn:aws:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/",
       "typeName": "my_udt"
   }
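Because a UDT can only be deleted when it isn't referenced by a table or another UDT, it can be useful to check its references before you run delete-type. The following sketch reads the get-type output fields shown earlier with an illustrative --query expression; two empty lists mean the type is safe to delete.

aws keyspaces get-type --keyspace-name 'my_keyspace' --type-name 'my_udt' \
    --query '{referringTables: directReferringTables, parentTypes: directParentTypes}'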
Working with CQL queries in Amazon Keyspaces

This section introduces working with queries in Amazon Keyspaces (for Apache Cassandra). The CQL statements available to query, transform, and manage data are SELECT, INSERT, UPDATE, and DELETE. The following topics outline some of the more complex options available when working with queries. For the complete language syntax with examples, see the section called “DML statements”.

Topics
• Use the IN operator with the SELECT statement in a query in Amazon Keyspaces
• Order results with ORDER BY in Amazon Keyspaces
• Paginate results in Amazon Keyspaces

Use the IN operator with the SELECT statement in a query in Amazon Keyspaces

You can query data from tables using the SELECT statement, which reads one or more columns for one or more rows in a table and returns a result set containing the rows that match the request. A SELECT statement contains a select_clause that determines which columns to read and to return in the result set. The clause can contain instructions to transform the data before returning it. The optional WHERE clause specifies which rows must be queried and is composed of relations on the columns that are part of the primary key. Amazon Keyspaces supports the IN keyword in the WHERE clause. This section uses examples to show how Amazon Keyspaces processes SELECT statements with the IN keyword.

The following example demonstrates how Amazon Keyspaces breaks down a SELECT statement with the IN keyword into subqueries. In this example, we use a table with the name my_keyspace.customers. The table has one partition key column, department_id, two clustering columns, sales_region_id and sales_representative_id, and one regular column, customer_name, that contains the name of the customer.

SELECT * FROM my_keyspace.customers;

department_id | sales_region_id | sales_representative_id | customer_name
---------------+-----------------+-------------------------+--------------
0 | 0 | 0 | a
0 | 0 | 1 | b
0 | 1 | 0 | c
0 | 1 | 1 | d
1 | 0 | 0 | e
1 | 0 | 1 | f
1 | 1 | 0 | g
1 | 1 | 1 | h
Using this table, you can run the following SELECT statement to find the customers in the departments and sales regions that you are interested in with the IN keyword in the WHERE clause. The following statement is an example of this.

SELECT * FROM my_keyspace.customers WHERE department_id IN (0, 1) AND sales_region_id IN (0, 1);

Amazon Keyspaces divides this statement into four subqueries, as shown in the following output.

SELECT * FROM my_keyspace.customers WHERE department_id = 0 AND sales_region_id = 0;

department_id | sales_region_id | sales_representative_id | customer_name
---------------+-----------------+-------------------------+--------------
0 | 0 | 0 | a
0 | 0 | 1 | b

SELECT * FROM my_keyspace.customers WHERE department_id = 0 AND sales_region_id = 1;

department_id | sales_region_id | sales_representative_id | customer_name
---------------+-----------------+-------------------------+--------------
0 | 1 | 0 | c
0 | 1 | 1 | d

SELECT * FROM my_keyspace.customers WHERE department_id = 1 AND sales_region_id = 0;

department_id | sales_region_id | sales_representative_id | customer_name
---------------+-----------------+-------------------------+--------------
1 | 0 | 0 | e
1 | 0 | 1 | f

SELECT * FROM my_keyspace.customers WHERE department_id = 1 AND sales_region_id = 1;

department_id | sales_region_id | sales_representative_id | customer_name
---------------+-----------------+-------------------------+--------------
1 | 1 | 0 | g
1 | 1 | 1 | h

When the IN keyword is used, Amazon Keyspaces automatically paginates the results in any of the following cases:

• After every 10th subquery is processed.
• After processing 1 MB of logical IO.
• If you configured a PAGE SIZE, Amazon Keyspaces paginates after reading the number of rows specified by the PAGE SIZE.
• When you use the LIMIT keyword to reduce the number of rows returned, Amazon Keyspaces paginates after reading the number of rows specified by the LIMIT.

The following table is used to illustrate this with an example. For more information about pagination, see the section called “Paginate results”.

SELECT * FROM my_keyspace.customers;

department_id | sales_region_id | sales_representative_id | customer_name
---------------+-----------------+-------------------------+--------------
2 | 0 | 0 | g
2 | 1 | 1 | h
2 | 2 | 2 | i
0 | 0 | 0 | a
0 | 1 | 1 | b
0 | 2 | 2 | c
1 | 0 | 0 | d
1 | 1 | 1 | e
1 | 2 | 2 | f
3 | 0 | 0 | j
3 | 1 | 1 | k
3 | 2 | 2 | l

You can run the following statement on this table to see how pagination works.

SELECT * FROM my_keyspace.customers WHERE department_id IN (0, 1, 2, 3) AND sales_region_id IN (0, 1, 2) AND sales_representative_id IN (0, 1);

Amazon Keyspaces processes this statement as 24 subqueries, because the cardinality of the Cartesian product of all the IN terms contained in this query is 4 * 3 * 2 = 24.
department_id | sales_region_id | sales_representative_id | customer_name
---------------+-----------------+-------------------------+--------------
0 | 0 | 0 | a
0 | 1 | 1 | b
1 | 0 | 0 | d
1 | 1 | 1 | e
---MORE---

department_id | sales_region_id | sales_representative_id | customer_name
---------------+-----------------+-------------------------+--------------
2 | 0 | 0 | g
2 | 1 | 1 | h
3 | 0 | 0 | j
---MORE---

department_id | sales_region_id | sales_representative_id | customer_name
---------------+-----------------+-------------------------+--------------
3 | 1 | 1 | k

This example shows how you can use the ORDER BY clause in a SELECT statement with the IN keyword.

SELECT * FROM my_keyspace.customers WHERE department_id IN (3, 2, 1) ORDER BY sales_region_id DESC;

department_id | sales_region_id | sales_representative_id | customer_name
---------------+-----------------+-------------------------+--------------
3 | 2 | 2 | l
3 | 1 | 1 | k
3 | 0 | 0 | j
2 | 2 | 2 | i
2 | 1 | 1 | h
2 | 0 | 0 | g
1 | 2 | 2 | f
1 | 1 | 1 | e
1 | 0 | 0 | d

Subqueries are processed in the order in which the partition key and clustering key columns are presented in the query. In the following example, subqueries for partition key value "2" are processed first, followed by subqueries for partition key values "3" and "1". Results of a given subquery are ordered according to the query's ordering clause, if present, or the table's clustering order defined during table creation.

SELECT * FROM my_keyspace.customers WHERE department_id IN (2, 3, 1) ORDER BY sales_region_id DESC;
department_id | sales_region_id | sales_representative_id | customer_name
---------------+-----------------+-------------------------+--------------
2 | 2 | 2 | i
2 | 1 | 1 | h
2 | 0 | 0 | g
3 | 2 | 2 | l
3 | 1 | 1 | k
3 | 0 | 0 | j
1 | 2 | 2 | f
1 | 1 | 1 | e
1 | 0 | 0 | d

Order results with ORDER BY in Amazon Keyspaces

The ORDER BY clause specifies the sort order of the results returned in a SELECT statement. The statement takes a list of column names as arguments, and for each column you can specify the sort order for the data. You can only specify clustering columns in ordering clauses; non-clustering columns are not allowed. The two available sort order options for the returned results are ASC for ascending and DESC for descending sort order.

SELECT * FROM my_keyspace.my_table ORDER BY (col1 ASC, col2 DESC, col3 ASC);

col1 | col2 | col3
------+------+------
0 | 6 | a
1 | 5 | b
2 | 4 | c
3 | 3 | d
4 | 2 | e
5 | 1 | f
6 | 0 | g

SELECT * FROM my_keyspace.my_table ORDER BY (col1 DESC, col2 ASC, col3 DESC);

col1 | col2 | col3
------+------+------
6 | 0 | g
5 | 1 | f
4 | 2 | e
3 | 3 | d
2 | 4 | c
1 | 5 | b
0 | 6 | a

If you don't specify the sort order in the query statement, the default ordering of the clustering column is used.

The possible sort orders you can use in an ordering clause depend on the sort order assigned to each clustering column at table creation. Query results can only be sorted in the order defined for all clustering columns at table creation, or in the inverse of the defined sort order. Other combinations are not allowed. For example, if the table's CLUSTERING ORDER is (col1 ASC, col2 DESC, col3 ASC), then the valid parameters for ORDER BY are either (col1 ASC, col2 DESC, col3 ASC) or (col1 DESC, col2 ASC, col3 DESC). For more information about CLUSTERING ORDER, see table_options under the section called “CREATE TABLE”.

Paginate results in Amazon Keyspaces

Amazon Keyspaces automatically paginates the results from SELECT statements when the data read to process the SELECT statement exceeds 1 MB. With pagination, the SELECT statement results are divided into "pages" of data that are 1 MB in size (or less). An application can process the first page of results, then the second page, and so on. Clients should always check for pagination tokens when processing SELECT queries that return multiple rows.

If a client supplies a PAGE SIZE that requires reading more than 1 MB of data, Amazon Keyspaces automatically breaks up the results into multiple pages based on 1 MB data-read increments. For example, if the average size of a row is 100 KB and you specify a PAGE SIZE of 20, Amazon Keyspaces paginates data automatically after it reads 10 rows (1,000 KB of data read).
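In cqlsh, you can observe this behavior by setting a page size before you run a query. The following is a minimal sketch that reuses the customers table from the earlier examples; note that PAGING is a cqlsh shell command, not part of the CQL language.

PAGING 20;
SELECT * FROM my_keyspace.customers;

When more results are available, cqlsh displays a ---MORE--- prompt and fetches the next page on demand.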
Because Amazon Keyspaces paginates results based on the number of rows that it reads to process a request, and not on the number of rows returned in the result set, some pages may not contain any rows if you are running filtered queries. For example, if you set PAGE SIZE to 10 and Amazon Keyspaces evaluates 30 rows to process your SELECT query, Amazon Keyspaces returns three pages. If only a subset of the rows matched your query, some pages may contain fewer than 10 rows. For an example of how the PAGE SIZE of LIMIT queries can affect read capacity, see the section called “Estimate the read capacity consumption of limit queries”.
For a comparison with Apache Cassandra pagination, see the section called “Pagination”.

Working with partitioners in Amazon Keyspaces

In Apache Cassandra, partitioners control which nodes data is stored on in the cluster. Partitioners create a numeric token using a hashed value of the partition key. Cassandra uses this token to distribute data across nodes. Clients can also use these tokens in SELECT operations and WHERE clauses to optimize read and write operations. For example, clients can efficiently perform parallel queries on large tables by specifying distinct token ranges to query in each parallel job.
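The following statement is a sketch of this pattern. It assumes a hypothetical table my_keyspace.my_table with partition key id. With the default Murmur3Partitioner described below, tokens fall in the range -2^63 to 2^63-1, so splitting that range into equal, non-overlapping intervals yields the boundary values for each parallel job.

SELECT * FROM my_keyspace.my_table
WHERE token(id) > -9223372036854775808 AND token(id) <= -4611686018427387904;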
Amazon Keyspaces provides three different partitioners.

Murmur3Partitioner (Default)
Apache Cassandra-compatible Murmur3Partitioner. The Murmur3Partitioner is the default Cassandra partitioner in Amazon Keyspaces and in Cassandra 1.2 and later versions.

RandomPartitioner
Apache Cassandra-compatible RandomPartitioner. The RandomPartitioner is the default Cassandra partitioner for versions earlier than Cassandra 1.2.

Keyspaces Default Partitioner
The DefaultPartitioner returns the same token function results as the RandomPartitioner.

The partitioner setting is applied per Region at the account level. For example, if you change the partitioner in US East (N. Virginia), the change is applied to all tables in the same account in this Region. You can safely change your partitioner at any time. Note that the configuration change takes approximately 10 minutes to complete. You don't need to reload your Amazon Keyspaces data when you change the partitioner setting. Clients automatically use the new partitioner setting the next time they connect.

How to change the partitioner in Amazon Keyspaces

You can change the partitioner by using the AWS Management Console or Cassandra Query Language (CQL).

AWS Management Console

To change the partitioner using the Amazon Keyspaces console

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Configuration.
3. On the Configuration page, go to Edit partitioner.
4. Select the partitioner compatible with your version of Cassandra. The partitioner change takes approximately 10 minutes to apply.

Note
After the configuration change is complete, you have to disconnect and reconnect to Amazon Keyspaces for requests to use the new partitioner.

Cassandra Query Language (CQL)

1. To see which partitioner is configured for the account, you can use the following query.

   SELECT partitioner from system.local;

   If the partitioner hasn't been changed, the query has the following output.

   partitioner
   --------------------------------------------
   com.amazonaws.cassandra.DefaultPartitioner

2. To update the partitioner to the Murmur3 partitioner, you can use the following statement.

   UPDATE system.local set partitioner='org.apache.cassandra.dht.Murmur3Partitioner' where key='local';

3. Note that this configuration change takes approximately 10 minutes to complete. To confirm that the partitioner has been set, you can run the SELECT query again. Note that due to eventual read consistency, the response might not reflect the results of the recently completed partitioner change yet. If you repeat the SELECT operation after a short time, the response should return the latest data.

   SELECT partitioner from system.local;

Note
You have to disconnect and reconnect to Amazon Keyspaces so that requests use the new partitioner.

Client-side timestamps in Amazon Keyspaces

In Amazon Keyspaces, client-side timestamps are Cassandra-compatible timestamps that are persisted for each cell in your table. You can use client-side timestamps for conflict resolution by letting your client applications determine the order of writes. For example, when clients of a globally distributed application make updates to the same data, client-side timestamps persist the order in which the updates were made on the clients. Amazon Keyspaces uses these timestamps to process the writes.

Amazon Keyspaces client-side timestamps are fully managed. You don't have to manage low-level system settings such as clean-up and compaction strategies. When you delete data, the rows are marked for deletion with a tombstone. Amazon Keyspaces removes tombstoned data automatically (typically within 10 days) without impacting your application performance or availability. Tombstoned data isn't available for data manipulation language (DML) statements. As you continue to perform reads and writes on rows that contain tombstoned data, the tombstoned data continues to count towards storage, read capacity units (RCUs), and write capacity units (WCUs) until it's deleted from storage.

After client-side timestamps have been turned on for a table, you can specify a timestamp with the USING TIMESTAMP clause in your data manipulation language (DML) CQL query. For more information, see the section called “Use client-side timestamps in queries”. If you do not specify a timestamp in your CQL query, Amazon Keyspaces uses the timestamp passed by your client driver. If the client driver doesn't supply timestamps, Amazon Keyspaces assigns a cell-level timestamp automatically, because timestamps can't be NULL.
To query for timestamps, you can use the WRITETIME function in your DML statement.

Amazon Keyspaces doesn't charge extra to turn on client-side timestamps. However, with client-side timestamps you store and write additional data for each value in your row. This can lead to additional storage usage and, in some cases, additional throughput usage. For more information about Amazon Keyspaces pricing, see Amazon Keyspaces (for Apache Cassandra) pricing.

When client-side timestamps are turned on in Amazon Keyspaces, every column of every row stores a timestamp. These timestamps take up approximately 20–40 bytes (depending on your data) and contribute to the storage and throughput cost for the row. These metadata bytes also count towards your 1-MB row size quota. To determine the overall increase in storage space (to ensure that the row size stays under 1 MB), consider the number of columns in your table and the number of collection elements in each row. For example, if a table has 20 columns, with each column storing 40 bytes of data, the size of the row increases from 800 bytes to 1,200 bytes. For more information on how to estimate the size of a row, see the section called “Estimate row size”. In addition to the extra 400 bytes for storage, in this example the number of write capacity units (WCUs) consumed per write increases from 1 WCU to 2 WCUs. For more information on how to calculate read and write capacity, see the section called “Configure read/write capacity modes”.

After client-side timestamps have been turned on for a table, they can't be turned off. To learn more about how to use client-side timestamps in queries, see the section called “Use client-side timestamps in queries”.

Topics
• How Amazon Keyspaces client-side timestamps integrate with AWS services
• Create a new table with client-side timestamps in Amazon Keyspaces
• Configure client-side timestamps for a table in Amazon Keyspaces
• Use client-side timestamps in queries in Amazon Keyspaces

How Amazon Keyspaces client-side timestamps integrate with AWS services

The following client-side timestamps metric is available in Amazon CloudWatch to enable continuous monitoring.

• SystemReconciliationDeletes – The number of delete operations required to remove tombstoned data.

For more information about how to monitor CloudWatch metrics, see the section called “Monitoring with CloudWatch”.
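As a sketch, you could retrieve this metric with the AWS CLI as follows. This assumes the AWS/Cassandra CloudWatch namespace and the Keyspace and TableName dimensions used by Amazon Keyspaces table metrics; adjust the resource names and time window for your table.

aws cloudwatch get-metric-statistics \
    --namespace 'AWS/Cassandra' \
    --metric-name 'SystemReconciliationDeletes' \
    --dimensions Name=Keyspace,Value=my_keyspace Name=TableName,Value=my_table \
    --start-time '2024-05-01T00:00:00Z' \
    --end-time '2024-05-02T00:00:00Z' \
    --period 3600 \
    --statistics Sum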
When you use AWS CloudFormation, you can enable client-side timestamps when you create an Amazon Keyspaces table. For more information, see the AWS CloudFormation User Guide.
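The following template fragment is a sketch of such a table definition. It assumes the AWS::Cassandra::Table resource type and its ClientSideTimestampsEnabled property; confirm the exact property names in the AWS CloudFormation User Guide.

MyTable:
  Type: AWS::Cassandra::Table
  Properties:
    KeyspaceName: my_keyspace
    TableName: my_table
    PartitionKeyColumns:
      - ColumnName: id
        ColumnType: int
    ClientSideTimestampsEnabled: true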
Create a new table with client-side timestamps in Amazon Keyspaces

Follow these examples to create a new Amazon Keyspaces table with client-side timestamps enabled using the AWS Management Console, Cassandra Query Language (CQL), or the AWS Command Line Interface.

Console

Create a new table with client-side timestamps (console)

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Tables, and then choose Create table.
3. On the Create table page in the Table details section, select a keyspace and provide a name for the new table.
4. In the Schema section, create the schema for your table.
5. In the Table settings section, choose Customize settings.
6. Continue to Client-side timestamps. Choose Turn on client-side timestamps to turn on client-side timestamps for the table.
7. Choose Create table.

Your table is created with client-side timestamps turned on.

Cassandra Query Language (CQL)

Create a new table using CQL

1. To create a new table with client-side timestamps enabled using CQL, you can use the following example.

   CREATE TABLE my_keyspace.my_table (
       userid uuid,
       time timeuuid,
       subject text,
       body text,
       user inet,
       PRIMARY KEY (userid, time)
   ) WITH CUSTOM_PROPERTIES = {'client_side_timestamps': {'status': 'enabled'}};

2. To confirm the client-side timestamps settings for the new table, use a SELECT statement to review the custom_properties as shown in the following example.

   SELECT custom_properties from system_schema_mcs.tables where keyspace_name = 'my_keyspace' and table_name = 'my_table';

   The output of this statement shows the status for client-side timestamps.

   'client_side_timestamps': {'status': 'enabled'}

AWS CLI

Create a new table using the AWS CLI

1. To create a new table with client-side timestamps enabled, you can use the following example.

   aws keyspaces create-table \
       --keyspace-name my_keyspace \
       --table-name my_table \
       --client-side-timestamps 'status=ENABLED' \
       --schema-definition 'allColumns=[{name=id,type=int},{name=date,type=timestamp},{name=name,type=text}],partitionKeys=[{name=id}]'

2. To confirm that client-side timestamps are turned on for the new table, run the following command.

   aws keyspaces get-table \
       --keyspace-name my_keyspace \
       --table-name my_table

   The output should look similar to this example.

   {
       "keyspaceName": "my_keyspace",
       "tableName": "my_table",
       "resourceArn": "arn:aws:cassandra:us-east-2:555555555555:/keyspace/my_keyspace/table/my_table",
       "creationTimestamp": 1662681206.032,
       "status": "ACTIVE",
       "schemaDefinition": {
           "allColumns": [
               { "name": "id", "type": "int" },
               { "name": "date", "type": "timestamp" },
               { "name": "name", "type": "text" }
           ],
           "partitionKeys": [
               { "name": "id" }
           ],
           "clusteringKeys": [],
           "staticColumns": []
       },
       "capacitySpecification": {
           "throughputMode": "PAY_PER_REQUEST",
           "lastUpdateToPayPerRequestTimestamp": 1662681206.032
       },
       "encryptionSpecification": {
           "type": "AWS_OWNED_KMS_KEY"
       },
       "pointInTimeRecovery": {
           "status": "DISABLED"
       },
       "clientSideTimestamps": {
           "status": "ENABLED"
       },
       "ttl": {
           "status": "ENABLED"
       },
       "defaultTimeToLive": 0,
       "comment": {
           "message": ""
       }
   }

Configure client-side timestamps for a table in Amazon Keyspaces

Follow these examples to turn on client-side timestamps for existing tables using the AWS Management Console, Cassandra Query Language (CQL), or the AWS Command Line Interface.

Console

To turn on client-side timestamps for an existing table (console)

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. Choose the table that you want to update, and then choose the Additional settings tab.
3. On the Additional settings tab, go to Modify client-side timestamps and select Turn on client-side timestamps.
4. Choose Save changes to change the settings of the table.

Cassandra Query Language (CQL)

Using a CQL statement

1. Turn on client-side timestamps for an existing table with the ALTER TABLE CQL statement.

   ALTER TABLE my_keyspace.my_table WITH custom_properties = {'client_side_timestamps': {'status': 'enabled'}};
2. To confirm the client-side timestamps settings for the table, use a SELECT statement to review the custom_properties as shown in the following example.

   SELECT custom_properties from system_schema_mcs.tables where keyspace_name = 'my_keyspace' and table_name = 'my_table';

   The output of this statement shows the status for client-side timestamps.

   'client_side_timestamps': {'status': 'enabled'}

AWS CLI

Using the AWS CLI

1. You can turn on client-side timestamps for an existing table with the AWS CLI using the following example.

   aws keyspaces update-table \
       --keyspace-name my_keyspace \
       --table-name my_table \
       --client-side-timestamps 'status=ENABLED'

2. To confirm that client-side timestamps are turned on for the table, run the following command.

   aws keyspaces get-table \
       --keyspace-name my_keyspace \
       --table-name my_table

   The output should look similar to this example and state the status for client-side timestamps as ENABLED.

   {
       "keyspaceName": "my_keyspace",
       "tableName": "my_table",
       "resourceArn": "arn:aws:cassandra:us-east-2:555555555555:/keyspace/my_keyspace/table/my_table",
       "creationTimestamp": 1662681312.906,
       "status": "ACTIVE",
       "schemaDefinition": {
           "allColumns": [
               { "name": "id", "type": "int" },
               { "name": "date", "type": "timestamp" },
               { "name": "name", "type": "text" }
           ],
           "partitionKeys": [
               { "name": "id" }
           ],
           "clusteringKeys": [],
           "staticColumns": []
       },
       "capacitySpecification": {
           "throughputMode": "PAY_PER_REQUEST",
           "lastUpdateToPayPerRequestTimestamp": 1662681312.906
       },
       "encryptionSpecification": {
           "type": "AWS_OWNED_KMS_KEY"
       },
       "pointInTimeRecovery": {
           "status": "DISABLED"
       },
       "clientSideTimestamps": {
           "status": "ENABLED"
       },
       "ttl": {
           "status": "ENABLED"
       },
       "defaultTimeToLive": 0,
       "comment": {
           "message": ""
       }
   }

Use client-side timestamps in queries in Amazon Keyspaces

After you have turned on client-side timestamps, you can pass the timestamp in your INSERT, UPDATE, and DELETE statements with the USING TIMESTAMP clause. The timestamp value is a bigint representing the number of microseconds since the standard base time known as the epoch: January 1, 1970 at 00:00:00 GMT.
A timestamp that is supplied by the client has to fall within the range of 2 days in the past to 5 minutes in the future of the current wall clock time. Amazon Keyspaces keeps timestamp metadata for the life of the data, and you can use the WRITETIME function to look up timestamps that occurred years in the past. For more information about CQL syntax, see the section called “DML statements”.

The following CQL statement is an example of how to use a timestamp as an update_parameter. Note that the timestamp value is expressed in microseconds.

INSERT INTO catalog.book_awards (year, award, rank, category, book_title, author, publisher)
VALUES (2022, 'Wolf', 4, 'Non-Fiction', 'Science Update', 'Ana Carolina Silva', 'SomePublisher')
USING TIMESTAMP 1669069624000000;

If you do not specify a timestamp in your CQL query, Amazon Keyspaces uses the timestamp passed by your client driver. If no timestamp is supplied by the client driver, Amazon Keyspaces assigns a server-side timestamp for your write operation.

To see the timestamp value that is stored for a specific column, you can use the WRITETIME function in a SELECT statement as shown in the following example.

SELECT year, award, rank, category, book_title, author, publisher,
       WRITETIME(year), WRITETIME(award), WRITETIME(rank), WRITETIME(category),
       WRITETIME(book_title), WRITETIME(author), WRITETIME(publisher)
FROM catalog.book_awards;
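Because writes are reconciled by timestamp, the write with the highest timestamp wins regardless of arrival order. The following statements are a sketch of this behavior, reusing the book_awards example; the timestamp values are illustrative microsecond values that would need to fall within the accepted range at execution time.

-- This update carries the later timestamp ...
UPDATE catalog.book_awards USING TIMESTAMP 1669069630000000
SET publisher = 'NewPublisher'
WHERE year = 2022 AND award = 'Wolf' AND category = 'Non-Fiction' AND rank = 4;

-- ... so this update with an earlier timestamp doesn't overwrite it.
UPDATE catalog.book_awards USING TIMESTAMP 1669069620000000
SET publisher = 'OldPublisher'
WHERE year = 2022 AND award = 'Wolf' AND category = 'Non-Fiction' AND rank = 4;

SELECT publisher, WRITETIME(publisher) FROM catalog.book_awards
WHERE year = 2022 AND award = 'Wolf' AND category = 'Non-Fiction' AND rank = 4;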
Multi-Region replication for Amazon Keyspaces (for Apache Cassandra)

You can use Amazon Keyspaces multi-Region replication to replicate your data with automated, fully managed, active-active replication across the AWS Regions of your choice. With active-active replication, each Region is able to perform reads and writes in isolation. You can improve both availability and resiliency from Regional degradation, while also benefiting from low-latency local reads and writes for global applications.

With multi-Region replication, Amazon Keyspaces asynchronously replicates data between Regions, and data is typically propagated across Regions within a second. Also, with multi-Region replication, you no longer have the difficult work of resolving conflicts and correcting data divergence issues, so you can focus on your application.

By default, Amazon Keyspaces replicates data across three Availability Zones within the same AWS Region for durability and high availability. With multi-Region replication, you can create multi-Region keyspaces that replicate your tables in the geographic AWS Regions of your choice.

Topics
• Benefits of using multi-Region replication
• Capacity modes and pricing
• How multi-Region replication works in Amazon Keyspaces
• Amazon Keyspaces multi-Region replication usage notes
• Configure multi-Region replication for Amazon Keyspaces (for Apache Cassandra)

Benefits of using multi-Region replication

Multi-Region replication provides the following benefits.

• Global reads and writes with single-digit millisecond latency – In Amazon Keyspaces, replication is active-active. You can serve both reads and writes locally from the Regions closest to your customers, with single-digit millisecond latency at any scale. You can use Amazon Keyspaces multi-Region tables for global applications that need a fast response time anywhere in the world.

• Improved business continuity and protection from single-Region degradation – With multi-Region replication, you can recover from degradation in a single AWS Region by redirecting your application to a different Region in your multi-Region keyspace. Because Amazon Keyspaces offers active-active replication, there is no impact to your reads and writes. Amazon Keyspaces keeps track of any writes that have been performed on your multi-Region keyspace but haven't been propagated to all replica Regions. After the Region comes back online, Amazon Keyspaces automatically syncs any missing changes so that you can recover without any application impact.

• High-speed replication across Regions – Multi-Region replication uses fast, storage-based physical replication of data across Regions, with a replication lag that is typically less than 1 second. Replication in Amazon Keyspaces has little to no impact on your database queries because it doesn't share compute resources with your application. This means that you can address high-write throughput use cases, or use cases with sudden spikes or bursts in throughput, without any application impact.

• Consistency and conflict resolution – Any changes made to data in any Region are replicated to the other Regions in a multi-Region keyspace. If applications update the same data in different Regions at the same time, conflicts can arise. To help provide eventual consistency, Amazon Keyspaces uses cell-level timestamps and last-writer-wins reconciliation between concurrent updates. Conflict resolution is fully managed and happens in the background without any application impact.

For more information about supported configurations and features, see the section called “Usage notes”.

Capacity modes and pricing

For a multi-Region keyspace, you can use either on-demand capacity mode or provisioned capacity mode. For more information, see the section called “Configure read/write capacity modes”.

For on-demand mode, you're billed 1 write request unit (WRU) to write up to 1 KB of data per row, the same way as for single-Region tables. But you're billed for writes in each Region of your multi-Region keyspace. For example, writing a row of 3 KB of data in a multi-Region keyspace with two Regions requires 6 WRUs: 3 * 2 = 6 WRUs. Additionally, writes that include both static and non-static data require additional write operations.

For provisioned mode, you're billed 1 write capacity unit (WCU) to write up to 1 KB of data per row, the same way as for single-Region tables. But you're billed for writes in each Region of your multi-Region keyspace. For example, writing a row of 3 KB of data per second in a multi-Region keyspace with two Regions requires 6 WCUs: 3 * 2 = 6 WCUs. Additionally, writes that include both static and non-static data require additional write operations.

For more information about pricing, see Amazon Keyspaces (for Apache Cassandra) pricing.

How multi-Region replication works in Amazon Keyspaces

This section provides an overview of how Amazon Keyspaces multi-Region replication works. For more information about pricing, see Amazon Keyspaces (for Apache Cassandra) pricing.
Topics
• How multi-Region replication works in Amazon Keyspaces
• Multi-Region replication conflict resolution
• Multi-Region replication disaster recovery
• Multi-Region replication in AWS Regions disabled by default
• Multi-Region replication and integration with point-in-time recovery (PITR)
• Multi-Region replication and integration with AWS services

How multi-Region replication works in Amazon Keyspaces

Amazon Keyspaces multi-Region replication implements a data resiliency architecture that distributes your data across independent and geographically distributed AWS Regions. It uses active-active replication, which provides local low latency, with each Region being able to perform reads and writes in isolation.

When you create an Amazon Keyspaces multi-Region keyspace, you select the additional Regions that your data is replicated to. Each table that you create in a multi-Region keyspace consists of multiple replica tables (one per Region) that Amazon Keyspaces considers as a single unit. Every replica has the same table name and the same primary key schema.

When an application writes data to a local table in one Region, the data is durably written using the LOCAL_QUORUM consistency level. Amazon Keyspaces automatically replicates the data asynchronously to the other replication Regions. The replication lag across Regions is typically less than one second and doesn't impact your application's performance or throughput.
After the data is written, you can read it from the multi-Region table in another replication Region with the LOCAL_ONE or LOCAL_QUORUM consistency levels. For more information about supported configurations and features, see the section called “Usage notes”.
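The following cqlsh sketch illustrates this flow, assuming a hypothetical multi-Region table my_keyspace.my_table with partition key id and a text column name. One session connects to the service endpoint in the first Region to write, and another session connects to a different replication Region to read after the change has propagated.

-- Session connected to the first Region
CONSISTENCY LOCAL_QUORUM;
INSERT INTO my_keyspace.my_table (id, name) VALUES (1, 'example');

-- Session connected to another replication Region
CONSISTENCY LOCAL_ONE;
SELECT name FROM my_keyspace.my_table WHERE id = 1;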
Multi-Region replication conflict resolution

Amazon Keyspaces multi-Region replication is fully managed, which means that you don't have to perform replication tasks such as regularly running repair operations to clean up data synchronization issues. Amazon Keyspaces monitors data consistency between tables in different AWS Regions by detecting and repairing conflicts, and it synchronizes replicas automatically.

Amazon Keyspaces uses the last-writer-wins method of data reconciliation. With this conflict resolution mechanism, all of the Regions in a multi-Region keyspace agree on the latest update and converge toward a state in which they all have identical data. The reconciliation process has no impact on application performance. To support conflict resolution, client-side timestamps are automatically turned on for multi-Region tables and can't be turned off. For more information, see the section called “Client-side timestamps”.

Multi-Region replication disaster recovery

With Amazon Keyspaces multi-Region replication, writes are replicated asynchronously across each Region. In the rare event of a single Region degradation or failure, multi-Region replication helps you to recover from disaster with little to no impact to your application. Recovery from disaster is typically measured using values for Recovery time objective (RTO) and Recovery point objective (RPO).

Recovery time objective – The time it takes a system to return to a working state after a disaster. RTO measures the amount of downtime your workload can tolerate, measured in time. For disaster recovery plans that use multi-Region replication to fail over to an unaffected Region, the RTO can be nearly zero. The RTO is limited by how quickly your application can detect the failure condition and redirect traffic to another Region.

Recovery point objective – The amount of data that can be lost (measured in time). For disaster recovery plans that use multi-Region replication to fail over to an unaffected Region, the RPO is typically single-digit seconds. The RPO is limited by replication latency to the failover target replica.

In the event of a Regional failure or degradation, you don't need to promote a secondary Region or perform database failover procedures, because replication in Amazon Keyspaces is active-active. Instead, you can use Amazon Route 53 to route your application to the nearest healthy Region. To learn more about Route 53, see What is Amazon Route 53?.

If a single AWS Region becomes isolated or degraded, your application can redirect traffic to a different Region using Route 53 to perform reads and writes against a different replica table. You can also apply custom business logic to determine when to redirect requests to other Regions. An example of this is making your application aware of the multiple endpoints that are available.

When the Region comes back online, Amazon Keyspaces resumes propagating any pending writes from that Region to the replica tables in other Regions. It also resumes propagating writes from other replica tables to the Region that is now back online.

Multi-Region replication in AWS Regions disabled by default

Amazon Keyspaces multi-Region replication is supported in the following AWS Regions that are disabled by default:

• Africa (Cape Town) Region

Before you can use a Region that's disabled by default with Amazon Keyspaces multi-Region replication, you first have to enable the Region. For more information, see Enable or disable AWS Regions in your account in the AWS Organizations User Guide. After you've enabled a Region, you can create new Amazon Keyspaces resources in the Region and add the Region to a multi-Region keyspace.

When you disable a Region that is used by Amazon Keyspaces multi-Region replication, Amazon Keyspaces initiates a 24-hour grace period. During this time window, you can expect the following behavior:

• Amazon Keyspaces continues to perform data manipulation language (DML) operations in enabled Regions.
• Amazon Keyspaces pauses replicating data updates from enabled Regions to the disabled Region.
• Amazon Keyspaces blocks all data definition language (DDL) requests in the disabled Region.

If you disabled the Region in error, you can re-enable the Region within 24 hours.
If you re-enable the Region during the 24-hour grace period, Amazon Keyspaces takes the following actions:

• Automatically resumes all replication to the re-enabled Region.
• Replicates any data updates that took place in enabled Regions while the Region was disabled, to ensure data consistency.
• Continues all additional multi-Region replication operations automatically.

If the Region remains disabled after the 24-hour window closes, Amazon Keyspaces takes the following actions to permanently remove the Region from multi-Region replication:

• Removes the disabled Region from all multi-Region keyspaces.
• Converts the table replicas in the disabled Region into single-Region keyspaces and tables.
• Amazon Keyspaces doesn't delete any resources from the disabled Region.

After Amazon Keyspaces has permanently removed the disabled Region from the multi-Region keyspace, you can't add the disabled Region back.

Multi-Region replication and integration with point-in-time recovery (PITR)

Point-in-time recovery is supported for multi-Region tables. To successfully restore a multi-Region table with PITR, the following conditions have to be met:

• The source and the target table must be configured as multi-Region tables.
• The replication Regions for the keyspace of the source table and for the keyspace of the target table must be the same.
• PITR has to be enabled on all replicas of the source table.

You can run the restore statement from any of the Regions that the source table is available in. Amazon Keyspaces automatically restores the target table in each Region. For more information about PITR, see the section called “How it works”.
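The following statement is a sketch of such a restore, assuming the RESTORE TABLE syntax that Amazon Keyspaces uses for PITR; the target table name and the restore timestamp are illustrative.

RESTORE TABLE my_keyspace.my_table_restored FROM TABLE my_keyspace.my_table
WITH restore_timestamp = '2024-05-01T12:00:00+0000';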
When you create a multi-Region table, the PITR settings that you define during the creation process are automatically applied to all tables in all Regions. When you change PITR settings using ALTER TABLE, Amazon Keyspaces applies the update only to the local table, not to the replicas in other Regions. To enable PITR for an existing multi-Region table, you have to repeat the ALTER TABLE statement for all replicas.
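As a sketch, the statement that you would repeat in each Region could look like the following, assuming the point_in_time_recovery custom property that Amazon Keyspaces uses to manage PITR through CQL.

ALTER TABLE my_keyspace.my_table
WITH custom_properties = {'point_in_time_recovery': {'status': 'enabled'}};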
Multi-Region replication and integration with AWS services

You can monitor replication performance between tables in different AWS Regions by using Amazon CloudWatch metrics. The following metric provides continuous monitoring of multi-Region keyspaces.

• ReplicationLatency – This metric measures the time it took to replicate updates, inserts, or deletes from one replica table to another replica table in a multi-Region keyspace.

For more information about how to monitor CloudWatch metrics, see the section called “Monitoring with CloudWatch”.

Amazon Keyspaces multi-Region replication usage notes

Consider the following when you're using multi-Region replication with Amazon Keyspaces.

• You can select any of the available public AWS Regions. For more information about AWS Regions that are disabled by default, see the section called “Regions disabled by default”.
• AWS GovCloud (US) Regions and China Regions are not supported.
• Consider the following workarounds until these features become available:
  • Configure Time to Live (TTL) when creating the multi-Region table. You won't be able to enable and disable TTL, or adjust the TTL value later. For more information, see the section called “Expire data with Time to Live”.
  • For encryption at rest, use an AWS owned key. Customer managed keys are currently not supported for multi-Region tables. For more information, see the section called “How it works”.
• You can use ALTER KEYSPACE to add a Region to a single-Region or a multi-Region keyspace. For more information, see the section called “Add a Region to a keyspace”.
• Before adding a Region to a single-Region keyspace, ensure that no tables under the keyspace are configured with customer managed keys.
• Any existing tags configured for keyspaces or tables are not replicated to the new Region.
• When you're using provisioned capacity management with Amazon Keyspaces auto scaling, make sure to use the Amazon Keyspaces API operations to create and configure your multi-Region tables. The underlying Application Auto Scaling API operations that Amazon Keyspaces calls on your behalf don't have multi-Region capabilities. For more information, see the section called “Update provisioned capacity and auto scaling settings for a multi-Region table”. For more information on how to estimate the write capacity throughput of provisioned multi-Region tables, see the section called “Estimate capacity for a multi-Region table”.
• Although data is automatically replicated across the selected Regions of a multi-Region table, when a client connects to an endpoint in one Region and queries the system.peers table, the query returns only local information. The query result appears like a single data center cluster to the client.
• Amazon Keyspaces multi-Region replication is asynchronous, and it supports LOCAL_QUORUM consistency for writes. LOCAL_QUORUM consistency requires that an update to a row is durably persisted on two replicas in the local Region before returning success to the client. The propagation of writes to the replicated Region (or Regions) is then performed asynchronously. Amazon Keyspaces multi-Region replication doesn't support synchronous replication or QUORUM consistency.
• When you create a multi-Region keyspace or table, any tags that you define during the creation process are automatically applied to all keyspaces and tables in all Regions. When you change the existing tags using ALTER KEYSPACE or ALTER TABLE, the update is only applied to the keyspace or table in the Region where you're making the change.
• Amazon CloudWatch provides a ReplicationLatency metric for each replicated Region. It calculates this metric by tracking arriving rows, comparing their arrival time with their initial write time, and computing an average. Timings are stored within CloudWatch in the source Region. For more information, see the section called “Monitoring with CloudWatch”. It can be useful to view the average and maximum timings to determine the average and worst-case replication lag. There is no SLA on this latency. For an example of how to retrieve this metric, see the sketch after this list.
• When using a multi-Region table in on-demand mode, you may observe an increase in latency for asynchronous replication of writes if a table replica experiences a new traffic peak. Similar to how Amazon Keyspaces automatically adapts the capacity of a single-Region on-demand table to the application traffic it receives, Amazon Keyspaces automatically adapts the capacity of a multi-Region on-demand table replica to the traffic that it receives. The increase in replication latency is transient because Amazon Keyspaces automatically allocates more capacity as your traffic volume increases. Once all replicas have adapted to your traffic volume, replication latency should return to normal. For more information, see the section called “Peak traffic and scaling properties”.
• When using a multi-Region table in provisioned mode, if your application exceeds your provisioned throughput capacity, you may observe insufficient capacity errors and an increase in replication latency. To ensure that there's always enough read and write capacity for all table replicas in all AWS Regions of a multi-Region table, we recommend that you configure Amazon Keyspaces auto scaling. Amazon Keyspaces auto scaling helps you provision throughput capacity efficiently for variable workloads by adjusting throughput capacity automatically in response to actual application traffic. For more information, see the section called “How auto scaling works for multi-Region tables”.
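The following AWS CLI command is a sketch of how you might retrieve the ReplicationLatency metric referenced above. The AWS/Cassandra namespace, the dimension names (KeyspaceName, TableName, ReceivingRegion), and all example values are assumptions for illustration; verify the dimensions for your tables in the CloudWatch console.

aws cloudwatch get-metric-statistics \
    --namespace "AWS/Cassandra" \
    --metric-name "ReplicationLatency" \
    --dimensions Name=KeyspaceName,Value=mykeyspace \
                 Name=TableName,Value=mytable \
                 Name=ReceivingRegion,Value=eu-west-1 \
    --start-time 2023-06-30T03:00:00Z \
    --end-time 2023-06-30T04:00:00Z \
    --period 300 \
    --statistics Average Maximum

Viewing both Average and Maximum over a window like this gives you the average and worst-case replication lag for the receiving Region.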
Configure multi-Region replication for Amazon Keyspaces (for Apache Cassandra)

You can use the console, Cassandra Query Language (CQL), or the AWS Command Line Interface to create and manage multi-Region keyspaces and tables in Amazon Keyspaces. This section provides examples of how to create and manage multi-Region keyspaces and tables. All tables that you create in a multi-Region keyspace automatically inherit the multi-Region settings from the keyspace. For more information about supported configurations and features, see the section called “Usage notes”.

Topics
• Configure the IAM permissions required to create multi-Region keyspaces and tables
• Configure the IAM permissions required to add an AWS Region to a keyspace
• Create a multi-Region keyspace in Amazon Keyspaces
• Add an AWS Region to a keyspace in Amazon Keyspaces
• Check the replication progress when adding a new Region to a keyspace
• Create a multi-Region table with default settings in Amazon Keyspaces
• Create a multi-Region table in provisioned mode with auto scaling in Amazon Keyspaces
• Update the provisioned capacity and auto scaling settings for a multi-Region table in Amazon Keyspaces
• View the provisioned capacity and auto scaling settings for a multi-Region table in Amazon Keyspaces
• Turn off auto scaling for a table in Amazon Keyspaces
• Set the provisioned capacity of a multi-Region table manually in Amazon Keyspaces

Configure the IAM permissions required to create multi-Region keyspaces and tables

To successfully create multi-Region keyspaces and tables, the IAM principal needs to be able to create a service-linked role. This service-linked role is a unique type of IAM role that is predefined by Amazon Keyspaces. It includes all the permissions that Amazon Keyspaces requires to perform actions on your behalf. For more information about the service-linked role, see the section called “Multi-Region Replication”.

To create the service-linked role required by multi-Region replication, the policy for the IAM principal requires the following elements:

• iam:CreateServiceLinkedRole – The action the principal can perform.
• arn:aws:iam::*:role/aws-service-role/replication.cassandra.amazonaws.com/AWSServiceRoleForKeyspacesReplication – The resource that the action can be performed on.
• "iam:AWSServiceName": "replication.cassandra.amazonaws.com" – The only AWS service that this role can be attached to is Amazon Keyspaces.

The following is an example of the policy that grants the minimum required permissions to a principal to create multi-Region keyspaces and tables.

{
    "Effect": "Allow",
    "Action": "iam:CreateServiceLinkedRole",
    "Resource": "arn:aws:iam::*:role/aws-service-role/replication.cassandra.amazonaws.com/AWSServiceRoleForKeyspacesReplication",
    "Condition": {"StringLike": {"iam:AWSServiceName": "replication.cassandra.amazonaws.com"}}
}

For additional IAM permissions for multi-Region keyspaces and tables, see the Actions, resources, and condition keys for Amazon Keyspaces (for Apache Cassandra) in the Service Authorization Reference.

Configure the IAM permissions required to add an AWS Region to a keyspace

To add a Region to a keyspace, the IAM principal needs the following permissions:

• cassandra:Alter
• cassandra:AlterMultiRegionResource
• cassandra:Create
• cassandra:CreateMultiRegionResource
• cassandra:Select
• cassandra:SelectMultiRegionResource
• cassandra:Modify
• cassandra:ModifyMultiRegionResource

If the table is configured in provisioned mode with auto scaling enabled, the following additional permissions are needed:

• application-autoscaling:RegisterScalableTarget
• application-autoscaling:DeregisterScalableTarget
• application-autoscaling:DescribeScalableTargets
• application-autoscaling:PutScalingPolicy
• application-autoscaling:DescribeScalingPolicies

A policy statement granting the cassandra permissions is sketched after this list.
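The following policy statement is a minimal sketch of one way to grant these permissions. The account ID, the keyspace name, and the choice to scope the statement to a single keyspace across all Regions are assumptions for illustration.

{
    "Effect": "Allow",
    "Action": [
        "cassandra:Alter",
        "cassandra:AlterMultiRegionResource",
        "cassandra:Create",
        "cassandra:CreateMultiRegionResource",
        "cassandra:Select",
        "cassandra:SelectMultiRegionResource",
        "cassandra:Modify",
        "cassandra:ModifyMultiRegionResource"
    ],
    "Resource": "arn:aws:cassandra:*:111122223333:/keyspace/mykeyspace/*"
}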
To successfully add a Region to a single-Region keyspace, the IAM principal also needs to be able to create a service-linked role. This service-linked role is a unique type of IAM role that is predefined by Amazon Keyspaces. It includes all the permissions that Amazon Keyspaces requires to perform actions on your behalf. For more information about the service-linked role, see the section called “Multi-Region Replication”.

To create the service-linked role required by multi-Region replication, the policy for the IAM principal requires the following elements:

• iam:CreateServiceLinkedRole – The action the principal can perform.
• arn:aws:iam::*:role/aws-service-role/replication.cassandra.amazonaws.com/AWSServiceRoleForKeyspacesReplication – The resource that the action can be performed on.
• "iam:AWSServiceName": "replication.cassandra.amazonaws.com" – The only AWS service that this role can be attached to is Amazon Keyspaces.

The following is an example of the policy that grants the minimum required permissions to a principal to add a Region to a keyspace.

{
    "Effect": "Allow",
    "Action": "iam:CreateServiceLinkedRole",
    "Resource": "arn:aws:iam::*:role/aws-service-role/replication.cassandra.amazonaws.com/AWSServiceRoleForKeyspacesReplication",
    "Condition": {"StringLike": {"iam:AWSServiceName": "replication.cassandra.amazonaws.com"}}
}

Create a multi-Region keyspace in Amazon Keyspaces

This section provides examples of how to create a multi-Region keyspace. You can do this on the Amazon Keyspaces console, using CQL or the AWS CLI. All tables that you create in a multi-Region keyspace automatically inherit the multi-Region settings from the keyspace.

Note
When creating a multi-Region keyspace, Amazon Keyspaces creates a service-linked role with the name AWSServiceRoleForAmazonKeyspacesReplication in your account. This role allows Amazon Keyspaces to replicate writes to all replicas of a multi-Region table on your behalf.
To learn more, see the section called “Multi-Region Replication”.

Console

Create a multi-Region keyspace (console)
1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Keyspaces, and then choose Create keyspace.
3. For Keyspace name, enter the name for the keyspace.
4. In the Multi-Region replication section, you can add the additional Regions that are available in the list.
5. To finish, choose Create keyspace.

Cassandra Query Language (CQL)

Create a multi-Region keyspace using CQL
1. To create a multi-Region keyspace, use NetworkTopologyStrategy to specify the AWS Regions that the keyspace is going to be replicated in. You must include your current Region and at least one additional Region. All tables in the keyspace inherit the replication strategy from the keyspace. You can't change the replication strategy at the table level.

NetworkTopologyStrategy – The replication factor for each Region is three because Amazon Keyspaces replicates data across three Availability Zones within the same AWS Region, by default.

The following CQL statement is an example of this.

CREATE KEYSPACE mykeyspace
WITH REPLICATION = {'class':'NetworkTopologyStrategy', 'us-east-1':'3', 'ap-southeast-1':'3', 'eu-west-1':'3'};

2. You can use a CQL statement to query the tables table in the system_multiregion_info keyspace to programmatically list the Regions and the status of the multi-Region table that you specify. The following code is an example of this.

SELECT * from system_multiregion_info.tables
WHERE keyspace_name = 'mykeyspace' AND table_name = 'mytable';

The output of the statement looks like the following:

keyspace_name | table_name | region         | status
---------------+------------+----------------+--------
 mykeyspace    | mytable    | us-east-1      | ACTIVE
 mykeyspace    | mytable    | ap-southeast-1 | ACTIVE
 mykeyspace    | mytable    | eu-west-1      | ACTIVE

CLI

Create a new multi-Region keyspace using the AWS CLI
• To create a multi-Region keyspace, you can use the following CLI statement. Specify your current Region and at least one additional Region in the regionList.

aws keyspaces create-keyspace --keyspace-name mykeyspace \
    --replication-specification replicationStrategy=MULTI_REGION,regionList=us-east-1,eu-west-1

To create a multi-Region table, see the section called “Create a multi-Region table with default settings” and the section called “Create a multi-Region table in provisioned mode”.

Add an AWS Region to a keyspace in Amazon Keyspaces

You can add a new AWS Region to a keyspace that is either a single-Region or a multi-Region keyspace.
The new replica Region is applied to all tables in the keyspace. To change a single-Region keyspace to a multi-Region keyspace, you have to enable client-side timestamps for all tables in the keyspace. For more information, see the section called “Client-side timestamps”.

If you're adding an additional Region to a multi-Region keyspace, Amazon Keyspaces has to replicate the existing table(s) into the new Region using a one-time cross-Region restore for each existing table. The restore charges for each table are billed per GB. For more information, see Backup and restore on the Amazon Keyspaces (for Apache Cassandra) pricing page. There's no charge for data transfer across Regions for this restore operation. In addition to data, all table properties with the exception of tags are replicated to the new Region.

You can use the ALTER KEYSPACE statement in CQL, the update-keyspace command with the AWS CLI, or the console to add a new Region to a single-Region or to a multi-Region keyspace in Amazon Keyspaces. To run the statement successfully, the account you're using has to be located in one of the Regions where the keyspace is already available. While the replica is being added, you can't perform any other data definition language (DDL) operations on the resources that are being updated and replicated. For more information about the permissions required to add a Region, see the section called “Configure IAM permissions for add Region”.

Note
When adding an additional Region to a single-Region keyspace, Amazon Keyspaces creates a service-linked role with the name AWSServiceRoleForAmazonKeyspacesReplication in your account. This role allows Amazon Keyspaces to replicate tables to new Regions and to replicate writes from one table to all replicas of a multi-Region table on your behalf. To learn more, see the section called “Multi-Region Replication”.

Console

Follow these steps to add a Region to a keyspace using the Amazon Keyspaces console.

Add a Region to a keyspace (console)
1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Keyspaces, and then choose a keyspace from the list.
3. Choose the AWS Regions tab.
4. On the AWS Regions tab, choose Add Region.
5. In the Add Region dialog, choose the additional Region that you want to add to the keyspace.
6. To finish, choose Add.

Cassandra Query Language (CQL)

Add a Region to a keyspace using CQL
• To add a new Region to a keyspace, you can use the following statement. In this example, the keyspace is already available in the US East (N. Virginia) and US West (Oregon) Regions, and the CQL statement adds the US West (N. California) Region.
ALTER KEYSPACE my_keyspace
WITH REPLICATION = {
    'class': 'NetworkTopologyStrategy',
    'us-east-1': '3',
    'us-west-2': '3',
    'us-west-1': '3'
}
AND CLIENT_SIDE_TIMESTAMPS = {'status': 'ENABLED'};

CLI

Add a Region to a keyspace using the AWS CLI
• To add a new Region to a keyspace using the CLI, you can use the following example. Note that the default value for client-side-timestamps is DISABLED. With the update-keyspace command, you must change the value to ENABLED.

aws keyspaces update-keyspace \
    --keyspace-name my_keyspace \
    --replication-specification '{"replicationStrategy": "MULTI_REGION", "regionList": ["us-east-1", "eu-west-1", "eu-west-3"] }' \
    --client-side-timestamps '{"status": "ENABLED"}'

Check the replication progress when adding a new Region to a keyspace

Adding a new Region to an Amazon Keyspaces keyspace is a long-running operation. To track its progress, you can use the queries shown in this section.

Cassandra Query Language (CQL)

Using CQL to verify the add Region progress
• To verify the progress of the creation of the new table replicas in a given keyspace, you can query the system_multiregion_info.keyspaces table. The following CQL statement is an example of this.

SELECT keyspace_name, region, status, tables_replication_progress
FROM system_multiregion_info.keyspaces
WHERE keyspace_name = 'my_keyspace';

While a replication operation is in progress, the status shows the progress of table creation in the new Region. This is an example where 5 out of 10 tables have been replicated to the new Region.
keyspace_name | region    | status   | tables_replication_progress
---------------+-----------+----------+-----------------------------
 my_keyspace   | us-east-1 | Updating |
 my_keyspace   | us-west-2 | Updating |
 my_keyspace   | eu-west-1 | Creating | 50%

After the replication process has completed successfully, the output should look like this example.

keyspace_name | region    | status
---------------+-----------+--------
 my_keyspace   | us-east-1 | Active
 my_keyspace   | us-west-2 | Active
 my_keyspace   | eu-west-1 | Active

CLI

Using the AWS CLI to verify the add Region progress
• To confirm the status of table replica creation for a given keyspace, you can use the following example.

aws keyspaces get-keyspace \
    --keyspace-name my_keyspace

The output should look similar to this example.

{
    "keyspaceName": "my_keyspace",
    "resourceArn": "arn:aws:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/",
    "replicationStrategy": "MULTI_REGION",
    "replicationRegions": [
        "us-east-1",
        "eu-west-1"
    ],
    "replicationGroupStatus": [
        {
            "RegionName": "us-east-1",
            "KeyspaceStatus": "Active"
        },
        {
            "RegionName": "eu-west-1",
            "KeyspaceStatus": "Creating",
            "TablesReplicationProgress": "50.0%"
        }
    ]
}

Create a multi-Region table with default settings in Amazon Keyspaces

This section provides examples of how to create a multi-Region table in on-demand mode with all default settings. You can do this on the Amazon Keyspaces console, using CQL or the AWS CLI. All tables that you create in a multi-Region keyspace automatically inherit the multi-Region settings from the keyspace. To create a multi-Region keyspace, see the section called “Create a multi-Region keyspace”.

Console

Create a multi-Region table with default settings (console)
1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. Choose a multi-Region keyspace.
3. On the Tables tab, choose Create table.
4. For Table name, enter the name for the table. The AWS Regions that this table is being replicated in are shown in the info box.
5. Continue with the table schema.
6. Under Table settings, continue with the Default settings option. Note the following default settings for multi-Region tables:
   • Capacity mode – The default capacity mode is On-demand. For more information about configuring provisioned mode, see the section called “Create a multi-Region table in provisioned mode”.
   • Encryption key management – Only the AWS owned key option is supported.
   • Client-side timestamps – This feature is required for multi-Region tables.
   • Choose Customize settings if you need to turn on Time to Live (TTL) for the table and all its replicas.
Note
You won't be able to change TTL settings on an existing multi-Region table.

7. To finish, choose Create table.

Cassandra Query Language (CQL)

Create a multi-Region table in on-demand mode with default settings
• To create a multi-Region table with default settings, you can use the following CQL statement.

CREATE TABLE mykeyspace.mytable(pk int, ck int, PRIMARY KEY (pk, ck))
WITH CUSTOM_PROPERTIES = {
    'capacity_mode':{ 'throughput_mode':'PAY_PER_REQUEST' },
    'point_in_time_recovery':{ 'status':'enabled' },
    'encryption_specification':{ 'encryption_type':'AWS_OWNED_KMS_KEY' },
    'client_side_timestamps':{ 'status':'enabled' }
};

CLI

Using the AWS CLI
1. To create a multi-Region table with default settings, you only need to specify the schema. You can use the following example.

aws keyspaces create-table --keyspace-name mykeyspace --table-name mytable \
    --schema-definition 'allColumns=[{name=pk,type=int}],partitionKeys=[{name=pk}]'

The output of the command is:

{
    "resourceArn": "arn:aws:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/table/mytable"
}

2. To confirm the table's settings, you can use the following statement.

aws keyspaces get-table --keyspace-name mykeyspace --table-name mytable

The output shows all default settings of a multi-Region table.

{
    "keyspaceName": "mykeyspace",
    "tableName": "mytable",
    "resourceArn": "arn:aws:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/table/mytable",
    "creationTimestamp": "2023-12-19T16:50:37.639000+00:00",
    "status": "ACTIVE",
    "schemaDefinition": {
        "allColumns": [
            {
                "name": "pk",
                "type": "int"
            }
        ],
        "partitionKeys": [
            {
                "name": "pk"
            }
        ],
        "clusteringKeys": [],
        "staticColumns": []
    },
    "capacitySpecification": {
        "throughputMode": "PAY_PER_REQUEST",
        "lastUpdateToPayPerRequestTimestamp": "2023-12-19T16:50:37.639000+00:00"
    },
    "encryptionSpecification": {
        "type": "AWS_OWNED_KMS_KEY"
    },
    "pointInTimeRecovery": {
        "status": "DISABLED"
    },
    "defaultTimeToLive": 0,
    "comment": {
        "message": ""
    },
    "clientSideTimestamps": {
        "status": "ENABLED"
    },
    "replicaSpecifications": [
        {
            "region": "us-east-1",
            "status": "ACTIVE",
            "capacitySpecification": {
                "throughputMode": "PAY_PER_REQUEST",
                "lastUpdateToPayPerRequestTimestamp": 1702895811.469
            }
        },
        {
            "region": "eu-north-1",
            "status": "ACTIVE",
            "capacitySpecification": {
                "throughputMode": "PAY_PER_REQUEST",
                "lastUpdateToPayPerRequestTimestamp": 1702895811.121
            }
        }
    ]
}

Create a multi-Region table in provisioned mode with auto scaling in Amazon Keyspaces

This section provides examples of how to create a multi-Region table in provisioned mode with auto scaling.
You can do this on the Amazon Keyspaces console, using CQL or the AWS CLI. For more information about supported configurations and multi-Region replication features, see the section called “Usage notes”. To create a multi-Region keyspace, see the section called “Create a multi-Region keyspace”.

When you create a new multi-Region table in provisioned mode with auto scaling settings, you can specify the general settings for the table that are valid for all AWS Regions that the table is replicated in. You can then overwrite read capacity settings and read auto scaling settings for each replica. The write capacity, however, remains synchronized between all replicas to ensure that there's enough capacity to replicate writes across all Regions.

Note
Amazon Keyspaces automatic scaling requires the presence of a service-linked role (AWSServiceRoleForApplicationAutoScaling_CassandraTable) that performs automatic scaling actions on your behalf. This role is created automatically for you. For more information, see the section called “Using service-linked roles”.

Console

Create a new multi-Region table with automatic scaling enabled
1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. Choose a multi-Region keyspace.
3. On the Tables tab, choose Create table.
4. On the Create table page in the Table details section, select a keyspace and provide a name for the new table.
5. In the Columns section, create the schema for your table.
6. In the Primary key section, define the primary key of the table and select optional clustering columns.
7. In the Table settings section, choose Customize settings.
8. Continue to Read/write capacity settings.
9. For Capacity mode, choose Provisioned.
10. In the Read capacity section, confirm that Scale automatically is selected. You can select to configure the same read capacity units for all AWS Regions that the table is replicated in. Alternatively, you can clear the check box and configure the read capacity for each Region differently. If you choose to configure each Region differently, you select the minimum and maximum read capacity units for each table replica, as well as the target utilization.
• Minimum capacity units – Enter the value for the minimum level of throughput that the table should always be ready to support. The value must be between 1 and the maximum throughput per second quota for your account (40,000 by default).
• Maximum capacity units – Enter the maximum amount of throughput that you want to provision for the table. The value must be between 1 and the maximum throughput per second quota for your account (40,000 by default).
• Target utilization – Enter a target utilization rate between 20% and 90%. When traffic exceeds the defined target utilization rate, capacity is automatically scaled up. When traffic falls below the defined target, it is automatically scaled down again.
• Clear the Scale automatically check box if you want to provision the table's read capacity manually. This setting applies to all replicas of the table.

Note
To ensure that there's enough read capacity for all replicas, we recommend Amazon Keyspaces automatic scaling for provisioned multi-Region tables.

Note
To learn more about default quotas for your account and how to increase them, see Quotas.

11. In the Write capacity section, confirm that Scale automatically is selected. Then configure the capacity units for the table. The write capacity units stay synced across all AWS Regions to ensure that there is enough capacity to replicate write events across the Regions.
• Clear Scale automatically if you want to provision the table's write capacity manually. This setting applies to all replicas of the table.

Note
To ensure that there's enough write capacity for all replicas, we recommend Amazon Keyspaces automatic scaling for provisioned multi-Region tables.

12. Choose Create table. Your table is created with the specified automatic scaling parameters.

Cassandra Query Language (CQL)

Create a multi-Region table with provisioned capacity mode and auto scaling using CQL
• To create a multi-Region table in provisioned mode with auto scaling, you must first specify the capacity mode by defining CUSTOM_PROPERTIES for the table.
After specifying provisioned capacity mode, you can configure the auto scaling settings for the table using AUTOSCALING_SETTINGS. For detailed information about auto scaling settings, the target tracking policy, target value, and optional settings, see the section called “Create a new table with automatic scaling”.

To define the read capacity for a table replica in a specific Region, you can configure the following parameters as part of the table's replica_updates:
• The Region
• The provisioned read capacity units (optional)
• Auto scaling settings for read capacity (optional)

The following example shows a CREATE TABLE statement for a multi-Region table in provisioned mode. The general write and read capacity auto scaling settings are the same. However, the read auto scaling settings specify additional cooldown periods of 60 seconds before scaling the table's read capacity up or down. In addition, the read capacity auto scaling settings for the US East (N. Virginia) Region are higher than those for other replicas. Also, the target value is set to 70% instead of 50%.

CREATE TABLE mykeyspace.mytable(pk int, ck int, PRIMARY KEY (pk, ck))
WITH CUSTOM_PROPERTIES = {
    'capacity_mode': {
        'throughput_mode': 'PROVISIONED',
        'read_capacity_units': 5,
        'write_capacity_units': 5
    }
} AND AUTOSCALING_SETTINGS = {
    'provisioned_write_capacity_autoscaling_update': {
        'maximum_units': 10,
        'minimum_units': 5,
        'scaling_policy': {
            'target_tracking_scaling_policy_configuration': {
                'target_value': 50
            }
        }
    },
    'provisioned_read_capacity_autoscaling_update': {
        'maximum_units': 10,
        'minimum_units': 5,
        'scaling_policy': {
            'target_tracking_scaling_policy_configuration': {
                'target_value': 50,
                'scale_in_cooldown': 60,
                'scale_out_cooldown': 60
            }
        }
    },
    'replica_updates': {
        'us-east-1': {
            'provisioned_read_capacity_autoscaling_update': {
                'maximum_units': 20,
                'minimum_units': 5,
                'scaling_policy': {
                    'target_tracking_scaling_policy_configuration': {
                        'target_value': 70
                    }
                }
            }
        }
    }
};

CLI

Create a new multi-Region table in provisioned mode with auto scaling using the AWS CLI
• To create a multi-Region table in provisioned mode with auto scaling configuration, you can use the AWS CLI. Note that you must use the Amazon Keyspaces CLI create-table command to configure multi-Region auto scaling settings. This is because Application Auto Scaling, the service that Amazon Keyspaces uses to perform auto scaling on your behalf, doesn't support multiple Regions.
For more information about auto scaling settings, the target tracking policy, target value, and optional settings, see the section called “Create a new table with automatic scaling”.

To define the read capacity for a table replica in a specific Region, you can configure the following parameters as part of the table's replicaSpecifications:
• The Region
• The provisioned read capacity units (optional)
• Auto scaling settings for read capacity (optional)

When you're creating provisioned multi-Region tables with complex auto scaling settings and different configurations for table replicas, it's helpful to load the table's auto scaling settings and replica configurations from JSON files.

To use the following code example, you can download the example JSON files from auto-scaling.zip, and extract auto-scaling.json and replication.json. Take note of the path to the files. In this example, the JSON files are located in the current directory. For different file path options, see How to load parameters from a file.

aws keyspaces create-table --keyspace-name mykeyspace --table-name mytable \
    --schema-definition 'allColumns=[{name=pk,type=int},{name=ck,type=int}],partitionKeys=[{name=pk},{name=ck}]' \
    --capacity-specification throughputMode=PROVISIONED,readCapacityUnits=1,writeCapacityUnits=1 \
    --auto-scaling-specification file://auto-scaling.json \
    --replica-specifications file://replication.json
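The downloadable files aren't reproduced in this guide. As a rough sketch, an auto-scaling.json file passed to --auto-scaling-specification might look like the following; the structure mirrors the autoScalingSpecification output shown later in this section, but the exact contents of the sample file are an assumption.

{
    "writeCapacityAutoScaling": {
        "autoScalingDisabled": false,
        "minimumUnits": 5,
        "maximumUnits": 10,
        "scalingPolicy": {
            "targetTrackingScalingPolicyConfiguration": {
                "targetValue": 50
            }
        }
    },
    "readCapacityAutoScaling": {
        "autoScalingDisabled": false,
        "minimumUnits": 5,
        "maximumUnits": 10,
        "scalingPolicy": {
            "targetTrackingScalingPolicyConfiguration": {
                "targetValue": 50,
                "scaleInCooldown": 60,
                "scaleOutCooldown": 60
            }
        }
    }
}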
Update the provisioned capacity and auto scaling settings for a multi-Region table in Amazon Keyspaces

This section includes examples of how to use the console, CQL, and the AWS CLI to manage the Amazon Keyspaces auto scaling settings of provisioned multi-Region tables. For more information about general auto scaling configuration options and how they work, see the section called “Manage throughput capacity with auto scaling”.

Note that if you're using provisioned capacity mode for multi-Region tables, you must always use Amazon Keyspaces API calls to configure auto scaling. This is because the underlying Application Auto Scaling API operations are not Region-aware. For more information on how to estimate the write capacity throughput of provisioned multi-Region tables, see the section called “Estimate capacity for a multi-Region table”. For more information about the Amazon Keyspaces API, see Amazon Keyspaces API Reference.

When you update the provisioned mode or auto scaling settings of a multi-Region table, you can update read capacity settings and the read auto scaling configuration for each replica of the table. The write capacity, however, remains synchronized between all replicas to ensure that there's enough capacity to replicate writes across all Regions.

Cassandra Query Language (CQL)

Update the provisioned capacity and auto scaling settings of a multi-Region table using CQL
• You can use ALTER TABLE to update the capacity mode and auto scaling settings of an existing table. If you're updating a table that is currently in on-demand capacity mode, capacity_mode is required. If your table is already in provisioned capacity mode, this field can be omitted. For detailed information about auto scaling settings, the target tracking policy, target value, and optional settings, see the section called “Create a new table with automatic scaling”. In the same statement, you can also update the read capacity and auto scaling settings of table replicas in specific Regions by updating the table's replica_updates property. The following statement is an example of this.

ALTER TABLE mykeyspace.mytable
WITH CUSTOM_PROPERTIES = {
    'capacity_mode': {
        'throughput_mode': 'PROVISIONED',
        'read_capacity_units': 1,
        'write_capacity_units': 1
    }
} AND AUTOSCALING_SETTINGS = {
    'provisioned_write_capacity_autoscaling_update': {
        'maximum_units': 10,
        'minimum_units': 5,
        'scaling_policy': {
            'target_tracking_scaling_policy_configuration': {
                'target_value': 50
            }
        }
    },
    'provisioned_read_capacity_autoscaling_update': {
        'maximum_units': 10,
        'minimum_units': 5,
        'scaling_policy': {
            'target_tracking_scaling_policy_configuration': {
                'target_value': 50,
                'scale_in_cooldown': 60,
                'scale_out_cooldown': 60
            }
        }
    },
    'replica_updates': {
        'us-east-1': {
            'provisioned_read_capacity_autoscaling_update': {
                'maximum_units': 20,
                'minimum_units': 5,
                'scaling_policy': {
                    'target_tracking_scaling_policy_configuration': {
                        'target_value': 70
                    }
                }
            }
        }
    }
};

CLI

Update the provisioned capacity and auto scaling settings of a multi-Region table using the AWS CLI
• To update the provisioned mode and auto scaling configuration of an existing table, you can use the AWS CLI update-table command. Note that you must use the Amazon Keyspaces CLI commands to create or modify multi-Region auto scaling settings. This is because Application Auto Scaling, the service that Amazon Keyspaces uses to perform auto scaling of table capacity on your behalf, doesn't support multiple AWS Regions.
To update the read capacity for a table replica in a specific Region, you can change one of the following optional parameters of the table's replicaSpecifications:
• The provisioned read capacity units (optional)
• Auto scaling settings for read capacity (optional)

When you're updating multi-Region tables with complex auto scaling settings and different configurations for table replicas, it's helpful to load the table's auto scaling settings and replica configurations from JSON files.

To use the following code example, you can download the example JSON files from auto-scaling.zip, and extract auto-scaling.json and replication.json. Take note of the path to the files. In this example, the JSON files are located in the current directory. For different file path options, see How to load parameters from a file.

aws keyspaces update-table --keyspace-name mykeyspace --table-name mytable \
    --capacity-specification throughputMode=PROVISIONED,readCapacityUnits=1,writeCapacityUnits=1 \
    --auto-scaling-specification file://auto-scaling.json \
    --replica-specifications file://replication.json
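Similarly, a replication.json file passed to --replica-specifications might contain a list of per-Region read settings along the lines of the following sketch. The field names follow the replicaSpecifications structure used elsewhere in this section, but the file contents shown here are an assumption.

[
    {
        "region": "us-east-1",
        "readCapacityUnits": 1,
        "readCapacityAutoScaling": {
            "autoScalingDisabled": false,
            "minimumUnits": 5,
            "maximumUnits": 20,
            "scalingPolicy": {
                "targetTrackingScalingPolicyConfiguration": {
                    "targetValue": 70
                }
            }
        }
    }
]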
View the provisioned capacity and auto scaling settings for a multi-Region table in Amazon Keyspaces

You can view a multi-Region table's provisioned capacity and auto scaling settings on the Amazon Keyspaces console, using CQL, or the AWS CLI. This section provides examples of how to do this using CQL and the AWS CLI.

Cassandra Query Language (CQL)

View the provisioned capacity and auto scaling settings of a multi-Region table using CQL
• To view the auto scaling configuration of a multi-Region table, use the following command.

SELECT * FROM system_multiregion_info.autoscaling
WHERE keyspace_name = 'mykeyspace' AND table_name = 'mytable';

The output for this command looks like the following:

keyspace_name | table_name | region | provisioned_read_capacity_autoscaling_update | provisioned_write_capacity_autoscaling_update
---------------+------------+----------------+-----------------------------------------------+-----------------------------------------------
 mykeyspace | mytable | ap-southeast-1 | {'minimum_units': 5, 'maximum_units': 10, 'scaling_policy': {'target_tracking_scaling_policy_configuration': {'scale_out_cooldown': 60, 'disable_scale_in': false, 'target_value': 50, 'scale_in_cooldown': 60}}} | {'minimum_units': 5, 'maximum_units': 10, 'scaling_policy': {'target_tracking_scaling_policy_configuration': {'scale_out_cooldown': 0, 'disable_scale_in': false, 'target_value': 50, 'scale_in_cooldown': 0}}}
 mykeyspace | mytable | us-east-1 | {'minimum_units': 5, 'maximum_units': 20, 'scaling_policy': {'target_tracking_scaling_policy_configuration': {'scale_out_cooldown': 60, 'disable_scale_in': false, 'target_value': 70, 'scale_in_cooldown': 60}}} | {'minimum_units': 5, 'maximum_units': 10, 'scaling_policy': {'target_tracking_scaling_policy_configuration': {'scale_out_cooldown': 0, 'disable_scale_in': false, 'target_value': 50, 'scale_in_cooldown': 0}}}
 mykeyspace | mytable | eu-west-1 | {'minimum_units': 5, 'maximum_units': 10, 'scaling_policy': {'target_tracking_scaling_policy_configuration': {'scale_out_cooldown': 60, 'disable_scale_in': false, 'target_value': 50, 'scale_in_cooldown': 60}}} | {'minimum_units': 5, 'maximum_units': 10, 'scaling_policy': {'target_tracking_scaling_policy_configuration': {'scale_out_cooldown': 0, 'disable_scale_in': false, 'target_value': 50, 'scale_in_cooldown': 0}}}

CLI

View the provisioned capacity and auto scaling settings of a multi-Region table using the AWS CLI
• To view the auto scaling configuration of a multi-Region table, you can use the get-table-auto-scaling-settings operation. The following CLI command is an example of this.

aws keyspaces get-table-auto-scaling-settings --keyspace-name mykeyspace --table-name mytable

You should see the following output.
{ "keyspaceName": "mykeyspace", "tableName": "mytable", "resourceArn": "arn:aws:cassandra:us-east-1:777788889999:/keyspace/ mykeyspace/table/mytable", "autoScalingSpecification": { "writeCapacityAutoScaling": { "autoScalingDisabled": false, "minimumUnits": 5, "maximumUnits": 10, "scalingPolicy": { "targetTrackingScalingPolicyConfiguration": { "disableScaleIn": false, "scaleInCooldown": 0, "scaleOutCooldown": 0, "targetValue": 50.0 } } }, "readCapacityAutoScaling": { "autoScalingDisabled": false, Configure multi-Region replication 366 Amazon Keyspaces (for Apache Cassandra) Developer Guide "minimumUnits": 5, "maximumUnits": 20, "scalingPolicy": { "targetTrackingScalingPolicyConfiguration": { "disableScaleIn": false, "scaleInCooldown": 60, "scaleOutCooldown": 60, "targetValue": 70.0 } } } }, "replicaSpecifications": [ { "region": "us-east-1", "autoScalingSpecification": { "writeCapacityAutoScaling": { "autoScalingDisabled": false, "minimumUnits": 5, "maximumUnits": 10, "scalingPolicy": { "targetTrackingScalingPolicyConfiguration": { "disableScaleIn": false, "scaleInCooldown": 0, "scaleOutCooldown": 0, "targetValue": 50.0 } } }, "readCapacityAutoScaling": { "autoScalingDisabled": false, "minimumUnits": 5, "maximumUnits": 20, "scalingPolicy": { "targetTrackingScalingPolicyConfiguration": { "disableScaleIn": false, "scaleInCooldown": 60, "scaleOutCooldown": 60, "targetValue": 70.0 } } } } }, Configure multi-Region replication 367 Amazon Keyspaces (for Apache Cassandra) Developer Guide { "region": "eu-north-1", "autoScalingSpecification": { "writeCapacityAutoScaling": { "autoScalingDisabled": false, "minimumUnits": 5, "maximumUnits": 10, "scalingPolicy": { "targetTrackingScalingPolicyConfiguration": { "disableScaleIn": false, "scaleInCooldown": 0, "scaleOutCooldown": 0, "targetValue": 50.0 } } }, "readCapacityAutoScaling": { "autoScalingDisabled": false, "minimumUnits": 5, "maximumUnits": 10, "scalingPolicy": { "targetTrackingScalingPolicyConfiguration": { "disableScaleIn": false, "scaleInCooldown": 60, "scaleOutCooldown": 60, "targetValue": 50.0 } } } } } ] } Turn off auto scaling for a table in Amazon Keyspaces This section provides examples of how to turn off auto scaling for a multi-Region table in provisioned capacity mode. You can do this on the Amazon Keyspaces console, using CQL or the AWS CLI. Configure multi-Region replication 368 Amazon Keyspaces (for Apache Cassandra) Developer Guide Important We recommend using auto scaling for multi-Region tables that use provisioned capacity mode. For more information, see the section called “Estimate capacity for a multi-Region table”. Note To delete the service-linked role that Application Auto Scaling uses, you must disable automatic scaling on all tables in the account across all AWS Regions. Console Turn off Amazon Keyspaces automatic scaling for an existing multi-Region table on the console 1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home. 2. Choose the table that you want to work with and choose the Capacity tab. 3. 4. In the Capacity settings section, choose Edit. To disable Amazon Keyspaces automatic scaling, clear the Scale automatically check box. Disabling automatic scaling deregisters the table as a scalable target with Application Auto Scaling. To delete the service-linked role that Application Auto Scaling uses to access your Amazon Keyspaces table, follow the steps in the section called “Deleting a service-linked role for Amazon Keyspaces”. 5. 
5. When the automatic scaling settings are defined, choose Save.

Cassandra Query Language (CQL)

Turn off auto scaling for a multi-Region table using CQL
• You can use ALTER TABLE to turn off auto scaling for an existing table. Note that you can't turn off auto scaling for an individual table replica. In the following example, auto scaling is turned off for the table's read capacity.

ALTER TABLE mykeyspace.mytable
WITH AUTOSCALING_SETTINGS = {
    'provisioned_read_capacity_autoscaling_update': {
        'autoscaling_disabled': true
    }
};

CLI

Turn off auto scaling for a multi-Region table using the AWS CLI
• You can use the AWS CLI update-table command to turn off auto scaling for an existing table. Note that you can't turn off auto scaling for an individual table replica. In the following example, auto scaling is turned off for the table's read capacity.
aws keyspaces update-table --keyspace-name mykeyspace --table-name mytable \
    --auto-scaling-specification readCapacityAutoScaling={autoScalingDisabled=true}

Set the provisioned capacity of a multi-Region table manually in Amazon Keyspaces

If you have to turn off auto scaling for a multi-Region table, you can provision the table's read capacity for a replica table manually using CQL or the AWS CLI.

Note
We recommend using auto scaling for multi-Region tables that use provisioned capacity mode. For more information, see the section called “Estimate capacity for a multi-Region table”.

Cassandra Query Language (CQL)

Setting the provisioned capacity of a multi-Region table manually using CQL
• You can use ALTER TABLE to provision the table's read capacity for a replica table manually.

ALTER TABLE mykeyspace.mytable
WITH CUSTOM_PROPERTIES = {
    'capacity_mode': {
        'throughput_mode': 'PROVISIONED',
        'read_capacity_units': 1,
        'write_capacity_units': 1
    },
    'replica_updates': {
        'us-east-1': {
            'read_capacity_units': 2
        }
    }
};

CLI

Set the provisioned capacity of a multi-Region table manually using the AWS CLI
• If you have to turn off auto scaling for a multi-Region table, you can use update-table to provision the table's read capacity for a replica table manually.

aws keyspaces update-table --keyspace-name mykeyspace --table-name mytable \
    --capacity-specification throughputMode=PROVISIONED,readCapacityUnits=1,writeCapacityUnits=1 \
    --replica-specifications region="us-east-1",readCapacityUnits=5

Backup and restore data with point-in-time recovery for Amazon Keyspaces

Point-in-time recovery (PITR) helps protect your Amazon Keyspaces tables from accidental write or delete operations by providing continuous backups of your table data. For example, suppose that a test script accidentally writes to a production Amazon Keyspaces table. With point-in-time recovery, you can restore that table's data to any second in time since PITR was enabled within the last 35 days. If you delete a table with point-in-time recovery enabled, you can query for the deleted table's data for 35 days (at no additional cost), and restore it to the state it was in just before the point of deletion.

You can restore an Amazon Keyspaces table to a point in time by using the console, the AWS SDK and the AWS Command Line Interface (AWS CLI), or Cassandra Query Language (CQL). For more information, see Use point-in-time recovery in Amazon Keyspaces.

Point-in-time operations have no performance or availability impact on the base table, and restoring a table doesn't consume additional throughput. For information about PITR quotas, see Quotas. For information about pricing, see Amazon Keyspaces (for Apache Cassandra) pricing.

Topics
• How point-in-time recovery works in Amazon Keyspaces
• Use point-in-time recovery in Amazon Keyspaces

How point-in-time recovery works in Amazon Keyspaces

This section provides an overview of how Amazon Keyspaces point-in-time recovery (PITR) works.
For more information about pricing, see Amazon Keyspaces (for Apache Cassandra) pricing.

Topics
• Time window for PITR continuous backups
• PITR restore settings
• PITR restore of encrypted tables
• PITR restore of multi-Region tables
• PITR restore of tables with user-defined types (UDTs)
• Table restore time with PITR
• Amazon Keyspaces PITR and integration with AWS services

Time window for PITR continuous backups

Amazon Keyspaces PITR uses two timestamps to maintain the time frame for which restorable backups are available for a table.

• Earliest restorable time – Marks the time of the earliest restorable backup. The earliest restorable backup goes back up to 35 days or to when PITR was enabled, whichever is more recent. The maximum backup window of 35 days can't be modified.
• Current time – The timestamp for the latest restorable backup is the current time. If no timestamp is provided during a restore, the current time is used.

When PITR is enabled, you can restore to any point in time between EarliestRestorableDateTime and CurrentTime. You can only restore table data to a time when PITR was enabled. If you disable PITR and later reenable it, you reset the start time for the first available backup to when PITR was reenabled. This means that disabling PITR erases your backup history.
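To check whether PITR is enabled for a table, you can query the table's custom properties. The following CQL statement is a sketch; the exact shape of the returned point_in_time_recovery map is an assumption, and the keyspace and table names are placeholders.

SELECT custom_properties
FROM system_schema_mcs.tables
WHERE keyspace_name = 'mykeyspace' AND table_name = 'mytable';

In the output, a point_in_time_recovery entry with 'status': 'enabled' and an earliest restorable timestamp indicates the restore window that is currently available.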
Note
If a new table is created with the same qualified name (for example, mykeyspace.mytable) as a previously deleted table, the deleted table will no longer be restorable. If you attempt to do this from the console, a warning is displayed.

PITR restore settings

When you restore a table using PITR, Amazon Keyspaces restores your source table's schema and data to the state based on the selected timestamp (day:hour:minute:second) to a new table. PITR doesn't overwrite existing tables.

In addition to the table's schema and data, PITR restores the custom_properties from the source table. Unlike the table's data, which is restored based on the selected timestamp between earliest restore time and current time, custom properties are always restored based on the table's settings as of the current time. That is, the settings of the restored table match the settings of the source table as of the time the restore was initiated. If you want to overwrite these settings during restore, you can do so using WITH custom_properties. Custom properties include the following settings.

• Read/write capacity mode
• Provisioned throughput capacity settings
• PITR settings

If the table is in provisioned capacity mode with auto scaling enabled, the restore operation also restores the table's auto scaling settings. You can overwrite them using the autoscaling_settings parameter in CQL or autoScalingSpecification with the CLI. For more information on auto scaling settings, see the section called "Manage throughput capacity with auto scaling".

When you do a full table restore, all table settings for the restored table come from the current settings of the source table at the time of the restore. For example, suppose that a table's provisioned throughput was recently lowered to 50 read capacity units and 50 write capacity units, and that you then restore the table's state to three weeks ago, when its provisioned throughput was set to 100 read capacity units and 100 write capacity units. In this case, Amazon Keyspaces restores your table data to that point in time, but uses the current provisioned throughput settings (50 read capacity units and 50 write capacity units).

The following settings are not restored, and you must configure them manually for the new table.

• AWS Identity and Access Management (IAM) policies
• Amazon CloudWatch metrics and alarms
• Tags (can be added to the CQL RESTORE statement using WITH TAGS)
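As an illustration, the following statement sketches how these overrides fit together in a single restore: it pins the restore to a timestamp, switches the restored table to provisioned capacity, and adds a tag. The table names, timestamp, capacity values, and tag are placeholders to adapt to your own resources.

RESTORE TABLE mykeyspace.mytable_restored FROM TABLE mykeyspace.mytable
WITH restore_timestamp = '2020-06-30T19:19:21.175Z'
AND custom_properties = {
    'capacity_mode': {
        'throughput_mode': 'PROVISIONED',
        'read_capacity_units': 100,
        'write_capacity_units': 100
    }
}
AND TAGS = {'environment':'restored'};

For the full list of restore options, see the section called "RESTORE TABLE" in the language reference.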
PITR restore of encrypted tables

When you restore a table using PITR, Amazon Keyspaces restores your source table's encryption settings. If the table was encrypted with an AWS owned key (default), the table is restored with the same setting automatically. If the table you want to restore was encrypted using a customer managed key, the same customer managed key needs to be accessible to Amazon Keyspaces to restore the table data.

You can change the encryption settings of the table at the time of restore. To change from an AWS owned key to a customer managed key, you need to supply a valid and accessible customer managed key at the time of restore. If you want to change from a customer managed key to an AWS owned key, confirm that Amazon Keyspaces has access to the customer managed key of the source table so that it can restore the table with an AWS owned key. For more information about encryption at rest settings for tables, see the section called "How it works".

Note
If the table was deleted because Amazon Keyspaces lost access to your customer managed key, you need to ensure the customer managed key is accessible to Amazon Keyspaces before trying to restore the table. A table that was encrypted with a customer managed key can't be restored if Amazon Keyspaces doesn't have access to that key. For more information, see Troubleshooting key access in the AWS Key Management Service Developer Guide.
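For example, a restore that switches the new table to a customer managed key could look like the following sketch. It assumes that the encryption_specification custom property accepts an encryption_type and a kms_key_identifier, and the key ARN shown is a placeholder for a key that Amazon Keyspaces can access.

RESTORE TABLE mykeyspace.mytable_restored FROM TABLE mykeyspace.mytable
WITH custom_properties = {
    'encryption_specification': {
        'encryption_type': 'CUSTOMER_MANAGED_KMS_KEY',
        'kms_key_identifier': 'arn:aws:kms:us-east-1:111122223333:key/11111111-2222-3333-4444-555555555555'
    }
};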
PITR restore of multi-Region tables

You can restore a multi-Region table using PITR. For the restore operation to be successful, PITR has to be enabled on all replicas of the source table, and both the source and the destination table have to be replicated to the same AWS Regions.

Amazon Keyspaces restores the settings of the source table in each of the replicated Regions that are part of the keyspace. You can also override settings during the restore operation. For more information about settings that can be changed during the restore, see the section called "Restore settings". For more information about multi-Region replication, see the section called "How it works".

PITR restore of tables with user-defined types (UDTs)

You can restore a table that uses UDTs. For the restore operation to be successful, the referenced UDTs have to exist and be valid in the keyspace. If any required UDT is missing when you attempt to restore a table, Amazon Keyspaces tries to restore the UDT schema automatically and then continues to restore the table.

If you removed and recreated the UDT, Amazon Keyspaces restores the UDT with the new schema of the UDT and rejects the request to restore the table using the original UDT schema. When you delete and recreate a UDT, the recreated UDT is considered a new UDT, even if its schema is the same as the schema of the deleted UDT. In this case, if you want to restore the table with the old UDT schema, you can restore the table to a new keyspace.

If the UDT is missing and Amazon Keyspaces attempts to restore the UDT, the attempt fails if you have reached the maximum number of UDTs for the account in the Region. For more information about UDT quotas and default values, see the section called "Quotas and default values for user-defined types (UDTs) in Amazon Keyspaces". For more information about working with UDTs, see the section called "User-defined types (UDTs)".
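For example, to recover a table together with its original UDT schema after the UDT has been deleted and recreated, you can restore into a different keyspace. The following is a minimal sketch; restore_keyspace is a hypothetical target keyspace that must already exist, and the timestamp is a placeholder.

RESTORE TABLE restore_keyspace.mytable_with_old_udt FROM TABLE mykeyspace.mytable
WITH restore_timestamp = '2024-05-01T00:00:00.000Z';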
Table restore time with PITR

The time it takes to restore a table is based on multiple factors and isn't always correlated directly with the size of the table. The following are some considerations for restore times.

• You restore backups to a new table. It can take up to 20 minutes (even if the table is empty) to perform all the actions to create the new table and initiate the restore process.
• Restore times for large tables with well-distributed data models can be several hours or longer.
• If your source table contains data that is significantly skewed, the time to restore might increase. For example, if your table's primary key is using the month of the year as a partition key, and all your data is from the month of December, you have skewed data.

A best practice when planning for disaster recovery is to regularly document average restore completion times and establish how these times affect your overall Recovery Time Objective.

Amazon Keyspaces PITR and integration with AWS services

The following PITR operations are logged using AWS CloudTrail to enable continuous monitoring and auditing.

• Create a new table with PITR enabled or disabled.
• Enable or disable PITR on an existing table.
• Restore an active or a deleted table.

For more information, see Logging Amazon Keyspaces API calls with AWS CloudTrail.

You can perform the following PITR actions using AWS CloudFormation.

• Create a new table with PITR enabled or disabled.
• Enable or disable PITR on an existing table.

For more information, see the Cassandra Resource Type Reference in the AWS CloudFormation User Guide.

Use point-in-time recovery in Amazon Keyspaces

With Amazon Keyspaces (for Apache Cassandra), you can restore tables to a specific point in time using point-in-time recovery (PITR). PITR enables you to restore a table to a prior state within the last 35 days, providing data protection and recovery capabilities. This feature is valuable in cases such as accidental data deletion, application errors, or testing. You can quickly and efficiently recover data, minimizing downtime and data loss.

The following sections guide you through the process of restoring tables using PITR in Amazon Keyspaces, ensuring data integrity and business continuity.

Topics
• Configure restore table IAM permissions for Amazon Keyspaces PITR
• Configure PITR for a table in Amazon Keyspaces
• Turn off PITR for an Amazon Keyspaces table
• Restore a table from backup to a specified point in time in Amazon Keyspaces
• Restore a deleted table using Amazon Keyspaces PITR
Configure restore table IAM permissions for Amazon Keyspaces PITR

This section summarizes how to configure permissions for an AWS Identity and Access Management (IAM) principal to restore Amazon Keyspaces tables. In IAM, the AWS managed policy AmazonKeyspacesFullAccess includes the permissions to restore Amazon Keyspaces tables. To implement a custom policy with minimum required permissions, consider the requirements outlined in this section.

To successfully restore a table, the IAM principal needs the following minimum permissions:

• cassandra:Restore – The restore action is required for the target table to be restored.
• cassandra:Select – The select action is required to read from the source table.
• cassandra:TagResource – The tag action is optional, and only required if the restore operation adds tags.

This is an example of a policy that grants minimum required permissions to a user to restore tables in keyspace mykeyspace.

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "cassandra:Restore",
            "cassandra:Select"
         ],
         "Resource":[
            "arn:aws:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/*",
            "arn:aws:cassandra:us-east-1:111122223333:/keyspace/system*"
         ]
      }
   ]
}

Additional permissions to restore a table might be required based on other selected features. For example, if the source table is encrypted at rest with a customer managed key, Amazon Keyspaces must have permissions to access the customer managed key of the source table to successfully restore the table. For more information, see the section called "PITR and encrypted tables".

If you are using IAM policies with condition keys to restrict incoming traffic to specific sources, you must ensure that Amazon Keyspaces has permission to perform a restore operation on your principal's behalf. You must add an aws:ViaAWSService condition key to your IAM policy if your policy restricts incoming traffic to any of the following:

• VPC endpoints with aws:SourceVpce
• IP ranges with aws:SourceIp
• VPCs with aws:SourceVpc

The aws:ViaAWSService condition key allows access when any AWS service makes a request using the principal's credentials. For more information, see IAM JSON policy elements: Condition key in the IAM User Guide.
The following is an example of a policy that restricts source traffic to a specific IP address and allows Amazon Keyspaces to restore a table on the principal's behalf.

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"CassandraAccessForCustomIp",
         "Effect":"Allow",
         "Action":"cassandra:*",
         "Resource":"*",
         "Condition":{
            "Bool":{
               "aws:ViaAWSService":"false"
            },
            "ForAnyValue:IpAddress":{
               "aws:SourceIp":[
                  "123.45.167.89"
               ]
            }
         }
      },
      {
         "Sid":"CassandraAccessForAwsService",
         "Effect":"Allow",
         "Action":"cassandra:*",
         "Resource":"*",
         "Condition":{
            "Bool":{
               "aws:ViaAWSService":"true"
            }
         }
      }
   ]
}

For an example policy using the aws:ViaAWSService global condition key, see the section called "VPC endpoint policies and Amazon Keyspaces point-in-time recovery (PITR)".

Configure PITR for a table in Amazon Keyspaces

You can configure a table in Amazon Keyspaces for backup and restore operations using PITR with the console, CQL, or the AWS CLI. When you create a new table using CQL or the AWS CLI, you must explicitly enable PITR in the create table statement. When you create a new table using the console, PITR is enabled by default. To learn how to restore a table, see the section called "Restore a table to a point in time".

Console

Configure PITR for a table using the console

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Tables and select the table you want to edit.
3. On the Backups tab, choose Edit.
4. In the Edit point-in-time recovery settings section, select Enable Point-in-time recovery.
5. Choose Save changes.

Cassandra Query Language (CQL)

Configure PITR for a table using CQL

1. You can manage PITR settings for tables by using the point_in_time_recovery custom property.
To enable PITR when you're creating a new table, you must set the status of point_in_time_recovery to enabled. You can use the following CQL command as an example.

CREATE TABLE "my_keyspace1"."my_table1"(
    "id" int,
    "name" ascii,
    "date" timestamp,
    PRIMARY KEY("id"))
WITH CUSTOM_PROPERTIES = {
    'capacity_mode':{'throughput_mode':'PAY_PER_REQUEST'},
    'point_in_time_recovery':{'status':'enabled'}
}

Note
If no point-in-time recovery custom property is specified, point-in-time recovery is disabled by default.

2. To enable PITR for an existing table using CQL, run the following CQL command.

ALTER TABLE mykeyspace.mytable
WITH custom_properties = {'point_in_time_recovery': {'status': 'enabled'}}

CLI

Configure PITR for a table using the AWS CLI

1. You can manage PITR settings for tables by using the UpdateTable API.

To enable PITR when you're creating a new table, you must include point-in-time-recovery 'status=ENABLED' in the create-table command. You can use the following AWS CLI command as an example. The command has been broken into separate lines to improve readability.

aws keyspaces create-table --keyspace-name 'myKeyspace' --table-name 'myTable' \
    --schema-definition 'allColumns=[{name=id,type=int},{name=name,type=text},{name=date,type=timestamp}],partitionKeys=[{name=id}]' \
    --point-in-time-recovery 'status=ENABLED'

Note
If no point-in-time recovery value is specified, point-in-time recovery is disabled by default.

2. To confirm the point-in-time recovery setting for a table, you can use the following AWS CLI command.

aws keyspaces get-table --keyspace-name 'myKeyspace' --table-name 'myTable'

3. To enable PITR for an existing table using the AWS CLI, run the following command.

aws keyspaces update-table --keyspace-name 'myKeyspace' --table-name 'myTable' \
    --point-in-time-recovery 'status=ENABLED'

Turn off PITR for an Amazon Keyspaces table

You can turn off PITR for an Amazon Keyspaces table at any time using the console, CQL, or the AWS CLI.

Important
Disabling PITR deletes your backup history immediately, even if you reenable PITR on the table within 35 days.

To learn how to restore a table, see the section called "Restore a table to a point in time".

Console

Disable PITR for a table using the console

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Tables and select the table you want to edit.
3. On the Backups tab, choose Edit.
4. In the Edit point-in-time recovery settings section, clear the Enable Point-in-time recovery check box.
5. Choose Save changes.

Cassandra Query Language (CQL)

Disable PITR for a table using CQL

• To disable PITR for an existing table, run the following CQL command.
ALTER TABLE mykeyspace.mytable
WITH custom_properties = {'point_in_time_recovery': {'status': 'disabled'}}

CLI

Disable PITR for a table using the AWS CLI

• To disable PITR for an existing table, run the following AWS CLI command.

aws keyspaces update-table --keyspace-name 'myKeyspace' --table-name 'myTable' \
    --point-in-time-recovery 'status=DISABLED'

Restore a table from backup to a specified point in time in Amazon Keyspaces

The following section demonstrates how to restore an existing Amazon Keyspaces table to a specified point in time.

Note
This procedure assumes that the table you're using has been configured with point-in-time recovery. To enable PITR for a table, see the section called "Configure PITR".

Important
While a restore is in progress, don't modify or delete the AWS Identity and Access Management (IAM) policies that grant the IAM principal (for example, user, group, or role) permission to perform the restore. Otherwise, unexpected behavior can result. For example, if you remove write permissions for a table while that table is being restored, the underlying RestoreTableToPointInTime operation can't write any of the restored data to the table. You can modify or delete permissions only after the restore operation is complete.

Console

Restore a table to a specified point in time using the console

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane on the left side of the console, choose Tables.
3. In the list of tables, choose the table you want to restore.
4. On the Backups tab of the table, in the Point-in-time recovery section, choose Restore.
5. For the new table name, enter a new name for the restored table, for example mytable_restored.
6. To define the point in time for the restore operation, you can choose between two options:

• Select the preconfigured Earliest time.
• Select Specify date and time and enter the date and
time you want to restore the new table to.

Note
You can restore to any point in time between Earliest time and the current time. Amazon Keyspaces restores your table data to the state based on the selected date and time (day:hour:minute:second).

7. Choose Restore to start the restore process.

The table that is being restored is shown with the status Restoring. After the restore process is finished, the status of the restored table changes to Active.

Cassandra Query Language (CQL)

Restore a table to a point in time using CQL

1. You can restore an active table to a point in time between earliest_restorable_timestamp and the current time. Current time is the default.

To confirm that point-in-time recovery is enabled for the table, query system_schema_mcs.tables as shown in this example.

SELECT custom_properties
FROM system_schema_mcs.tables
WHERE keyspace_name = 'mykeyspace' AND table_name = 'mytable';

Point-in-time recovery is enabled as shown in the following sample output.

custom_properties
-----------------
{
    ...,
    "point_in_time_recovery": {
        "earliest_restorable_timestamp":"2020-06-30T19:19:21.175Z",
        "status":"enabled"
    }
}

2. Restore the table.

• To restore the table to the current time, omit the WITH restore_timestamp = ... clause. The current timestamp is used.

RESTORE TABLE mykeyspace.mytable_restored FROM TABLE mykeyspace.mytable;

• You can also restore to a specific point in time, defined by a restore_timestamp in ISO 8601 format. You can specify any point in time during the last 35 days. For example, the following command restores the table to the EarliestRestorableDateTime.

RESTORE TABLE mykeyspace.mytable_restored FROM TABLE mykeyspace.mytable
WITH restore_timestamp = '2020-06-30T19:19:21.175Z';

For a full syntax description, see the section called "RESTORE TABLE" in the language reference.

3. To verify that the restore of the table was successful, query system_schema_mcs.tables to confirm the status of the table.

SELECT status
FROM system_schema_mcs.tables
WHERE keyspace_name = 'mykeyspace' AND table_name = 'mytable_restored';

The query shows the following output.

status
------
RESTORING

The table that is being restored is shown with the status Restoring. After the restore process is finished, the status of the table changes to Active.

CLI

Restore a table to a point in time using the AWS CLI

1. Create a simple table named myTable that has PITR enabled. The command has been broken up into separate lines for readability.

aws keyspaces create-table --keyspace-name 'myKeyspace' --table-name 'myTable' \
    --schema-definition 'allColumns=[{name=id,type=int},{name=name,type=text},{name=date,type=timestamp}],partitionKeys=[{name=id}]' \
    --point-in-time-recovery 'status=ENABLED'
2. Confirm the properties of the new table and review the earliestRestorableTimestamp for PITR.

aws keyspaces get-table --keyspace-name 'myKeyspace' --table-name 'myTable'

The output of this command returns the following.

{
    "keyspaceName": "myKeyspace",
    "tableName": "myTable",
    "resourceArn": "arn:aws:cassandra:us-east-1:111222333444:/keyspace/myKeyspace/table/myTable",
    "creationTimestamp": "2022-06-20T14:34:57.049000-07:00",
    "status": "ACTIVE",
    "schemaDefinition": {
        "allColumns": [
            {
                "name": "id",
                "type": "int"
            },
            {
                "name": "date",
                "type": "timestamp"
            },
            {
                "name": "name",
                "type": "text"
            }
        ],
        "partitionKeys": [
            {
                "name": "id"
            }
        ],
        "clusteringKeys": [],
        "staticColumns": []
    },
    "capacitySpecification": {
        "throughputMode": "PAY_PER_REQUEST",
        "lastUpdateToPayPerRequestTimestamp": "2022-06-20T14:34:57.049000-07:00"
    },
    "encryptionSpecification": {
        "type": "AWS_OWNED_KMS_KEY"
    },
    "pointInTimeRecovery": {
        "status": "ENABLED",
        "earliestRestorableTimestamp": "2022-06-20T14:35:13.693000-07:00"
    },
    "defaultTimeToLive": 0,
    "comment": {
        "message": ""
    }
}

3. Restore the table.

• To restore a table to a point in time, specify a restore_timestamp in ISO 8601 format. You can choose any point in time during the last 35 days, in one-second intervals. For example, the following command restores the table to the EarliestRestorableDateTime.

aws keyspaces restore-table --source-keyspace-name 'myKeyspace' --source-table-name 'myTable' \
    --target-keyspace-name 'myKeyspace' --target-table-name 'myTable_restored' \
    --restore-timestamp "2022-06-20 21:35:14.693"

The output of this command returns the ARN of the restored table.

{
    "restoredTableARN": "arn:aws:cassandra:us-east-1:111222333444:/keyspace/myKeyspace/table/myTable_restored"
}

• To restore the table to the current time, you can omit the restore-timestamp parameter.

aws keyspaces restore-table --source-keyspace-name 'myKeyspace' --source-table-name 'myTable' \
    --target-keyspace-name 'myKeyspace' --target-table-name 'myTable_restored1'

Restore a deleted table using Amazon Keyspaces PITR

The following procedure shows how to restore a deleted table from backup to the time of deletion. You can do this using CQL or the AWS CLI.

Note
This procedure assumes that PITR was enabled on the deleted table.
Cassandra Query Language (CQL)

Restore a deleted table using CQL

1. To confirm that point-in-time recovery is enabled for a deleted table, query the system table. Only tables with point-in-time recovery enabled are shown.

SELECT custom_properties
FROM system_schema_mcs.tables_history
WHERE keyspace_name = 'mykeyspace' AND table_name = 'my_table';

The query shows the following output.

custom_properties
------------------
{
    ...,
    "point_in_time_recovery":{
        "restorable_until_time":"2020-08-04T00:48:58.381Z",
        "status":"enabled"
    }
}

2. Restore the table to the time of deletion with the following sample statement.

RESTORE TABLE mykeyspace.mytable_restored FROM TABLE mykeyspace.mytable;

CLI

Restore a deleted table using the AWS CLI

1. Delete a table that you created previously that has PITR enabled. The following command is an example.

aws keyspaces delete-table --keyspace-name 'myKeyspace' --table-name 'myTable'

2. Restore the deleted table to the time of deletion with the following command.

aws keyspaces restore-table --source-keyspace-name 'myKeyspace' --source-table-name 'myTable' \
    --target-keyspace-name 'myKeyspace' --target-table-name 'myTable_restored2'

The output of this command returns the ARN of the restored table.

{
    "restoredTableARN": "arn:aws:cassandra:us-east-1:111222333444:/keyspace/myKeyspace/table/myTable_restored2"
}

Expire data with Time to Live (TTL) for Amazon Keyspaces (for Apache Cassandra)

Amazon Keyspaces (for Apache Cassandra) Time to Live (TTL) helps you simplify your application logic and optimize the price of storage by expiring data from tables automatically. Data that you no longer need is automatically deleted from your table based on the Time to Live value that you set. This makes it easier to comply with data retention policies based on business, industry, or regulatory requirements that define how long data needs to be retained or specify when data must be deleted.

For example, you can use TTL in an AdTech application to schedule when data for specific ads expires and is no longer visible to clients. You can also use TTL to retire older data automatically and save on your storage costs. You can set a default TTL value for the entire table, and overwrite that value for individual rows and columns.

TTL operations don't impact your application's performance. Also, the number of rows and columns marked to expire with TTL doesn't affect your table's availability.

Amazon Keyspaces automatically filters out expired data so that expired data isn't returned in query results or available for use in data manipulation language (DML) statements. Amazon Keyspaces typically deletes expired data from storage within 10 days of the expiration date.
In rare cases, Amazon Keyspaces may not be able to delete data within 10 days if there is sustained activity on the underlying storage partition, in order to protect availability. In these cases, Amazon Keyspaces continues to attempt to delete the expired data once traffic on the partition decreases. After the data is permanently deleted from storage, you stop incurring storage fees.

You can set, modify, or disable default TTL settings for new and existing tables by using the console, Cassandra Query Language (CQL), or the AWS CLI. On tables with default TTL configured, you can use CQL statements to override the default TTL settings of the table and apply custom TTL values to rows and columns. For more information, see the section called "Use INSERT to set custom TTL for new rows" and the section called "Use UPDATE to set custom TTL for rows and columns".

TTL pricing is based on the size of the rows being deleted or updated by using Time to Live. TTL operations are metered in units of TTL deletes. One TTL delete is consumed per KB of data per row that is deleted or updated. For example, updating a row that stores 2.5 KB of data and deleting one or more columns within the row at the same time requires three TTL deletes, and deleting an entire row that contains 3.5 KB of data requires four TTL deletes. For more information about pricing, see Amazon Keyspaces (for Apache Cassandra) pricing.

Topics
• Amazon Keyspaces Time to Live and integration with AWS services
• Create a new table with default Time to Live (TTL) settings
• Update the default Time to Live (TTL) value of a table
• Create table with custom Time to Live (TTL) settings enabled
• Update table with custom Time to Live (TTL)
• Use the INSERT statement to set custom Time to Live (TTL) values for new rows
• Use the UPDATE statement to edit custom Time to Live (TTL) settings for rows and columns

Amazon Keyspaces Time to Live and integration with AWS services

The following TTL metric is available in Amazon CloudWatch to enable continuous monitoring.
• TTLDeletes – The units consumed to delete or update data in a row by using Time to Live (TTL).

For more information about how to monitor CloudWatch metrics, see the section called "Monitoring with CloudWatch".

When you use AWS CloudFormation, you can turn on TTL when creating an Amazon Keyspaces table. For more information, see the AWS CloudFormation User Guide.

Create a new table with default Time to Live (TTL) settings

In Amazon Keyspaces, you can set a default TTL value for all rows in a table when the table is created. The default TTL value for a table is zero, which means that data doesn't expire automatically. If the default TTL value for a table is greater than zero, an expiration timestamp is added to each row. TTL values are set in seconds, and the maximum configurable value is 630,720,000 seconds, which is the equivalent of 20 years.

After table creation, you can overwrite the table's default TTL setting for specific rows or columns with CQL DML statements. For more information, see the section called "Use INSERT to set custom TTL for new rows" and the section called "Use UPDATE to set custom TTL for rows and columns".

When you enable TTL on a table, Amazon Keyspaces begins to store additional TTL-related metadata for each row. In addition, TTL uses expiration timestamps to track when rows or columns expire. The timestamps are stored as row metadata and contribute to the storage cost for the row.

After the TTL feature is enabled, you can't disable it for a table. Setting the table's default_time_to_live to 0 disables default expiration times for new data, but it doesn't deactivate the TTL feature or revert the table back to the original Amazon Keyspaces storage metadata or write behavior.

The following examples show how to create a new table with a default TTL value.

Console

Create a new table with a Time to Live default value using the console

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Tables, and then choose Create table.
3. On the Create table page in the Table details section, select a keyspace and provide a name for the new table.
4. In the Schema section, create the schema for your table.
5. In the Table settings section, choose Customize settings.
6. Continue to Time to Live (TTL).
In this step, you select the default TTL settings for the table. For the Default TTL period, enter the expiration time and choose the unit of time, for example seconds, days, or years. Amazon Keyspaces will store the value in seconds.

7. Choose Create table. Your table is created with the specified default TTL value.

Cassandra Query Language (CQL)

Create a new table with a default TTL value using CQL

1. The following statement creates a new table with the default TTL value set to 3,024,000 seconds, which represents 35 days.

CREATE TABLE my_table (
    userid uuid,
    time timeuuid,
    subject text,
    body text,
    user inet,
    PRIMARY KEY (userid, time)
) WITH default_time_to_live = 3024000;

2. To confirm the TTL settings for the new table, use the cqlsh DESCRIBE statement as shown in the following example. The output shows the default TTL setting for the table as default_time_to_live.

DESC TABLE my_table;

CREATE TABLE my_keyspace.my_table (
    userid uuid,
    time timeuuid,
    body text,
    subject text,
    user inet,
    PRIMARY KEY (userid, time)
) WITH CLUSTERING ORDER BY (time ASC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'class': 'com.amazonaws.cassandra.DefaultCaching'}
    AND comment = ''
    AND compaction = {'class': 'com.amazonaws.cassandra.DefaultCompaction'}
    AND compression = {'class': 'com.amazonaws.cassandra.DefaultCompression'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.0
    AND default_time_to_live = 3024000
    AND gc_grace_seconds = 7776000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 3600000
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';

CLI

Create a new table with a default TTL value using the AWS CLI

1. You can use the following command to create a new table with the default TTL value set to one year.

aws keyspaces create-table --keyspace-name 'myKeyspace' --table-name 'myTable' \
    --schema-definition 'allColumns=[{name=id,type=int},{name=name,type=text},{name=date,type=timestamp}],partitionKeys=[{name=id}]' \
    --default-time-to-live '31536000'

2. To confirm the TTL status of the table, you can use the following command.

aws keyspaces get-table --keyspace-name 'myKeyspace' --table-name 'myTable'

The output of the command looks like the following example.

{
    "keyspaceName": "myKeyspace",
    "tableName": "myTable",
    "resourceArn": "arn:aws:cassandra:us-east-1:123SAMPLE012:/keyspace/myKeyspace/table/myTable",
    "creationTimestamp": "2024-09-02T10:52:22.190000+00:00",
    "status": "ACTIVE",
    "schemaDefinition": {
        "allColumns": [
            {
                "name": "id",
                "type": "int"
            },
            {
                "name": "date",
                "type": "timestamp"
            },
            {
                "name": "name",
                "type": "text"
            }
        ],
        "partitionKeys": [
            {
                "name": "id"
            }
        ],
        "clusteringKeys": [],
        "staticColumns": []
    },
    "capacitySpecification": {
        "throughputMode": "PAY_PER_REQUEST",
        "lastUpdateToPayPerRequestTimestamp": "2024-09-02T10:52:22.190000+00:00"
    },
    "encryptionSpecification": {
        "type": "AWS_OWNED_KMS_KEY"
    },
    "pointInTimeRecovery": {
        "status": "DISABLED"
    },
    "ttl": {
        "status": "ENABLED"
    },
    "defaultTimeToLive": 31536000,
    "comment": {
        "message": ""
    },
    "replicaSpecifications": []
}
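After a default TTL is set, every newly written row carries the table's expiration time, which you can check with the TTL() function. The following is a minimal sketch against the my_table schema from the CQL example above; the uuid value is an illustrative placeholder, and the returned number of seconds is slightly below the 3,024,000-second default because it counts down from the time of the write.

INSERT INTO my_keyspace.my_table (userid, time, subject, body, user)
VALUES (b79cb3ba-745e-5d9a-8903-4a02327a7e09, now(), 'Message', 'Hello', '205.212.123.123');

SELECT TTL(subject) FROM my_keyspace.my_table
WHERE userid = b79cb3ba-745e-5d9a-8903-4a02327a7e09;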
Update the default Time to Live (TTL) value of a table

You can update an existing table with a new default TTL value. TTL values are set in seconds, and the maximum configurable value is 630,720,000 seconds, which is the equivalent of 20 years.

When you enable TTL on a table, Amazon Keyspaces begins to store additional TTL-related metadata for each row. In addition, TTL uses expiration timestamps to track when rows or columns expire. The timestamps are stored as row metadata and contribute to the storage cost for the row.

After TTL has been enabled for a table, you can overwrite the table's default TTL setting for specific rows or columns with CQL DML statements. For more information, see the section called "Use INSERT to set custom TTL for new rows" and the section called "Use UPDATE to set custom TTL for rows and columns".

After the TTL feature is enabled, you can't disable it for a table. Setting the table's default_time_to_live to 0 disables default expiration times for new data, but it doesn't deactivate the TTL feature or revert the table back to the original Amazon Keyspaces storage metadata or write behavior.

Follow these steps to update the default Time to Live settings for existing tables using the console, CQL, or the AWS CLI.

Console

Update the default TTL value of a table using the console

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. Choose the table that you want to update, and then choose the Additional settings tab.
3. Continue to Time to Live (TTL) and choose Edit.
4. For the Default TTL period, enter the expiration time and choose the unit of time, for example seconds, days, or years. Amazon Keyspaces will store the value in seconds. This doesn't change the TTL value of existing rows.
5. When the TTL settings are defined, choose Save changes.

Cassandra Query Language (CQL)

Update the default TTL value of a table using CQL

1. You can use ALTER TABLE to edit the default Time to Live (TTL) settings of a table. To update the default TTL setting of the table to 2,592,000 seconds, which represents 30 days, you can use the following statement.

ALTER TABLE my_table WITH default_time_to_live = 2592000;

2. To confirm the TTL settings for the updated table, use the cqlsh DESCRIBE statement as shown in the following example. The output shows the default TTL setting for the table as default_time_to_live.

DESC TABLE my_table;

The output of the statement should look similar to this example.

CREATE TABLE my_keyspace.my_table (
    id int PRIMARY KEY,
    date timestamp,
    name text
) WITH bloom_filter_fp_chance = 0.01
    AND caching = {'class': 'com.amazonaws.cassandra.DefaultCaching'}
    AND comment = ''
    AND compaction = {'class': 'com.amazonaws.cassandra.DefaultCompaction'}
    AND compression = {'class': 'com.amazonaws.cassandra.DefaultCompression'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.0
    AND default_time_to_live = 2592000
    AND gc_grace_seconds = 7776000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 3600000
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';

CLI

Update the default TTL value of a table using the AWS CLI

1. You can use update-table to edit the default TTL value of a table. To update the default TTL setting of the table to 2,592,000 seconds, which represents 30 days, you can use the following statement.

aws keyspaces update-table --keyspace-name 'myKeyspace' --table-name 'myTable' \
    --default-time-to-live '2592000'

2. To confirm the updated default TTL value, you can use the following statement.

aws keyspaces get-table --keyspace-name 'myKeyspace' --table-name 'myTable'

The output of the statement should look like the following example.

{
    "keyspaceName": "myKeyspace",
    "tableName": "myTable",
    "resourceArn": "arn:aws:cassandra:us-east-1:123SAMPLE012:/keyspace/myKeyspace/table/myTable",
    "creationTimestamp": "2024-09-02T10:52:22.190000+00:00",
    "status": "ACTIVE",
    "schemaDefinition": {
        "allColumns": [
            {
                "name": "id",
                "type": "int"
            },
            {
                "name": "date",
                "type": "timestamp"
            },
            {
                "name": "name",
                "type": "text"
            }
        ],
        "partitionKeys": [
            {
                "name": "id"
            }
        ],
        "clusteringKeys": [],
        "staticColumns": []
    },
    "capacitySpecification": {
        "throughputMode": "PAY_PER_REQUEST",
        "lastUpdateToPayPerRequestTimestamp": "2024-09-02T10:52:22.190000+00:00"
    },
    "encryptionSpecification": {
        "type": "AWS_OWNED_KMS_KEY"
    },
    "pointInTimeRecovery": {
        "status": "DISABLED"
    },
    "ttl": {
        "status": "ENABLED"
    },
    "defaultTimeToLive": 2592000,
    "comment": {
        "message": ""
    },
    "replicaSpecifications": []
}

Create table with custom Time to Live (TTL) settings enabled

To create a new table with Time to Live custom settings that can be applied to rows and columns without enabling TTL default settings for the entire table, you can use the following commands.

Note
If a table is created with ttl custom settings enabled, you can't disable the setting later.

Cassandra Query Language (CQL)

Create a new table with custom TTL settings using CQL

• CREATE TABLE my_keyspace.my_table (id int primary key) WITH CUSTOM_PROPERTIES={'ttl':{'status': 'enabled'}};

CLI

Create a new table with custom TTL settings using the AWS CLI

1. You can use the following command to create a new table with TTL enabled.
aws keyspaces create-table --keyspace-name 'myKeyspace' --table-name 'myTable' \
    --schema-definition 'allColumns=[{name=id,type=int},{name=name,type=text},{name=date,type=timestamp}],partitionKeys=[{name=id}]' \
    --ttl 'status=ENABLED'

2. To confirm that TTL is enabled for the table, you can use the following statement.

aws keyspaces get-table --keyspace-name 'myKeyspace' --table-name 'myTable'

The output of the statement should look like the following example.

{
    "keyspaceName": "myKeyspace",
    "tableName": "myTable",
    "resourceArn": "arn:aws:cassandra:us-east-1:123SAMPLE012:/keyspace/myKeyspace/table/myTable",
    "creationTimestamp": "2024-09-02T10:52:22.190000+00:00",
    "status": "ACTIVE",
    "schemaDefinition": {
        "allColumns": [
            {
                "name": "id",
                "type": "int"
            },
            {
                "name": "date",
                "type": "timestamp"
            },
            {
                "name": "name",
                "type": "text"
            }
        ],
        "partitionKeys": [
            {
                "name": "id"
            }
        ],
        "clusteringKeys": [],
        "staticColumns": []
    },
    "capacitySpecification": {
        "throughputMode": "PAY_PER_REQUEST",
        "lastUpdateToPayPerRequestTimestamp": "2024-09-02T11:18:55.796000+00:00"
    },
    "encryptionSpecification": {
        "type": "AWS_OWNED_KMS_KEY"
    },
    "pointInTimeRecovery": {
        "status": "DISABLED"
    },
    "ttl": {
        "status": "ENABLED"
    },
    "defaultTimeToLive": 0,
    "comment": {
        "message": ""
    },
    "replicaSpecifications": []
}

Update table with custom Time to Live (TTL)

To enable Time to Live custom settings for a table so that TTL values can be applied to individual rows and columns without setting a TTL default value for the entire table, you can use the following commands.

Note
After ttl is enabled, you can't disable it for the table.

Cassandra Query Language (CQL)

Enable custom TTL settings for a table using CQL

• ALTER TABLE my_table WITH CUSTOM_PROPERTIES={'ttl':{'status': 'enabled'}};

CLI

Enable custom TTL settings for a table using the AWS CLI

1. You can use the following command to update the custom TTL setting of a table.

aws keyspaces update-table --keyspace-name 'myKeyspace' --table-name 'myTable' \
    --ttl 'status=ENABLED'

2. To confirm that TTL is now enabled for the table, you can use the following statement.

aws keyspaces get-table --keyspace-name 'myKeyspace' --table-name 'myTable'

The output of the statement should look like the following example.

{
    "keyspaceName": "myKeyspace",
    "tableName": "myTable",
    "resourceArn": "arn:aws:cassandra:us-east-1:123SAMPLE012:/keyspace/myKeyspace/table/myTable",
    "creationTimestamp": "2024-09-02T11:32:27.349000+00:00",
    "status": "ACTIVE",
    "schemaDefinition": {
        "allColumns": [
            {
                "name": "id",
                "type": "int"
            },
            {
                "name": "date",
                "type": "timestamp"
            },
            {
                "name": "name",
                "type": "text"
            }
        ],
        "partitionKeys": [
            {
                "name": "id"
            }
        ],
        "clusteringKeys": [],
        "staticColumns": []
    },
    "capacitySpecification": {
        "throughputMode": "PAY_PER_REQUEST",
        "lastUpdateToPayPerRequestTimestamp": "2024-09-02T11:32:27.349000+00:00"
    },
    "encryptionSpecification": {
        "type": "AWS_OWNED_KMS_KEY"
    },
    "pointInTimeRecovery": {
        "status": "DISABLED"
    },
    "ttl": {
        "status": "ENABLED"
    },
    "defaultTimeToLive": 0,
    "comment": {
        "message": ""
    },
    "replicaSpecifications": []
}

Use the INSERT statement to set custom Time to Live (TTL) values for new rows

Note
Before you can set custom TTL values for rows using the INSERT statement, you must first enable custom TTL on the table. For more information, see the section called "Update table custom TTL".

To overwrite a table's default TTL value by setting expiration dates for individual rows, you can use the INSERT statement:

• INSERT – Insert a new row of data with a TTL value set.

Setting TTL values for new rows using the INSERT statement takes precedence over the default TTL setting of the table. The following CQL statement inserts a row of data into the table and changes the default TTL setting to 259,200 seconds (which is equivalent to 3 days).

INSERT INTO my_table (userid, time, subject, body, user)
VALUES (B79CB3BA-745E-5D9A-8903-4A02327A7E09, 96a29100-5e25-11ec-90d7-b5d91eceda0a, 'Message', 'Hello', '205.212.123.123')
USING TTL 259200;

To confirm the TTL settings for the inserted row, use the following statement.

SELECT TTL(subject) FROM my_table;
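The TTL() function returns the number of seconds remaining until the value expires, so it counts down on every read. Shortly after the insert above, you would expect output along the following lines; the exact number depends on how much time has elapsed since the write.

 ttl(subject)
--------------
       259143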
Use the UPDATE statement to edit custom Time to Live (TTL) settings for rows and columns

Note
Before you can set custom TTL values for rows and columns, you must enable TTL on the table first. For more information, see the section called "Update table custom TTL".

You can use the UPDATE statement to overwrite a table's default TTL value by setting the expiration date for individual rows and columns:

• Rows – You can update an existing row of data with a custom TTL value.
• Columns – You can update a subset of columns within existing rows with a custom TTL value.

Setting TTL values for rows and columns takes precedence over the default TTL setting for the table.

To change the TTL settings of the 'subject' column inserted earlier from 259,200 seconds (3 days) to 86,400 seconds (one day), use the following statement.

UPDATE my_table USING TTL 86400
SET subject = 'Updated Message'
WHERE userid = B79CB3BA-745E-5D9A-8903-4A02327A7E09
AND time = 96a29100-5e25-11ec-90d7-b5d91eceda0a;

You can run a simple select query to see the updated record before the expiration time.

SELECT * from my_table;

The query shows the following output.

 userid                               | time                                 | body  | subject         | user
--------------------------------------+--------------------------------------+-------+-----------------+-----------------
 b79cb3ba-745e-5d9a-8903-4a02327a7e09 | 96a29100-5e25-11ec-90d7-b5d91eceda0a | Hello | Updated Message | 205.212.123.123
 50554d6e-29bb-11e5-b345-feff819cdc9f | cf03fb21-59b5-11ec-b371-dff626ab9620 | Hello | Message         | 205.212.123.123

To confirm that the expiration was successful, run the same query again after the configured expiration time.

SELECT * from my_table;

The query shows the following output after the 'subject' column has expired.

 userid                               | time                                 | body  | subject | user
--------------------------------------+--------------------------------------+-------+---------+-----------------
 b79cb3ba-745e-5d9a-8903-4a02327a7e09 | 96a29100-5e25-11ec-90d7-b5d91eceda0a | Hello | null    | 205.212.123.123
 50554d6e-29bb-11e5-b345-feff819cdc9f | cf03fb21-59b5-11ec-b371-dff626ab9620 | Hello | Message | 205.212.123.123
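If you later decide that a value should not expire, one option is to write it again without an expiration. The following sketch assumes that Amazon Keyspaces follows Apache Cassandra semantics here, where USING TTL 0 writes the value with no TTL; verify this behavior in your environment before relying on it.

-- Rewrite the column with TTL 0 (no expiration, assuming Cassandra semantics)
UPDATE my_table USING TTL 0
SET subject = 'Updated Message'
WHERE userid = B79CB3BA-745E-5D9A-8903-4A02327A7E09
AND time = 96a29100-5e25-11ec-90d7-b5d91eceda0a;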
Using this service with an AWS SDK

AWS software development kits (SDKs) are available for many popular programming languages. Each SDK provides an API, code examples, and documentation that make it easier for developers to build applications in their preferred language.

SDK documentation           | Code examples
AWS SDK for C++             | AWS SDK for C++ code examples
AWS CLI                     | AWS CLI code examples
AWS SDK for Go              | AWS SDK for Go code examples
AWS SDK for Java            | AWS SDK for Java code examples
AWS SDK for JavaScript      | AWS SDK for JavaScript code examples
AWS SDK for Kotlin          | AWS SDK for Kotlin code examples
AWS SDK for .NET            | AWS SDK for .NET code examples
AWS SDK for PHP             | AWS SDK for PHP code examples
AWS Tools for PowerShell    | Tools for PowerShell code examples
AWS SDK for Python (Boto3)  | AWS SDK for Python (Boto3) code examples
AWS SDK for Ruby            | AWS SDK for Ruby code examples
AWS SDK for Rust            | AWS SDK for Rust code examples
AWS SDK for SAP ABAP        | AWS SDK for SAP ABAP code examples
AWS SDK for Swift           | AWS SDK for Swift code examples

Example availability
Can't find what you need? Request a code example by using the Provide feedback link at the bottom of this page.

Working with tags and labels for Amazon Keyspaces resources

You can label Amazon Keyspaces (for Apache Cassandra) resources using tags. Tags let you categorize your resources in different ways, for example, by purpose, owner, environment, or other criteria. Tags can help you do the following:

• Quickly identify a resource based on the tags that you assigned to it.
• See AWS bills broken down by tags.
• Control access to Amazon Keyspaces resources based on tags. For IAM policy examples using tags, see the section called "Authorization based on Amazon Keyspaces tags".

Tagging is supported by AWS services like Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), Amazon Keyspaces, and more. Efficient tagging can provide cost insights by enabling you to create reports across services that carry a specific tag.

To get started with tagging, do the following:

1. Understand Restrictions for using tags to label resources in Amazon Keyspaces.
2. Create tags by using Tag keyspaces and tables in Amazon Keyspaces.
3. Use Create cost allocation reports using tags for Amazon Keyspaces to track your AWS costs per active tag.

Finally, it is good practice to follow optimal tagging strategies. For information, see AWS tagging strategies.

Restrictions for using tags to label resources in Amazon Keyspaces

Each tag consists of a key and a value, both of which you define. The following restrictions apply:

• Each Amazon Keyspaces keyspace or table can have only one tag with the same key. If you try to add an existing tag (same key), the existing tag value is updated to the new value.
• Tags applied to a keyspace do not automatically apply to tables within that keyspace. To apply the same tag to a keyspace and all its tables, each resource must be individually tagged.
• When you create a multi-Region keyspace or table, any tags that you define during the creation process are automatically applied to all keyspaces and tables in all Regions. When you change existing tags using ALTER KEYSPACE or ALTER TABLE, the update is only applied to the keyspace or table in the Region where you're making the change.
• A value acts as a descriptor within a tag category (key). In Amazon Keyspaces the value cannot be empty or null.
• Tag keys and values are case sensitive.
• The maximum key length is 128 Unicode characters.
• The maximum value length is 256 Unicode characters.
• The allowed characters are letters, white space, and numbers, plus the following special characters: + - = . _ : / • The maximum number of tags per resource is 50. • AWS-assigned tag names and values are automatically assigned the aws: prefix, which you can't assign. AWS-assigned tag names don't count toward the tag limit of 50. User-assigned tag names have the prefix user: in the cost allocation report. • You can't backdate the application of a tag. Tag keyspaces and tables in Amazon Keyspaces You can add, list, edit, or delete tags for keyspaces and tables using the Amazon Keyspaces (for Apache Cassandra) console, the AWS CLI, or Cassandra Query Language (CQL). You can then activate these user-defined tags so that they appear on the AWS Billing and Cost Management console for cost allocation tracking. For more information, see Create cost allocation reports using tags for Amazon Keyspaces. For bulk editing, you can also use Tag Editor on the console. For more information, see Working with Tag Editor in the AWS Resource Groups User Guide. Tagging restrictions 408 Amazon Keyspaces (for Apache Cassandra) Developer Guide For information about tag structure, see Restrictions for using tags to label resources in Amazon Keyspaces. Topics • Add tags when creating a new keyspace • Add tags |
to a keyspace • Delete tags from a keyspace • View the tags of a keyspace • Add tags when creating a new table • Add tags to a table • Delete tags from a table • View the tags of a table

Add tags when creating a new keyspace

You can use the Amazon Keyspaces console, CQL, or the AWS CLI to add tags when you create a new keyspace.

Console
Set a tag when creating a new keyspace using the console
1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Keyspaces, and then choose Create keyspace.
3. On the Create keyspace page, provide a name for the keyspace.
4. Under Tags choose Add new tag and enter a key and a value.
5. Choose Create keyspace.

Cassandra Query Language (CQL)
Set a tag when creating a new keyspace using CQL
• The following example creates a new keyspace with tags.

CREATE KEYSPACE mykeyspace WITH TAGS = {'key1':'val1', 'key2':'val2'};

CLI
Set a tag when creating a new keyspace using the AWS CLI
• The following statement creates a new keyspace with tags.

aws keyspaces create-keyspace --keyspace-name 'myKeyspace' --tags 'key=key1,value=val1' 'key=key2,value=val2'

Add tags to a keyspace

The following examples show how to add tags to a keyspace in Amazon Keyspaces.

Console
Add a tag to an existing keyspace using the console
1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Keyspaces.
3. Choose a keyspace from the list. Then choose the Tags tab where you can view the tags of the keyspace.
4. Choose Manage tags to add, edit, or delete tags.
5. Choose Save changes.

Cassandra Query Language (CQL)
Add a tag to an existing keyspace using CQL
• ALTER KEYSPACE mykeyspace ADD TAGS {'key1':'val1', 'key2':'val2'};

CLI
Add a tag to an existing keyspace using the AWS CLI
• The following example shows how to add new tags to an existing keyspace.

aws keyspaces tag-resource --resource-arn 'arn:aws:cassandra:us-east-1:111222333444:/keyspace/myKeyspace/' --tags 'key=key3,value=val3' 'key=key4,value=val4'

Delete tags from a keyspace

Console
Delete a tag from an existing keyspace using the console
1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Keyspaces.
3. Choose a keyspace from the list. Then choose the Tags tab where you can view the tags of the keyspace.
4. Choose Manage tags and delete the tags you don't need anymore.
5. Choose Save changes.
Cassandra Query Language (CQL)
Delete a tag from an existing keyspace using CQL
• ALTER KEYSPACE mykeyspace DROP TAGS {'key1':'val1', 'key2':'val2'};

CLI
Delete a tag from an existing keyspace using the AWS CLI
• The following statement removes the specified tags from a keyspace.

aws keyspaces untag-resource --resource-arn 'arn:aws:cassandra:us-east-1:111222333444:/keyspace/myKeyspace/' --tags 'key=key3,value=val3' 'key=key4,value=val4'

View the tags of a keyspace

The following examples show how to read tags using the console, CQL, or the AWS CLI.

Console
View the tags of a keyspace using the Amazon Keyspaces console
1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Keyspaces.
3. Choose a keyspace from the list. Then choose the Tags tab where you can view the tags of the keyspace.

Cassandra Query Language (CQL)
View the tags of a keyspace using CQL
To read the tags attached to a keyspace, use the following CQL statement.

SELECT * FROM system_schema_mcs.tags WHERE valid_where_clause;

The WHERE clause is required, and must use one of the following formats:
• keyspace_name = 'mykeyspace' AND resource_type = 'keyspace'
• resource_id = arn

• The following statement shows whether a keyspace has tags.

SELECT * FROM system_schema_mcs.tags WHERE keyspace_name = 'mykeyspace' AND resource_type = 'keyspace';

The output of the query looks like the following.

 resource_id | keyspace_name | resource_name | resource_type | tags
-----------------------------------------------------------------
+---------------+---------------+---------------+------
 arn:aws:cassandra:us-east-1:123456789:/keyspace/mykeyspace/ | mykeyspace | mykeyspace | keyspace | {'key1': 'val1', 'key2': 'val2'}

CLI
View the tags of a keyspace using the AWS CLI
• This example shows how to list the tags of the specified resource.

aws keyspaces list-tags-for-resource --resource-arn 'arn:aws:cassandra:us-east-1:111222333444:/keyspace/myKeyspace/'

The output of the last command looks like this.

{
    "tags": [
        { "key": "key1", "value": "val1" },
        { "key": "key2", "value": "val2" },
        { "key": "key3", "value": "val3" },
        { "key": "key4", "value": "val4" }
    ]
}

Add tags when creating a new table

You can use the Amazon Keyspaces console, CQL, or the AWS CLI to add tags to new keyspaces and tables when you create them.

Console
Add a tag when creating a new table using the console
1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Tables, and then choose Create table.
3. On the Create table page in the Table details section, select a keyspace and provide a name for the table.
4. In the Schema section, create the schema for your table.
5. In the Table settings section, choose Customize settings.
6. Continue to the Table tags – optional section, and choose Add new tag to create new tags.
7. Choose Create table.

Cassandra Query Language (CQL)
Add tags when creating a new table using CQL
• The following example creates a new table with tags.

CREATE TABLE mytable(...) WITH TAGS = {'key1':'val1', 'key2':'val2'};

CLI
Add tags when creating a new table using the AWS CLI
• The following example shows how to create a new table with tags. The command creates a table myTable in an already existing keyspace myKeyspace. Note that the command has been broken up into different lines to help with readability.

aws keyspaces create-table --keyspace-name 'myKeyspace' --table-name 'myTable'
    --schema-definition 'allColumns=[{name=id,type=int},{name=name,type=text},{name=date,type=timestamp}],partitionKeys=[{name=id}]'
    --tags 'key=key1,value=val1' 'key=key2,value=val2'

Add tags to a table

You can add tags to an existing table in Amazon Keyspaces using the console, CQL, or the AWS CLI.

Console
Add tags to a table using the Amazon Keyspaces console
1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Tables.
3. Choose a table from the list and choose the Tags tab.
4.
Choose Manage tags to add tags to the table.
5. Choose Save changes.

Cassandra Query Language (CQL)
Add tags to a table using CQL
• The following statement shows how to add tags to an existing table.

ALTER TABLE mykeyspace.mytable ADD TAGS {'key1':'val1', 'key2':'val2'};

CLI
Add tags to a table using the AWS CLI
• The following example shows how to add new tags to an existing table.

aws keyspaces tag-resource --resource-arn 'arn:aws:cassandra:us-east-1:111222333444:/keyspace/myKeyspace/table/myTable' --tags 'key=key3,value=val3' 'key=key4,value=val4'

Delete tags from a table

Console
Delete tags from a table using the Amazon Keyspaces console
1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Tables.
3. Choose a table from the list and choose the Tags tab.
4. Choose Manage tags to delete tags from the table.
5. Choose Save changes.

Cassandra Query Language (CQL)
Delete tags from a table using CQL
• The following statement shows how to delete tags from an existing table.

ALTER TABLE mytable DROP TAGS {'key3':'val3', 'key4':'val4'};

CLI
Delete tags from a table using the AWS CLI
• The following statement removes the specified tags from a table.

aws keyspaces untag-resource --resource-arn 'arn:aws:cassandra:us-east-1:111222333444:/keyspace/myKeyspace/table/myTable' --tags 'key=key3,value=val3' 'key=key4,value=val4'

View the tags of a table

The following examples show how to view the tags of a table in Amazon Keyspaces using the console, CQL, or the AWS CLI.

Console
View the tags of a table using the console
1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at https://console.aws.amazon.com/keyspaces/home.
2. In the navigation pane, choose Tables.
3. Choose a table from the
list and choose the Tags tab.

Cassandra Query Language (CQL)
View the tags of a table using CQL
To read the tags attached to a table, use the following CQL statement.

SELECT * FROM system_schema_mcs.tags WHERE valid_where_clause;

The WHERE clause is required, and must use one of the following formats:
• keyspace_name = 'mykeyspace' AND resource_name = 'mytable'
• resource_id = arn

• The following query returns the tags of the specified table.

SELECT * FROM system_schema_mcs.tags WHERE keyspace_name = 'mykeyspace' AND resource_name = 'mytable';

The output of that query looks like the following.

 resource_id | keyspace_name | resource_name | resource_type | tags
----------------------------------------------------------------------------+---------------+---------------+---------------+------
 arn:aws:cassandra:us-east-1:123456789:/keyspace/mykeyspace/table/mytable | mykeyspace | mytable | table | {'key1': 'val1', 'key2': 'val2'}

CLI
View the tags of a table using the AWS CLI
• This example shows how to list the tags of the specified resource.

aws keyspaces list-tags-for-resource --resource-arn 'arn:aws:cassandra:us-east-1:111222333444:/keyspace/myKeyspace/table/myTable'

The output of the last command looks like this.

{
    "tags": [
        { "key": "key1", "value": "val1" },
        { "key": "key2", "value": "val2" },
        { "key": "key3", "value": "val3" },
        { "key": "key4", "value": "val4" }
    ]
}

Create cost allocation reports using tags for Amazon Keyspaces

AWS uses tags to organize resource costs on your cost allocation report. AWS provides two types of cost allocation tags:
• An AWS-generated tag. AWS defines, creates, and applies this tag for you.
• User-defined tags. You define, create, and apply these tags.

You must activate both types of tags separately before they can appear in Cost Explorer or on a cost allocation report.

To activate AWS-generated tags:
1. Sign in to the AWS Management Console and open the Billing and Cost Management console at https://console.aws.amazon.com/billing/home#/.
2. In the navigation pane, choose Cost Allocation Tags.
3. Under AWS-Generated Cost Allocation Tags, choose Activate.

To activate user-defined tags:
1. Sign in to the AWS Management Console and open the Billing and Cost Management console at https://console.aws.amazon.com/billing/home#/.
2. In the navigation pane, choose Cost Allocation Tags.
3. Under User-Defined Cost Allocation Tags, choose Activate.

After you create and activate tags, AWS generates a cost allocation report with your usage and costs grouped by your active tags.
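If you manage tags from scripts, user-defined cost allocation tags can also be activated programmatically through the Cost Explorer API instead of the Billing console. The following AWS CLI sketch reuses the example tag key key1 from the previous sections; the command belongs to the Cost Explorer (ce) API, not the Amazon Keyspaces API, and your credentials need Cost Explorer permissions for it to succeed.

aws ce update-cost-allocation-tags-status --cost-allocation-tags-status TagKey=key1,Status=Active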
The cost allocation report includes all of your AWS costs for each billing period. The report includes both tagged and untagged resources, so that you can clearly organize the charges for resources.

Note
Currently, any data transferred out from Amazon Keyspaces won't be broken down by tags on cost allocation reports.

For more information, see Using cost allocation tags.

Create Amazon Keyspaces resources with AWS CloudFormation

Amazon Keyspaces is integrated with AWS CloudFormation, a service that helps you model and set up your AWS keyspaces and tables so that you can spend less time creating and managing your resources and infrastructure. You create a template that describes the keyspaces and tables that you want, and AWS CloudFormation takes care of provisioning and configuring those resources for you.

When you use AWS CloudFormation, you can reuse your template to set up your Amazon Keyspaces resources consistently and repeatedly. Just describe your resources once, and then provision the same resources over and over in multiple AWS accounts and Regions.
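For example, a minimal YAML template that declares one keyspace with a single table could look like the following sketch. The keyspace, table, and column names are placeholders, and only a small subset of the supported table properties is shown; see the Cassandra resource type reference cited below for the full property list.

Resources:
  MyKeyspace:
    Type: AWS::Cassandra::Keyspace
    Properties:
      KeyspaceName: mykeyspace
  MyTable:
    Type: AWS::Cassandra::Table
    Properties:
      # Ref on an AWS::Cassandra::Keyspace resource returns the keyspace name.
      KeyspaceName: !Ref MyKeyspace
      TableName: mytable
      PartitionKeyColumns:
        - ColumnName: id
          ColumnType: int
      RegularColumns:
        - ColumnName: name
          ColumnType: text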
Amazon Keyspaces and AWS CloudFormation templates

To provision and configure resources for Amazon Keyspaces, you must understand AWS CloudFormation templates. Templates are formatted text files in JSON or YAML. These templates describe the resources that you want to provision in your AWS CloudFormation stacks. If you're unfamiliar with JSON or YAML, you can use AWS CloudFormation Designer to help you get started with AWS CloudFormation templates. For more information, see What is AWS CloudFormation designer? in the AWS CloudFormation User Guide.

Amazon Keyspaces supports creating keyspaces and tables in AWS CloudFormation. For the tables you create using AWS CloudFormation templates, you can specify the schema, read/write mode, provisioned throughput settings, and other supported features. For more information, including examples of JSON and YAML templates for keyspaces and tables, see Cassandra resource type reference in the AWS CloudFormation User Guide.

Learn more about AWS CloudFormation

To learn more about AWS CloudFormation, see the following resources:
• AWS CloudFormation
• AWS CloudFormation User Guide
• AWS CloudFormation command line interface User Guide

Using NoSQL Workbench with Amazon Keyspaces (for Apache Cassandra)

NoSQL Workbench is a client-side application that helps you design and visualize nonrelational data models for Amazon Keyspaces more easily. NoSQL Workbench clients are available for Windows, macOS, and Linux.

Designing data models and creating resources automatically

NoSQL Workbench provides a point-and-click interface to design and create Amazon Keyspaces data models. You can easily create new data models from scratch by defining keyspaces, tables, and columns. You can also import existing data models and make modifications (such as adding, editing, or removing columns) to adapt the data models for new applications. NoSQL Workbench then enables you to commit the data models to Amazon Keyspaces or Apache Cassandra, and create the keyspaces and tables automatically. To learn how to build data models, see the section called “Create a data model” and the section called “Edit a data model”.

Visualizing data models

Using NoSQL Workbench, you can visualize your data models to help ensure that the data models can support your application’s queries and access patterns. You can also save and export your data models in a variety of formats for collaboration, documentation, and presentations. For more information, see the section called “Visualize a data model”.

Topics
• Download NoSQL Workbench
• Getting started with NoSQL Workbench
• Visualize data models with NoSQL Workbench
• Create a new data model with NoSQL Workbench
• Edit existing data models with NoSQL Workbench
• How to commit data models to Amazon Keyspaces and Apache Cassandra
• Sample data models in NoSQL Workbench
• Release history for NoSQL Workbench

Download NoSQL Workbench

Follow these instructions to download and install NoSQL Workbench.

To download and install NoSQL Workbench
1. Use one of the following links to download NoSQL Workbench for free.

Operating System | Download Link
macOS | Download for macOS
Linux* | Download for Linux
Windows | Download for Windows

* NoSQL Workbench supports Ubuntu 12.04, Fedora 21, and Debian 8 or any newer versions of these Linux distributions.

2. After the download completes, start the application and follow the onscreen instructions to complete the installation.
Getting started with NoSQL Workbench To get started with NoSQL Workbench, on the Database Catalog page in NoSQL Workbench, choose Amazon Keyspaces, and then choose Launch. Getting started 422 Amazon Keyspaces (for Apache Cassandra) Developer Guide This opens the NoSQL Workbench home page for Amazon Keyspaces where you have the following options to get started: 1. Create a new data model. 2. Import an existing data model in JSON format. 3. Open a recently edited data model. 4. Open one of the available sample models. Each of the options opens the NoSQL Workbench data modeler. To continue creating a new data model, see the section called “Create a data model”. To edit an existing data model, see the section called “Edit a data model”. Getting started 423 Amazon Keyspaces (for Apache Cassandra) Developer Guide Visualize data models with NoSQL Workbench Using NoSQL Workbench, you can visualize your data models to help ensure that the data models can support your application’s queries and access patterns. You also can save and export your data models in a variety of formats for collaboration, documentation, and presentations. After you have created a new data model or edited an existing data model, you can visualize the model. Visualizing data models with NoSQL Workbench When you have completed the data model in the data modeler, choose Visualize data model. This takes you to the data visualizer in NoSQL Workbench. The data visualizer provides a visual representation of the table's schema and lets you add sample data. To add sample data to a table, choose a table from the model, and then choose Edit. To add a new row of data, choose Add new row at the bottom of the screen. Choose Save when you're done. Visualize a data model 424 Amazon |
Keyspaces (for Apache Cassandra) Developer Guide

Aggregate view

After you have confirmed the table's schema, you can aggregate data model visualizations.

After you have aggregated the view of the data model, you can export the view to a PNG file. To export the data model to a JSON file, choose the upload sign under the data model name.

Note
You can export the data model in JSON format at any time in the design process.

You have the following options to commit the changes:
• Commit to Amazon Keyspaces
• Commit to an Apache Cassandra cluster

To learn more about how to commit changes, see the section called “Commit a data model”.

Create a new data model with NoSQL Workbench

You can use the NoSQL Workbench data modeler to design new data models based on your application's data access patterns. To create a new data model for Amazon Keyspaces, you can use the NoSQL Workbench data modeler to create keyspaces, tables, and columns. Follow these steps to create a new data model.

1. To create a new keyspace, choose the plus sign under Keyspace. In this step, choose the following properties and settings.
• Keyspace name – Enter the name of the new keyspace.
• Replication strategy – Choose the replication strategy for the keyspace. Amazon Keyspaces uses the SingleRegionStrategy to replicate data three times automatically in multiple AWS Availability Zones. If you're planning to commit the data model to an Apache Cassandra cluster, you can choose SimpleStrategy or NetworkTopologyStrategy.
• Keyspaces tags – Resource tags are optional and let you categorize your resources in different ways—for example, by purpose, owner, environment, or other criteria. To learn more about tags for Amazon Keyspaces resources, see the section called “Working with tags”.
2. Choose Add keyspace definition to create the keyspace.
3. To create a new table, choose the plus sign next to Tables. In this step, you define the following properties and settings.
• Table name – The name of the new table.
• Columns – Add a column name and choose the data type. Repeat these steps for every column in your schema.
• Partition key – Choose columns for the partition key.
• Clustering columns – Choose clustering columns (optional).
• Capacity mode – Choose the read/write capacity mode for the table. You can choose provisioned or on-demand capacity. To learn more about capacity modes, see the section called “Configure read/write capacity modes”.
• Table tags – Resource tags are optional and let you categorize your resources in different ways—for example, by purpose, owner, environment, or other criteria.
To learn more about tags for Amazon Keyspaces resources, see the section called “Working with tags”. 4. Choose Add table definition to create the new table. 5. Repeat these steps to create additional tables. 6. Continue to the section called “Visualizing a Data Model” to visualize the data model that you created. Edit existing data models with NoSQL Workbench You can use the data modeler to import and modify existing data models created using NoSQL Workbench. The data modeler also includes a few sample data models to help you get started with Edit a data model 429 Amazon Keyspaces (for Apache Cassandra) Developer Guide data modeling. The data models you can edit with NoSQL Workbench can be data models that are imported from a file, the provided sample data models, or data models that you created previously. 1. To edit a keyspace, choose the edit symbol under Keyspace. In this step, you can edit the following properties and settings. • Keyspace name – Enter the name of the new keyspace. • Replication strategy – Choose the replication strategy for the keyspace. Amazon Keyspaces uses the SingleRegionStrategy to replicate data three times automatically in multiple AWS Availability Zones. If you're planning to commit the data model to an Apache Cassandra cluster, you can choose SimpleStrategy or NetworkTopologyStrategy. • Keyspaces tags – Resource tags |
are optional and let you categorize your resources in different ways—for example, by purpose, owner, environment, or other criteria. To learn more about tags for Amazon Keyspaces resources, see the section called “Working with tags”.
2. Choose Save edits to update the keyspace.
3. To edit a table, choose Edit next to the table name. In this step, you can update the following properties and settings.
• Table name – The name of the new table.
• Columns – Add a column name and choose the data type. Repeat these steps for every column in your schema.
• Partition key – Choose columns for the partition key.
• Clustering columns – Choose clustering columns (optional).
• Capacity mode – Choose the read/write capacity mode for the table. You can choose provisioned or on-demand capacity. To learn more about capacity modes, see the section called “Configure read/write capacity modes”.
• Table tags – Resource tags are optional and let you categorize your resources in different ways—for example, by purpose, owner, environment, or other criteria. To learn more about tags for Amazon Keyspaces resources, see the section called “Working with tags”.
4. Choose Save edits to update the table.
5. Continue to the section called “Visualizing a Data Model” to visualize the data model that you updated.

How to commit data models to Amazon Keyspaces and Apache Cassandra

This section shows you how to commit completed data models to Amazon Keyspaces and Apache Cassandra clusters. This process automatically creates the server-side resources for keyspaces and tables based on the settings that you defined in the data model.

Topics
• Before you begin
• Connect to Amazon Keyspaces with service-specific credentials
• Connect to Amazon Keyspaces with AWS Identity and Access Management (IAM) credentials
• Use a saved connection
• Commit to Apache Cassandra

Before you begin

Amazon Keyspaces requires the use of Transport Layer Security (TLS) to help secure connections with clients. To connect to Amazon Keyspaces using TLS, you need to complete the following task before you can start.

• Download the Starfield digital certificate using the following command and save sf-class2-root.crt locally or in your home directory.

curl https://certs.secureserver.net/repository/sf-class2-root.crt -O

Note
You can also use the Amazon digital certificate to connect to Amazon Keyspaces and can continue to do so if your client is connecting to Amazon Keyspaces successfully. The Starfield certificate provides additional backwards compatibility for clients using older certificate authorities.
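NoSQL Workbench only needs the path to this certificate when you configure a connection. If you plan to reach the same endpoint with other clients as well, they have to trust the certificate too. For example, a cqlsh configuration file (~/.cassandra/cqlshrc) that references the downloaded certificate could look like the following sketch; the certfile path is a placeholder for wherever you saved the file.

[connection]
port = 9142
factory = cqlshlib.ssl.ssl_transport_factory

[ssl]
validate = true
certfile = /path/to/sf-class2-root.crt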
After you have saved the certificate file, you can connect to Amazon Keyspaces. One option is to connect by using service-specific credentials. Service-specific credentials are a user name and password that are associated with a specific IAM user and can only be used with the specified service. The second option is to connect with IAM credentials that are using the AWS Signature Version 4 process (SigV4). To learn more about these two options, see the section called “Create programmatic access credentials”.

To connect with service-specific credentials, see the section called “Connect with service-specific credentials”. To connect with IAM credentials, see the section called “Connect with IAM credentials”.

Connect to Amazon Keyspaces with service-specific credentials

This section shows how to use service-specific credentials to commit the data model you created or edited with NoSQL Workbench.

1. To create a new connection using service-specific credentials, choose the Connect by using user name and password tab.
• Before you begin, you must create service-specific credentials using the process documented at the section called “Create service-specific credentials”.
After you have obtained the service-specific credentials, you can continue to set up the connection. Continue with one of the following:
• User name – Enter the user name.
• Password – Enter the password.
• AWS Region – For available Regions, see the section called “Service endpoints”.
• Port – Amazon Keyspaces uses port 9142.

Alternatively, you can import saved credentials from a file.

2. Choose Commit to update Amazon Keyspaces with the data model.

Connect to Amazon Keyspaces with AWS Identity and Access Management (IAM) credentials

This section shows how to use IAM credentials to commit the data model created or edited with NoSQL Workbench.

1. To create a new connection using IAM credentials, choose the Connect by using IAM credentials tab.
• Before you begin, you must create IAM credentials using one of the following methods.
• For console access, use your IAM user name and password to sign in to the AWS Management Console from the IAM sign-in page. For information about AWS security credentials, including programmatic access and alternatives to long-term credentials, see AWS security credentials in the IAM User Guide. For details about signing in to your AWS account, see How to sign in to AWS in the AWS Sign-In User Guide.
• For CLI access, you need an access key ID and a secret access key. Use temporary credentials instead of long-term access keys when possible. Temporary credentials include an access key ID, a secret access key, and a security token that indicates when the credentials expire. For more information, see Using temporary credentials with AWS resources in the IAM User Guide.
• For API access, you need an access key ID and secret access key. Use IAM user access keys instead of AWS account root user access keys. For more information about creating access keys, see Manage access keys for IAM users in the IAM User Guide.

After you have obtained the IAM credentials, you can continue to set up the connection.
• Connection name – The name of the connection.
• AWS Region – For available Regions, see the section called “Service endpoints”.
• Access key ID – Enter the access key ID.
• Secret access key – Enter the secret access key.
• Port – Amazon Keyspaces uses port 9142.
• AWS public certificate – Point to the AWS certificate that was downloaded in the first step.
• Persist connection – Select this check box if you want to save the AWS connection secrets locally.

2. Choose Commit to update Amazon Keyspaces with the data model.

Use a saved connection

If you have previously set up a connection to Amazon Keyspaces, you can use that as the default connection to commit data model changes. Choose the Use saved connections tab and continue to commit the updates.
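The connection that NoSQL Workbench makes with IAM credentials uses the SigV4 process described above, and you can make the same kind of connection from your own code. The following Python sketch assumes the open source cassandra-sigv4 plugin for the Cassandra Python driver and boto3 are installed, that the certificate downloaded earlier is in the current directory, and that you connect to the us-east-1 endpoint.

# pip install cassandra-sigv4 boto3 (assumed to be installed)
import ssl

import boto3
from cassandra.cluster import Cluster
from cassandra_sigv4.auth import SigV4AuthProvider

# Amazon Keyspaces requires TLS; trust the Starfield root certificate.
ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ssl_context.load_verify_locations("sf-class2-root.crt")
ssl_context.verify_mode = ssl.CERT_REQUIRED

# SigV4 credentials are resolved through the standard boto3 credential chain.
auth_provider = SigV4AuthProvider(boto3.Session(region_name="us-east-1"))

cluster = Cluster(
    ["cassandra.us-east-1.amazonaws.com"],
    port=9142,
    ssl_context=ssl_context,
    auth_provider=auth_provider,
)
session = cluster.connect()
for row in session.execute("SELECT keyspace_name FROM system_schema.keyspaces"):
    print(row.keyspace_name)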
Commit to Apache Cassandra This section walks you through making the connections to an Apache Cassandra cluster to commit the data model created or edited with NoSQL Workbench. Commit a data model 439 Amazon Keyspaces (for Apache Cassandra) Developer Guide Note Only data models that have been created with SimpleStrategy or NetworkTopologyStrategy can be committed to Apache Cassandra clusters. To change the replication strategy, edit the keyspace in the data modeler. 1. • User name – Enter the user name if authentication is enabled on the cluster. • Password – Enter the password if authentication is enabled on the cluster. • Contact points – Enter the contact points. • Local data center – Enter the name of the local data center. • Port – The connection uses port 9042. 2. Choose Commit to update the Apache Cassandra cluster with the data model. Commit a data model 440 Amazon Keyspaces (for Apache Cassandra) Developer Guide Commit a data model 441 Amazon Keyspaces (for Apache Cassandra) Developer Guide Sample data models in NoSQL Workbench The home page for the modeler and visualizer displays a number of sample models that ship with NoSQL Workbench. This section describes these models and their potential uses. Topics • Employee data model • Credit card transactions data model • Airline operations data model Employee data model This data model represents an Amazon Keyspaces schema for an employee database application. Applications that access employee information |
for a given company can use this data model. The access patterns supported by this data model are:
• Retrieval of an employee record with a given ID.
• Retrieval of an employee record with a given ID and division.
• Retrieval of an employee record with a given ID and name.

Credit card transactions data model

This data model represents an Amazon Keyspaces schema for credit card transactions at retail stores. The storage of credit card transactions not only helps stores with bookkeeping, but also helps store managers analyze purchase trends, which can help them with forecasting and planning. The access patterns supported by this data model are:
• Retrieval of transactions by credit card number, month and year, and date.
• Retrieval of transactions by credit card number, category, and date.
• Retrieval of transactions by category, location, and credit card number.
• Retrieval of transactions by credit card number and dispute status.

Airline operations data model

This data model shows data about plane flights, including airports, airlines, and flight routes. Key components of Amazon Keyspaces modeling that are demonstrated are key-value pairs, wide-column data stores, composite keys, and complex data types such as maps to demonstrate common NoSQL data-access patterns. The access patterns supported by this data model are:
• Retrieval of routes originating from a given airline at a given airport.
• Retrieval of routes with a given destination airport.
• Retrieval of airports with direct flights.
• Retrieval of airport details and airline details.

Release history for NoSQL Workbench

The following table describes the important changes in each release of the NoSQL Workbench client-side application.

Change | Description | Date
NoSQL Workbench for Amazon Keyspaces – GA | NoSQL Workbench for Amazon Keyspaces is generally available. | October 28, 2020
NoSQL Workbench preview released | NoSQL Workbench is a client-side application that helps you design and visualize nonrelational data models for Amazon Keyspaces more easily. NoSQL Workbench clients are available for Windows, macOS, and Linux. For more information, see NoSQL Workbench for Amazon Keyspaces. | October 5, 2020

Code examples for Amazon Keyspaces using AWS SDKs

The following code examples show how to use Amazon Keyspaces with an AWS software development kit (SDK). Basics are code examples that show you how to perform the essential operations within a service. Actions are code excerpts from larger programs and must be run in context. While actions show you how to call individual service functions, you can see actions in context in their related scenarios.

For a complete list of AWS SDK developer guides and code examples, see Using this service with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.

Get started

Hello Amazon Keyspaces

The following code examples show how to get started using Amazon Keyspaces.

.NET
SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

namespace KeyspacesActions;

public class HelloKeyspaces
{
    private static ILogger logger = null!;

    static async Task Main(string[] args)
    {
        // Set up dependency injection for Amazon Keyspaces (for Apache Cassandra).
        using var host = Host.CreateDefaultBuilder(args)
444
For a complete list of AWS SDK developer guides and code examples, see Using this service with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions. Get started Hello Amazon Keyspaces The following code examples show how to get started using Amazon Keyspaces. .NET SDK for .NET Note There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository. namespace KeyspacesActions; public class HelloKeyspaces { private static ILogger logger = null!; static async Task Main(string[] args) { // Set up dependency injection for Amazon Keyspaces (for Apache Cassandra). using var host = Host.CreateDefaultBuilder(args) 444 Amazon Keyspaces (for Apache Cassandra) Developer Guide .ConfigureLogging(logging => logging.AddFilter("System", LogLevel.Debug) .AddFilter<DebugLoggerProvider>("Microsoft", LogLevel.Information) .AddFilter<ConsoleLoggerProvider>("Microsoft", LogLevel.Trace)) .ConfigureServices((_, services) => services.AddAWSService<IAmazonKeyspaces>() .AddTransient<KeyspacesWrapper>() ) .Build(); logger = LoggerFactory.Create(builder => { builder.AddConsole(); }) .CreateLogger<HelloKeyspaces>(); var keyspacesClient = host.Services.GetRequiredService<IAmazonKeyspaces>(); var keyspacesWrapper = new KeyspacesWrapper(keyspacesClient); Console.WriteLine("Hello, Amazon Keyspaces! Let's list your keyspaces:"); await keyspacesWrapper.ListKeyspaces(); } } • For API details, see ListKeyspaces in AWS SDK for .NET API Reference. Java SDK for Java 2.x Note There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository. import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.keyspaces.KeyspacesClient; import software.amazon.awssdk.services.keyspaces.model.KeyspaceSummary; import software.amazon.awssdk.services.keyspaces.model.KeyspacesException; 445 Amazon Keyspaces (for Apache Cassandra) Developer Guide import software.amazon.awssdk.services.keyspaces.model.ListKeyspacesRequest; import software.amazon.awssdk.services.keyspaces.model.ListKeyspacesResponse; import java.util.List; /** * Before running this Java (v2) code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get- started.html */ public class HelloKeyspaces { public static void main(String[] |
Amazon Keyspaces (for Apache Cassandra) Developer Guide
import software.amazon.awssdk.services.keyspaces.model.ListKeyspacesRequest;
import software.amazon.awssdk.services.keyspaces.model.ListKeyspacesResponse;
import java.util.List;

/**
 * Before running this Java (v2) code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class HelloKeyspaces {
    public static void main(String[] args) {
        Region region = Region.US_EAST_1;
        KeyspacesClient keyClient = KeyspacesClient.builder()
                .region(region)
                .build();
        listKeyspaces(keyClient);
    }

    public static void listKeyspaces(KeyspacesClient keyClient) {
        try {
            ListKeyspacesRequest keyspacesRequest = ListKeyspacesRequest.builder()
                    .maxResults(10)
                    .build();
            ListKeyspacesResponse response = keyClient.listKeyspaces(keyspacesRequest);
            List<KeyspaceSummary> keyspaces = response.keyspaces();
            for (KeyspaceSummary keyspace : keyspaces) {
                System.out.println("The name of the keyspace is " + keyspace.keyspaceName());
            }
        } catch (KeyspacesException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
    }
}

• For API details, see ListKeyspaces in AWS SDK for Java 2.x API Reference.

Kotlin
SDK for Kotlin
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

/**
Before running this Kotlin code example, set up your development environment, including your credentials.

For more information, see the following documentation topic:
https://docs.aws.amazon.com/sdk-for-kotlin/latest/developer-guide/setup.html
*/
suspend fun main() {
    listKeyspaces()
}

suspend fun listKeyspaces() {
    val keyspacesRequest = ListKeyspacesRequest {
        maxResults = 10
    }

    KeyspacesClient { region = "us-east-1" }.use { keyClient ->
        val response = keyClient.listKeyspaces(keyspacesRequest)
        response.keyspaces?.forEach { keyspace ->
            println("The name of the keyspace is ${keyspace.keyspaceName}")
        }
    }
}

• For API details, see ListKeyspaces in AWS SDK for Kotlin API reference.

Python
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import boto3

def hello_keyspaces(keyspaces_client):
    """
    Use the AWS SDK for Python (Boto3) to create an Amazon Keyspaces (for Apache Cassandra)
    client and list the keyspaces in your account.
    This example uses the default settings specified in your shared credentials
    and config files.

    :param keyspaces_client: A Boto3 Amazon Keyspaces Client object. This object wraps
                             the low-level Amazon Keyspaces service API.
    """
    print("Hello, Amazon Keyspaces!
Let's list some of your keyspaces:\n") for ks in keyspaces_client.list_keyspaces(maxResults=5).get("keyspaces", []): print(ks["keyspaceName"]) print(f"\t{ks['resourceArn']}") if __name__ == "__main__": hello_keyspaces(boto3.client("keyspaces")) • For API details, see ListKeyspaces in AWS SDK for Python (Boto3) API Reference. 448 Amazon Keyspaces (for Apache Cassandra) Developer Guide Code examples • Basic examples for Amazon Keyspaces using AWS SDKs • Hello Amazon Keyspaces • Learn the basics of Amazon Keyspaces with an AWS SDK • Actions for Amazon Keyspaces using AWS SDKs • Use CreateKeyspace with an AWS SDK • Use CreateTable with an AWS SDK • Use DeleteKeyspace with an AWS SDK • Use DeleteTable with an AWS SDK • Use GetKeyspace with an AWS SDK • Use GetTable with an AWS SDK • Use ListKeyspaces with an AWS SDK • Use ListTables with an AWS SDK • Use RestoreTable with an AWS SDK • Use UpdateTable with an AWS SDK Basic examples for Amazon Keyspaces using AWS SDKs The following code examples show how to use the basics of Amazon Keyspaces (for Apache Cassandra) with AWS SDKs. Examples • Hello Amazon Keyspaces • Learn the basics of Amazon Keyspaces with an AWS SDK • Actions for Amazon Keyspaces using AWS SDKs • Use CreateKeyspace with an AWS SDK • Use CreateTable with an AWS SDK • Use DeleteKeyspace with an AWS SDK • Use DeleteTable with an AWS SDK • Use GetKeyspace with an AWS SDK • Use GetTable with an AWS SDK • Use ListKeyspaces with an AWS SDK Basics 449 Amazon Keyspaces (for Apache Cassandra) Developer Guide • Use ListTables with an AWS SDK • Use RestoreTable with an AWS SDK • Use UpdateTable with an AWS SDK Hello Amazon Keyspaces The following code examples show how to get started using Amazon Keyspaces. .NET SDK for .NET Note There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository. namespace KeyspacesActions; public class HelloKeyspaces { private static ILogger logger = null!; static async Task Main(string[] args) { // Set up dependency injection for Amazon Keyspaces (for Apache Cassandra). using var host = Host.CreateDefaultBuilder(args) .ConfigureLogging(logging => logging.AddFilter("System", LogLevel.Debug) .AddFilter<DebugLoggerProvider>("Microsoft", LogLevel.Information) .AddFilter<ConsoleLoggerProvider>("Microsoft", LogLevel.Trace)) .ConfigureServices((_, services) => services.AddAWSService<IAmazonKeyspaces>() .AddTransient<KeyspacesWrapper>() ) .Build(); Hello Amazon Keyspaces 450 Amazon Keyspaces (for Apache Cassandra) Developer Guide logger = LoggerFactory.Create(builder => { builder.AddConsole(); }) .CreateLogger<HelloKeyspaces>(); var keyspacesClient = host.Services.GetRequiredService<IAmazonKeyspaces>(); var keyspacesWrapper = new KeyspacesWrapper(keyspacesClient); Console.WriteLine("Hello, Amazon Keyspaces! Let's list your keyspaces:"); await keyspacesWrapper.ListKeyspaces(); } } • For |
Amazon Keyspaces (for Apache Cassandra) Developer Guide
        logger = LoggerFactory.Create(builder => { builder.AddConsole(); })
            .CreateLogger<HelloKeyspaces>();

        var keyspacesClient = host.Services.GetRequiredService<IAmazonKeyspaces>();
        var keyspacesWrapper = new KeyspacesWrapper(keyspacesClient);

        Console.WriteLine("Hello, Amazon Keyspaces! Let's list your keyspaces:");
        await keyspacesWrapper.ListKeyspaces();
    }
}

• For API details, see ListKeyspaces in AWS SDK for .NET API Reference.

Java
SDK for Java 2.x
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.keyspaces.KeyspacesClient;
import software.amazon.awssdk.services.keyspaces.model.KeyspaceSummary;
import software.amazon.awssdk.services.keyspaces.model.KeyspacesException;
import software.amazon.awssdk.services.keyspaces.model.ListKeyspacesRequest;
import software.amazon.awssdk.services.keyspaces.model.ListKeyspacesResponse;
import java.util.List;

/**
 * Before running this Java (v2) code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class HelloKeyspaces {
    public static void main(String[] args) {
        Region region = Region.US_EAST_1;
        KeyspacesClient keyClient = KeyspacesClient.builder()
                .region(region)
                .build();
        listKeyspaces(keyClient);
    }

    public static void listKeyspaces(KeyspacesClient keyClient) {
        try {
            ListKeyspacesRequest keyspacesRequest = ListKeyspacesRequest.builder()
                    .maxResults(10)
                    .build();
            ListKeyspacesResponse response = keyClient.listKeyspaces(keyspacesRequest);
            List<KeyspaceSummary> keyspaces = response.keyspaces();
            for (KeyspaceSummary keyspace : keyspaces) {
                System.out.println("The name of the keyspace is " + keyspace.keyspaceName());
            }
        } catch (KeyspacesException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
    }
}

• For API details, see ListKeyspaces in AWS SDK for Java 2.x API Reference.

Kotlin
SDK for Kotlin
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

/**
Before running this Kotlin code example, set up your development environment, including your credentials.
For more information, see the following documentation topic: https://docs.aws.amazon.com/sdk-for-kotlin/latest/developer-guide/setup.html */ suspend fun main() { listKeyspaces() } suspend fun listKeyspaces() { val keyspacesRequest = ListKeyspacesRequest { maxResults = 10 } KeyspacesClient { region = "us-east-1" }.use { keyClient -> val response = keyClient.listKeyspaces(keyspacesRequest) response.keyspaces?.forEach { keyspace -> println("The name of the keyspace is ${keyspace.keyspaceName}") } } } • For API details, see ListKeyspaces in AWS SDK for Kotlin API reference. Hello Amazon Keyspaces 453 Amazon Keyspaces (for Apache Cassandra) Developer Guide Python SDK for Python (Boto3) Note There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository. import boto3 def hello_keyspaces(keyspaces_client): """ Use the AWS SDK for Python (Boto3) to create an Amazon Keyspaces (for Apache Cassandra) client and list the keyspaces in your account. This example uses the default settings specified in your shared credentials and config files. :param keyspaces_client: A Boto3 Amazon Keyspaces Client object. This object wraps the low-level Amazon Keyspaces service API. """ print("Hello, Amazon Keyspaces! Let's list some of your keyspaces:\n") for ks in keyspaces_client.list_keyspaces(maxResults=5).get("keyspaces", []): print(ks["keyspaceName"]) print(f"\t{ks['resourceArn']}") if __name__ == "__main__": hello_keyspaces(boto3.client("keyspaces")) • For API details, see ListKeyspaces in AWS SDK for Python (Boto3) API Reference. For a complete list of AWS SDK developer guides and code examples, see Using this service with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions. Hello Amazon Keyspaces 454 Amazon Keyspaces (for Apache Cassandra) Developer Guide Learn the basics of Amazon Keyspaces with an AWS SDK The following code examples show how to: • Create a keyspace and table. The table schema holds movie data and has point-in-time recovery enabled. • Connect to the keyspace using a secure TLS connection with SigV4 authentication. • Query the table. Add, retrieve, and update movie data. • Update the table. Add a column to track watched movies. • Restore the table to its previous state and clean up resources. .NET SDK for .NET Note There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository. global using System.Security.Cryptography.X509Certificates; global using Amazon.Keyspaces; global using Amazon.Keyspaces.Model; global using KeyspacesActions; global using KeyspacesScenario; global using Microsoft.Extensions.Configuration; global using Microsoft.Extensions.DependencyInjection; global using Microsoft.Extensions.Hosting; global using Microsoft.Extensions.Logging; global using Microsoft.Extensions.Logging.Console; global using Microsoft.Extensions.Logging.Debug; global using Newtonsoft.Json; namespace KeyspacesBasics; /// <summary> /// Amazon Keyspaces (for Apache Cassandra) scenario. Shows some of the basic Learn the basics 455 Amazon Keyspaces (for Apache Cassandra) Developer Guide /// actions performed with Amazon Keyspaces. /// </summary> public class KeyspacesBasics { private static ILogger logger = null!; static async Task Main(string[] args) { // Set up dependency injection for the Amazon service. 
using var host = Host.CreateDefaultBuilder(args)
    .ConfigureLogging(logging =>
        logging.AddFilter("System", LogLevel.Debug)
            .AddFilter<DebugLoggerProvider>("Microsoft", LogLevel.Information)
            .AddFilter<ConsoleLoggerProvider>("Microsoft", LogLevel.Trace))
    .ConfigureServices((_, services) =>
        services.AddAWSService<IAmazonKeyspaces>()
            .AddTransient<KeyspacesWrapper>()
            .AddTransient<CassandraWrapper>()
    )
    .Build();

logger = LoggerFactory.Create(builder => { builder.AddConsole(); })
    .CreateLogger<KeyspacesBasics>();

var configuration = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("settings.json") // Load test settings from .json file.
    .AddJsonFile("settings.local.json", true) // Optionally load local settings.
    .Build();

var keyspacesWrapper = host.Services.GetRequiredService<KeyspacesWrapper>();
var uiMethods = new UiMethods();

var keyspaceName = configuration["KeyspaceName"];
var tableName = configuration["TableName"];

bool success; // Used to track the results of some operations.

uiMethods.DisplayOverview();
uiMethods.PressEnter();

// Create the keyspace.
var keyspaceArn = await keyspacesWrapper.CreateKeyspace(keyspaceName);

// Wait for the keyspace to be available. GetKeyspace results in a
// resource not found error until it is ready for use.
try
{
    var getKeyspaceArn = "";
    Console.Write($"Created {keyspaceName}. Waiting for it to become available. ");
    do
    {
        getKeyspaceArn = await keyspacesWrapper.GetKeyspace(keyspaceName);
        Console.Write(". ");
    } while (getKeyspaceArn != keyspaceArn);
}
catch (ResourceNotFoundException)
{
    Console.WriteLine("Waiting for keyspace to be created.");
}

Console.WriteLine($"\nThe keyspace {keyspaceName} is ready for use.");
uiMethods.PressEnter();

// Create the table.
// First define the schema.
var allColumns = new List<ColumnDefinition>
{
    new ColumnDefinition { Name = "title", Type = "text" },
    new ColumnDefinition { Name = "year", Type = "int" },
    new ColumnDefinition { Name = "release_date", Type = "timestamp" },
    new ColumnDefinition { Name = "plot", Type = "text" },
};

var partitionKeys = new List<PartitionKey>
{
    new PartitionKey { Name = "year", },
    new PartitionKey { Name = "title" },
};

var tableSchema = new SchemaDefinition
{
    AllColumns = allColumns,
    PartitionKeys = partitionKeys,
};

var tableArn = await keyspacesWrapper.CreateTable(keyspaceName, tableSchema, tableName);

// Wait for the table to be active.
try
{
    var resp = new GetTableResponse();
    Console.Write("Waiting for the new table to be active. 
"); do { try { resp = await keyspacesWrapper.GetTable(keyspaceName, tableName); Console.Write("."); } catch (ResourceNotFoundException) { Console.Write("."); } } while (resp.Status != TableStatus.ACTIVE); // Display the table's schema. Console.WriteLine($"\nTable {tableName} has been created in {keyspaceName}"); Console.WriteLine("Let's take a look at the schema."); uiMethods.DisplayTitle("All columns"); resp.SchemaDefinition.AllColumns.ForEach(column => { Console.WriteLine($"{column.Name,-40}\t{column.Type,-20}"); }); uiMethods.DisplayTitle("Cluster keys"); resp.SchemaDefinition.ClusteringKeys.ForEach(clusterKey => { Console.WriteLine($"{clusterKey.Name,-40}\t{clusterKey.OrderBy,-20}"); }); Learn the basics 458 Amazon Keyspaces (for Apache Cassandra) Developer Guide uiMethods.DisplayTitle("Partition keys"); resp.SchemaDefinition.PartitionKeys.ForEach(partitionKey => { Console.WriteLine($"{partitionKey.Name}"); }); uiMethods.PressEnter(); } catch (ResourceNotFoundException ex) { Console.WriteLine($"Error: {ex.Message}"); } // Access Apache Cassandra using the Cassandra drive for C#. var cassandraWrapper = host.Services.GetRequiredService<CassandraWrapper>(); var movieFilePath = configuration["MovieFile"]; Console.WriteLine("Let's add some movies to the table we created."); var inserted = await cassandraWrapper.InsertIntoMovieTable(keyspaceName, tableName, movieFilePath); uiMethods.PressEnter(); Console.WriteLine("Added the following movies to the table:"); var rows = await cassandraWrapper.GetMovies(keyspaceName, tableName); uiMethods.DisplayTitle("All Movies"); foreach (var row in rows) { var title = row.GetValue<string>("title"); var year = row.GetValue<int>("year"); var plot = row.GetValue<string>("plot"); var release_date = row.GetValue<DateTime>("release_date"); Console.WriteLine($"{release_date}\t{title}\t{year}\n{plot}"); Console.WriteLine(uiMethods.SepBar); } // Update the table schema uiMethods.DisplayTitle("Update table schema"); Console.WriteLine("Now we will update the table to add a boolean field called watched."); // First save the current time as a UTC Date so the original Learn the basics 459 Amazon Keyspaces (for Apache Cassandra) Developer Guide // table can be restored later. var timeChanged = DateTime.UtcNow; // Now update the schema. var resourceArn = await keyspacesWrapper.UpdateTable(keyspaceName, tableName); uiMethods.PressEnter(); Console.WriteLine("Now let's mark some of the movies as watched."); // Pick some files to mark as watched. 
var movieToWatch = rows[2].GetValue<string>("title"); var watchedMovieYear = rows[2].GetValue<int>("year"); var changedRows = await cassandraWrapper.MarkMovieAsWatched(keyspaceName, tableName, movieToWatch, watchedMovieYear); movieToWatch = rows[6].GetValue<string>("title"); watchedMovieYear = rows[6].GetValue<int>("year"); changedRows = await cassandraWrapper.MarkMovieAsWatched(keyspaceName, tableName, movieToWatch, watchedMovieYear); movieToWatch = rows[9].GetValue<string>("title"); watchedMovieYear = rows[9].GetValue<int>("year"); changedRows = await cassandraWrapper.MarkMovieAsWatched(keyspaceName, tableName, movieToWatch, watchedMovieYear); movieToWatch = rows[10].GetValue<string>("title"); watchedMovieYear = rows[10].GetValue<int>("year"); changedRows = await cassandraWrapper.MarkMovieAsWatched(keyspaceName, tableName, movieToWatch, watchedMovieYear); movieToWatch = rows[13].GetValue<string>("title"); watchedMovieYear = rows[13].GetValue<int>("year"); changedRows = await cassandraWrapper.MarkMovieAsWatched(keyspaceName, tableName, movieToWatch, watchedMovieYear); uiMethods.DisplayTitle("Watched movies"); Console.WriteLine("These movies have been marked as watched:"); rows = await cassandraWrapper.GetWatchedMovies(keyspaceName, tableName); foreach (var row in rows) { var title = row.GetValue<string>("title"); var year = row.GetValue<int>("year"); Console.WriteLine($"{title,-40}\t{year,8}"); Learn the basics 460 Amazon Keyspaces (for Apache Cassandra) Developer Guide } uiMethods.PressEnter(); Console.WriteLine("We can restore the table to its previous state but that can take up to 20 minutes to complete."); string answer; do { Console.WriteLine("Do you want to restore the table? (y/n)"); answer = Console.ReadLine(); } while (answer.ToLower() != "y" && answer.ToLower() != "n"); if (answer == "y") { var restoredTableName = $"{tableName}_restored"; var restoredTableArn = await keyspacesWrapper.RestoreTable( keyspaceName, tableName, restoredTableName, timeChanged); // |
Loop and call GetTable until the restored table reports an ACTIVE
// status, indicating that the restore operation has completed.
bool wasRestored = false;
try
{
    do
    {
        var resp = await keyspacesWrapper.GetTable(keyspaceName, restoredTableName);
        wasRestored = (resp.Status == TableStatus.ACTIVE);
    } while (!wasRestored);
}
catch (ResourceNotFoundException)
{
    // If the restored table raised an error, it isn't
    // ready yet.
    Console.Write(".");
}
}

uiMethods.DisplayTitle("Clean up resources.");

// Delete the table.
success = await keyspacesWrapper.DeleteTable(keyspaceName, tableName);

Console.WriteLine($"Table {tableName} successfully deleted from {keyspaceName}.");
Console.WriteLine("Waiting for the table to be removed completely. ");

// Loop and call GetTable until the table is gone. Once it has been
// deleted completely, GetTable will raise a ResourceNotFoundException.
bool wasDeleted = false;

try
{
    do
    {
        var resp = await keyspacesWrapper.GetTable(keyspaceName, tableName);
    } while (!wasDeleted);
}
catch (ResourceNotFoundException ex)
{
    wasDeleted = true;
    Console.WriteLine($"{ex.Message} indicates that the table has been deleted.");
}

// Delete the keyspace.
success = await keyspacesWrapper.DeleteKeyspace(keyspaceName);
Console.WriteLine("The keyspace has been deleted and the demo is now complete.");
}
}

namespace KeyspacesActions;

/// <summary>
/// Performs Amazon Keyspaces (for Apache Cassandra) actions.
/// </summary>
public class KeyspacesWrapper
{
    private readonly IAmazonKeyspaces _amazonKeyspaces;

    /// <summary>
    /// Constructor for the KeyspacesWrapper.
    /// </summary>
    /// <param name="amazonKeyspaces">An Amazon Keyspaces client object.</param>
    public KeyspacesWrapper(IAmazonKeyspaces amazonKeyspaces)
    {
        _amazonKeyspaces = amazonKeyspaces;
    }

    /// <summary>
    /// Create a new keyspace.
    /// </summary>
    /// <param name="keyspaceName">The name for the new keyspace.</param>
    /// <returns>The Amazon Resource Name (ARN) of the new keyspace.</returns>
    public async Task<string> CreateKeyspace(string keyspaceName)
    {
        var response = await _amazonKeyspaces.CreateKeyspaceAsync(
            new CreateKeyspaceRequest { KeyspaceName = keyspaceName });
        return response.ResourceArn;
    }

    /// <summary>
    /// Create a new Amazon Keyspaces table.
/// </summary> /// <param name="keyspaceName">The keyspace where the table will be created.</param> /// <param name="schema">The schema for the new table.</param> /// <param name="tableName">The name of the new table.</param> /// <returns>The Amazon Resource Name (ARN) of the new table.</returns> public async Task<string> CreateTable(string keyspaceName, SchemaDefinition schema, string tableName) { var request = new CreateTableRequest { KeyspaceName = keyspaceName, SchemaDefinition = schema, TableName = tableName, PointInTimeRecovery = new PointInTimeRecovery { Status = PointInTimeRecoveryStatus.ENABLED } }; Learn the basics 463 Amazon Keyspaces (for Apache Cassandra) Developer Guide var response = await _amazonKeyspaces.CreateTableAsync(request); return response.ResourceArn; } /// <summary> /// Delete an existing keyspace. /// </summary> /// <param name="keyspaceName"></param> /// <returns>A Boolean value indicating the success of the action.</returns> public async Task<bool> DeleteKeyspace(string keyspaceName) { var response = await _amazonKeyspaces.DeleteKeyspaceAsync( new DeleteKeyspaceRequest { KeyspaceName = keyspaceName }); return response.HttpStatusCode == HttpStatusCode.OK; } /// <summary> /// Delete an Amazon Keyspaces table. /// </summary> /// <param name="keyspaceName">The keyspace containing the table.</param> /// <param name="tableName">The name of the table to delete.</param> /// <returns>A Boolean value indicating the success of the action.</returns> public async Task<bool> DeleteTable(string keyspaceName, string tableName) { var response = await _amazonKeyspaces.DeleteTableAsync( new DeleteTableRequest { KeyspaceName = keyspaceName, TableName = tableName }); return response.HttpStatusCode == HttpStatusCode.OK; } /// <summary> /// Get data about a keyspace. /// </summary> /// <param name="keyspaceName">The name of the keyspace.</param> /// <returns>The Amazon Resource Name (ARN) of the keyspace.</returns> public async Task<string> GetKeyspace(string keyspaceName) { var response = await _amazonKeyspaces.GetKeyspaceAsync( new GetKeyspaceRequest { KeyspaceName = keyspaceName }); return response.ResourceArn; } Learn the basics 464 Amazon Keyspaces (for Apache Cassandra) Developer Guide /// <summary> /// Get information about an Amazon Keyspaces table. /// </summary> /// <param name="keyspaceName">The keyspace containing the table.</param> /// <param name="tableName">The name of the Amazon Keyspaces table.</param> /// <returns>The response containing data about the table.</returns> public async Task<GetTableResponse> GetTable(string keyspaceName, string tableName) { var response = await _amazonKeyspaces.GetTableAsync( new GetTableRequest { KeyspaceName = keyspaceName, TableName = tableName }); return response; } /// <summary> /// Lists all keyspaces for the account. /// </summary> /// <returns>Async task.</returns> public async Task ListKeyspaces() { var paginator = _amazonKeyspaces.Paginators.ListKeyspaces(new ListKeyspacesRequest()); Console.WriteLine("{0, -30}\t{1}", "Keyspace name", "Keyspace ARN"); Console.WriteLine(new string('-', Console.WindowWidth)); await foreach (var keyspace in paginator.Keyspaces) { Console.WriteLine($"{keyspace.KeyspaceName,-30}\t{keyspace.ResourceArn}"); } } /// <summary> /// Lists the Amazon Keyspaces tables in a keyspace. 
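/// ListTables is a paginated API; this method makes a single call, which
/// returns only the first page of results. For keyspaces with many tables,
/// the SDK's ListTables paginator could be used instead, in the same way that
/// ListKeyspaces uses Paginators.ListKeyspaces above (a suggested approach,
/// not part of the original sample).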
/// </summary>
/// <param name="keyspaceName">The name of the keyspace.</param>
/// <returns>A list of TableSummary objects.</returns>
public async Task<List<TableSummary>> ListTables(string keyspaceName)
{
    var response = await _amazonKeyspaces.ListTablesAsync(
        new ListTablesRequest { KeyspaceName = keyspaceName });
    response.Tables.ForEach(table =>
    {
        Console.WriteLine($"{table.KeyspaceName}\t{table.TableName}\t{table.ResourceArn}");
    });
    return response.Tables;
}

/// <summary>
/// Restores the specified table to the specified point in time.
/// </summary>
/// <param name="keyspaceName">The keyspace containing the table.</param>
/// <param name="tableName">The name of the table to restore.</param>
/// <param name="restoredTableName">The name to give the restored table.</param>
/// <param name="timestamp">The time to which the table will be restored.</param>
/// <returns>The Amazon Resource Name (ARN) of the restored table.</returns>
public async Task<string> RestoreTable(string keyspaceName, string tableName, string restoredTableName, DateTime timestamp)
{
    var request = new RestoreTableRequest
    {
        RestoreTimestamp = timestamp,
        SourceKeyspaceName = keyspaceName,
        SourceTableName = tableName,
        TargetKeyspaceName = keyspaceName,
        TargetTableName = restoredTableName
    };

    var response = await _amazonKeyspaces.RestoreTableAsync(request);
    return response.RestoredTableARN;
}

/// <summary>
/// Updates the movie table to add a boolean column named watched.
/// </summary>
/// <param name="keyspaceName">The keyspace containing the table.</param>
/// <param name="tableName">The name of the table to change.</param>
/// <returns>The Amazon Resource Name (ARN) of the updated table.</returns>
public async Task<string> UpdateTable(string keyspaceName, string tableName)
{
    var newColumn = new ColumnDefinition { Name = "watched", Type = "boolean" };
    var request = new UpdateTableRequest
    {
        KeyspaceName = keyspaceName,
        TableName = tableName,
        AddColumns = new List<ColumnDefinition> { newColumn }
    };
    var response = await _amazonKeyspaces.UpdateTableAsync(request);
    return response.ResourceArn;
}
}

using System.Net;
using Cassandra;

namespace KeyspacesScenario;

/// <summary>
/// Class to perform CRUD methods on an Amazon Keyspaces (for Apache Cassandra) database.
///
/// NOTE: This sample uses a plain text authenticator for example purposes only.
/// Recommended best practice is to use a SigV4 authentication plugin, if available.
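/// A minimal sketch of that alternative (the package and type names are
/// assumptions — they depend on the SigV4 plugin available for the C# driver):
///
///   .WithAuthProvider(new SigV4AuthProvider(RegionEndpoint.USEast1))
///
/// in place of the PlainTextAuthProvider call in the constructor below, which
/// would let the driver sign requests with IAM credentials instead of
/// service-specific credentials.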
/// </summary> public class CassandraWrapper { private readonly IConfiguration _configuration; private readonly string _localPathToFile; private const string _certLocation = "https://certs.secureserver.net/ repository/sf-class2-root.crt"; private const string _certFileName = "sf-class2-root.crt"; private readonly X509Certificate2Collection _certCollection; private X509Certificate2 _amazoncert; private Cluster _cluster; // User name and password for the service. private string _userName = null!; Learn the basics 467 Amazon Keyspaces (for Apache Cassandra) Developer Guide private string _pwd = null!; public CassandraWrapper() { _configuration = new ConfigurationBuilder() .SetBasePath(Directory.GetCurrentDirectory()) .AddJsonFile("settings.json") // Load test settings from .json file. .AddJsonFile("settings.local.json", true) // Optionally load local settings. .Build(); _localPathToFile = Path.GetTempPath(); // Get the Starfield digital certificate and save it locally. var client = new WebClient(); client.DownloadFile(_certLocation, $"{_localPathToFile}/ {_certFileName}"); //var httpClient = new HttpClient(); //var httpResult = httpClient.Get(fileUrl); //using var resultStream = await httpResult.Content.ReadAsStreamAsync(); //using var fileStream = File.Create(pathToSave); //resultStream.CopyTo(fileStream); _certCollection = new X509Certificate2Collection(); _amazoncert = new X509Certificate2($"{_localPathToFile}/ {_certFileName}"); // Get the user name and password stored in the configuration file. _userName = _configuration["UserName"]!; _pwd = _configuration["Password"]!; // For a list of Service Endpoints for Amazon Keyspaces, see: // https://docs.aws.amazon.com/keyspaces/latest/devguide/ programmatic.endpoints.html var awsEndpoint = _configuration["ServiceEndpoint"]; _cluster = Cluster.Builder() .AddContactPoints(awsEndpoint) .WithPort(9142) .WithAuthProvider(new PlainTextAuthProvider(_userName, _pwd)) .WithSSL(new SSLOptions().SetCertificateCollection(_certCollection)) .WithQueryOptions( new QueryOptions() Learn the basics 468 Amazon Keyspaces (for Apache Cassandra) Developer Guide .SetConsistencyLevel(ConsistencyLevel.LocalQuorum) .SetSerialConsistencyLevel(ConsistencyLevel.LocalSerial)) .Build(); } /// <summary> /// Loads the contents of a JSON file into a list of movies to be /// added to the Apache Cassandra table. /// </summary> /// <param name="movieFileName">The full path to the JSON file.</param> /// <returns>A list of movie objects.</returns> public List<Movie> ImportMoviesFromJson(string movieFileName, int numToImport = 0) { if (!File.Exists(movieFileName)) { return null!; } using var sr = new StreamReader(movieFileName); string json = sr.ReadToEnd(); var allMovies = JsonConvert.DeserializeObject<List<Movie>>(json); // If numToImport = 0, return all movies in the collection. if (numToImport == 0) { // Now return the entire list of movies. return allMovies; } else { // Now return the first numToImport entries. return allMovies.GetRange(0, numToImport); } } /// <summary> /// Insert movies into the movie table. 
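/// The INSERT statement below relies on CQL dollar-quoted string constants
/// ($$ ... $$), so an illustrative title such as Winter's Tale can be
/// embedded without escaping its single quote.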
/// </summary>
/// <param name="keyspaceName">The keyspace containing the table.</param>
/// <param name="movieTableName">The Amazon Keyspaces table.</param>
/// <param name="movieFilePath">The path to the resource file containing
/// movie data to insert into the table.</param>
/// <returns>A Boolean value indicating the success of the action.</returns>
public async Task<bool> InsertIntoMovieTable(string keyspaceName, string movieTableName, string movieFilePath, int numToImport = 20)
{
    // Get some movie data from the movies.json file
    var movies = ImportMoviesFromJson(movieFilePath, numToImport);

    var session = _cluster.Connect(keyspaceName);

    string insertCql;
    RowSet rs;

    // Now we insert the numToImport movies into the table.
    foreach (var movie in movies)
    {
        // Dollar-quote the title and plot so embedded single quote
        // characters don't need to be escaped.
        insertCql = $"INSERT INTO {keyspaceName}.{movieTableName} (title, year, release_date, plot) values($${movie.Title}$$, {movie.Year}, '{movie.Info.Release_Date.ToString("yyyy-MM-dd")}', $${movie.Info.Plot}$$)";
        rs = await session.ExecuteAsync(new SimpleStatement(insertCql));
    }

    return true;
}

/// <summary>
/// Gets all of the movies in the movies table.
/// </summary>
/// <param name="keyspaceName">The keyspace containing the table.</param>
/// <param name="tableName">The name of the table.</param>
/// <returns>A list of row objects containing movie data.</returns>
public async Task<List<Row>> GetMovies(string keyspaceName, string tableName)
{
    var session = _cluster.Connect();
    RowSet rs;
    try
    {
        rs = await session.ExecuteAsync(new SimpleStatement($"SELECT * FROM {keyspaceName}.{tableName}"));

        // Extract the row data from the returned RowSet.
        var rows = rs.GetRows().ToList();
        return rows;
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        return null!;
    }
}

/// <summary>
/// Mark a movie in the movie table as watched.
/// </summary>
/// <param name="keyspaceName">The keyspace containing the table.</param>
/// <param name="tableName">The name of the table.</param>
/// <param name="title">The title of the movie to mark as watched.</param>
/// <param name="year">The year the movie was released.</param>
/// <returns>A set of rows containing the changed data.</returns>
public async Task<List<Row>> MarkMovieAsWatched(string keyspaceName, string tableName, string title, int year)
{
    var session = _cluster.Connect();
    string updateCql = $"UPDATE {keyspaceName}.{tableName} SET watched=true WHERE title = $${title}$$ AND year = {year};";
    var rs = await session.ExecuteAsync(new SimpleStatement(updateCql));
    var rows = rs.GetRows().ToList();
    return rows;
}

/// <summary>
/// Retrieve the movies in the movies table where watched is true.
/// </summary>
/// <param name="keyspaceName">The keyspace containing the table.</param>
/// <param name="tableName">The name of the table.</param>
/// <returns>A list of row objects containing information about movies
/// where watched is true.</returns>
public async Task<List<Row>> GetWatchedMovies(string keyspaceName, string tableName)
{
    var session = _cluster.Connect();
    RowSet rs;
    try
    {
        rs = await session.ExecuteAsync(new SimpleStatement($"SELECT title, year, plot FROM {keyspaceName}.{tableName} WHERE watched = true ALLOW FILTERING"));

        // Extract the row data from the returned RowSet.
var rows = rs.GetRows().ToList(); return rows; } catch (Exception ex) { Console.WriteLine(ex.Message); return null!; } } } • For API details, see the following topics in AWS SDK for .NET API Reference. • CreateKeyspace • CreateTable • DeleteKeyspace • DeleteTable • GetKeyspace • GetTable • ListKeyspaces • ListTables • RestoreTable • UpdateTable Java SDK for Java 2.x Note There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository. Learn the basics 472 Amazon Keyspaces (for Apache Cassandra) Developer Guide /** * Before running this Java (v2) code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get- started.html * * Before running this Java code example, you must create a * Java keystore (JKS) file and place it in your project's resources folder. * * This file is a secure file format used to hold certificate information for * Java applications. This is required to make a connection to Amazon Keyspaces. * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/keyspaces/latest/devguide/using_java_driver.html * * This Java example performs the following tasks: * * 1. Create a keyspace. * 2. Check for keyspace existence. * 3. List keyspaces using a paginator. * 4. Create a table with a simple movie data schema and enable point-in-time * recovery. * 5. Check for the table to be in an Active state. * 6. List all tables in the keyspace. * 7. Use a Cassandra driver to insert some records into the Movie table. * 8. Get all records from the Movie table. * 9. Get a specific Movie. * 10. Get a UTC timestamp for the current time. * 11. Update the table schema to add a ‘watched’ Boolean column. * 12. Update an item as watched. * 13. Query for items with watched = True. * 14. Restore the table back to the previous state using the timestamp. * 15. Check for completion of the restore action. * 16. Delete the table. * 17. Confirm that both tables are deleted. * 18. Delete the keyspace. */ public class ScenarioKeyspaces { Learn the basics 473 Amazon Keyspaces (for Apache Cassandra) Developer Guide public static final String DASHES = |
new String(new char[80]).replace("\0", "-");

/*
 * Usage:
 * fileName - The name of the JSON file that contains movie data. (Get this file
 * from the GitHub repo at resources/sample_file.)
 * keyspaceName - The name of the keyspace to create.
 */
public static void main(String[] args) throws InterruptedException, IOException {
    String fileName = "<Replace with the JSON file that contains movie data>";
    String keyspaceName = "<Replace with the name of the keyspace to create>";
    String titleUpdate = "The Family";
    int yearUpdate = 2013;
    String tableName = "Movie";
    String tableNameRestore = "MovieRestore";
    Region region = Region.US_EAST_1;
    KeyspacesClient keyClient = KeyspacesClient.builder()
            .region(region)
            .build();

    DriverConfigLoader loader = DriverConfigLoader.fromClasspath("application.conf");
    CqlSession session = CqlSession.builder()
            .withConfigLoader(loader)
            .build();

    System.out.println(DASHES);
    System.out.println("Welcome to the Amazon Keyspaces example scenario.");
    System.out.println(DASHES);

    System.out.println(DASHES);
    System.out.println("1. Create a keyspace.");
    createKeySpace(keyClient, keyspaceName);
    System.out.println(DASHES);

    System.out.println(DASHES);
    Thread.sleep(5000);
    System.out.println("2. Check for keyspace existence.");
    checkKeyspaceExistence(keyClient, keyspaceName);
    System.out.println(DASHES);

    System.out.println(DASHES);
    System.out.println("3. List keyspaces using a paginator.");
    listKeyspacesPaginator(keyClient);
    System.out.println(DASHES);

    System.out.println(DASHES);
    System.out.println("4. Create a table with a simple movie data schema and enable point-in-time recovery.");
    createTable(keyClient, keyspaceName, tableName);
    System.out.println(DASHES);

    System.out.println(DASHES);
    System.out.println("5. Check for the table to be in an Active state.");
    Thread.sleep(6000);
    checkTable(keyClient, keyspaceName, tableName);
    System.out.println(DASHES);

    System.out.println(DASHES);
    System.out.println("6. List all tables in the keyspace.");
    listTables(keyClient, keyspaceName);
    System.out.println(DASHES);

    System.out.println(DASHES);
    System.out.println("7. Use a Cassandra driver to insert some records into the Movie table.");
    Thread.sleep(6000);
    loadData(session, fileName, keyspaceName);
    System.out.println(DASHES);

    System.out.println(DASHES);
    System.out.println("8. Get all records from the Movie table.");
    getMovieData(session, keyspaceName);
    System.out.println(DASHES);

    System.out.println(DASHES);
    System.out.println("9. Get a specific Movie.");
    getSpecificMovie(session, keyspaceName);
    System.out.println(DASHES);

    System.out.println(DASHES);
    System.out.println("10. 
Get a UTC timestamp for the current time."); ZonedDateTime utc = ZonedDateTime.now(ZoneOffset.UTC); Learn the basics 475 Amazon Keyspaces (for Apache Cassandra) Developer Guide System.out.println("DATETIME = " + Date.from(utc.toInstant())); System.out.println(DASHES); System.out.println(DASHES); System.out.println("11. Update the table schema to add a watched Boolean column."); updateTable(keyClient, keyspaceName, tableName); System.out.println(DASHES); System.out.println(DASHES); System.out.println("12. Update an item as watched."); Thread.sleep(10000); // Wait 10 secs for the update. updateRecord(session, keyspaceName, titleUpdate, yearUpdate); System.out.println(DASHES); System.out.println(DASHES); System.out.println("13. Query for items with watched = True."); getWatchedData(session, keyspaceName); System.out.println(DASHES); System.out.println(DASHES); System.out.println("14. Restore the table back to the previous state using the timestamp."); System.out.println("Note that the restore operation can take up to 20 minutes."); restoreTable(keyClient, keyspaceName, utc); System.out.println(DASHES); System.out.println(DASHES); System.out.println("15. Check for completion of the restore action."); Thread.sleep(5000); checkRestoredTable(keyClient, keyspaceName, "MovieRestore"); System.out.println(DASHES); System.out.println(DASHES); System.out.println("16. Delete both tables."); deleteTable(keyClient, keyspaceName, tableName); deleteTable(keyClient, keyspaceName, tableNameRestore); System.out.println(DASHES); System.out.println(DASHES); System.out.println("17. Confirm that both tables are deleted."); checkTableDelete(keyClient, keyspaceName, tableName); checkTableDelete(keyClient, keyspaceName, tableNameRestore); Learn the basics 476 Amazon Keyspaces (for Apache Cassandra) Developer Guide System.out.println(DASHES); System.out.println(DASHES); System.out.println("18. Delete the keyspace."); deleteKeyspace(keyClient, keyspaceName); System.out.println(DASHES); System.out.println(DASHES); System.out.println("The scenario has completed successfully."); System.out.println(DASHES); } public static void deleteKeyspace(KeyspacesClient keyClient, String keyspaceName) { try { DeleteKeyspaceRequest deleteKeyspaceRequest = DeleteKeyspaceRequest.builder() .keyspaceName(keyspaceName) .build(); keyClient.deleteKeyspace(deleteKeyspaceRequest); } catch (KeyspacesException e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } public static void checkTableDelete(KeyspacesClient keyClient, String keyspaceName, String tableName) throws InterruptedException { try { String status; GetTableResponse response; GetTableRequest tableRequest = GetTableRequest.builder() .keyspaceName(keyspaceName) .tableName(tableName) .build(); // Keep looping until table cannot be found and a ResourceNotFoundException is // thrown. while (true) { response = keyClient.getTable(tableRequest); Learn the basics 477 Amazon Keyspaces (for Apache Cassandra) Developer Guide status = response.statusAsString(); System.out.println(". 
The table status is " + status); Thread.sleep(500); } } catch (ResourceNotFoundException e) { System.err.println(e.awsErrorDetails().errorMessage()); } System.out.println("The table is deleted"); } public static void deleteTable(KeyspacesClient keyClient, String keyspaceName, String tableName) { try { DeleteTableRequest tableRequest = DeleteTableRequest.builder() .keyspaceName(keyspaceName) .tableName(tableName) .build(); keyClient.deleteTable(tableRequest); } catch (KeyspacesException e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } public static void checkRestoredTable(KeyspacesClient keyClient, String keyspaceName, String tableName) throws InterruptedException { try { boolean tableStatus = false; String status; GetTableResponse response = null; GetTableRequest tableRequest = GetTableRequest.builder() .keyspaceName(keyspaceName) .tableName(tableName) .build(); while (!tableStatus) { response = keyClient.getTable(tableRequest); status = response.statusAsString(); System.out.println("The table status is " + status); Learn the basics 478 Amazon Keyspaces (for Apache Cassandra) Developer Guide if (status.compareTo("ACTIVE") == 0) { tableStatus = true; } Thread.sleep(500); } List<ColumnDefinition> cols = response.schemaDefinition().allColumns(); for (ColumnDefinition def : cols) { System.out.println("The column name is " + def.name()); System.out.println("The column type is " + def.type()); } } catch (KeyspacesException e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } public static void restoreTable(KeyspacesClient keyClient, String keyspaceName, ZonedDateTime utc) { try { Instant myTime = utc.toInstant(); RestoreTableRequest restoreTableRequest = RestoreTableRequest.builder() .restoreTimestamp(myTime) .sourceTableName("Movie") .targetKeyspaceName(keyspaceName) .targetTableName("MovieRestore") .sourceKeyspaceName(keyspaceName) .build(); RestoreTableResponse response = keyClient.restoreTable(restoreTableRequest); System.out.println("The ARN of the restored table is " + response.restoredTableARN()); } |
catch (KeyspacesException e) {
    System.err.println(e.awsErrorDetails().errorMessage());
    System.exit(1);
}
}

public static void getWatchedData(CqlSession session, String keyspaceName) {
    ResultSet resultSet = session
            .execute("SELECT * FROM \"" + keyspaceName + "\".\"Movie\" WHERE watched = true ALLOW FILTERING;");
    resultSet.forEach(item -> {
        System.out.println("The Movie title is " + item.getString("title"));
        System.out.println("The Movie year is " + item.getInt("year"));
        System.out.println("The plot is " + item.getString("plot"));
    });
}

public static void updateRecord(CqlSession session, String keySpace, String titleUpdate, int yearUpdate) {
    String sqlStatement = "UPDATE \"" + keySpace + "\".\"Movie\" SET watched=true WHERE title = :k0 AND year = :k1;";
    BatchStatementBuilder builder = BatchStatement.builder(DefaultBatchType.UNLOGGED);
    builder.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
    PreparedStatement preparedStatement = session.prepare(sqlStatement);
    builder.addStatement(preparedStatement.boundStatementBuilder()
            .setString("k0", titleUpdate)
            .setInt("k1", yearUpdate)
            .build());
    BatchStatement batchStatement = builder.build();
    session.execute(batchStatement);
}

public static void updateTable(KeyspacesClient keyClient, String keySpace, String tableName) {
    try {
        ColumnDefinition def = ColumnDefinition.builder()
                .name("watched")
                .type("boolean")
                .build();

        UpdateTableRequest tableRequest = UpdateTableRequest.builder()
                .keyspaceName(keySpace)
                .tableName(tableName)
                .addColumns(def)
                .build();

        keyClient.updateTable(tableRequest);

    } catch (KeyspacesException e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
}

public static void getSpecificMovie(CqlSession session, String keyspaceName) {
    ResultSet resultSet = session.execute(
            "SELECT * FROM \"" + keyspaceName + "\".\"Movie\" WHERE title = 'The Family' ALLOW FILTERING ;");
    resultSet.forEach(item -> {
        System.out.println("The Movie title is " + item.getString("title"));
        System.out.println("The Movie year is " + item.getInt("year"));
        System.out.println("The plot is " + item.getString("plot"));
    });
}

// Get records from the Movie table.
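// (Throughout these methods the keyspace and table identifiers are
// double-quoted in the CQL text — e.g. "Movie" — which makes them
// case-sensitive; unquoted identifiers would be folded to lowercase.)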
public static void getMovieData(CqlSession session, String keyspaceName) { ResultSet resultSet = session.execute("SELECT * FROM \"" + keyspaceName + "\".\"Movie\";"); resultSet.forEach(item -> { System.out.println("The Movie title is " + item.getString("title")); System.out.println("The Movie year is " + item.getInt("year")); System.out.println("The plot is " + item.getString("plot")); }); } // Load data into the table. public static void loadData(CqlSession session, String fileName, String keySpace) throws IOException { String sqlStatement = "INSERT INTO \"" + keySpace + "\".\"Movie\" (title, year, plot) values (:k0, :k1, :k2)"; JsonParser parser = new JsonFactory().createParser(new File(fileName)); com.fasterxml.jackson.databind.JsonNode rootNode = new ObjectMapper().readTree(parser); Iterator<JsonNode> iter = rootNode.iterator(); ObjectNode currentNode; int t = 0; while (iter.hasNext()) { // Add 20 movies to the table. if (t == 20) Learn the basics 481 Amazon Keyspaces (for Apache Cassandra) Developer Guide break; currentNode = (ObjectNode) iter.next(); int year = currentNode.path("year").asInt(); String title = currentNode.path("title").asText(); String plot = currentNode.path("info").path("plot").toString(); // Insert the data into the Amazon Keyspaces table. BatchStatementBuilder builder = BatchStatement.builder(DefaultBatchType.UNLOGGED); builder.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM); PreparedStatement preparedStatement = session.prepare(sqlStatement); builder.addStatement(preparedStatement.boundStatementBuilder() .setString("k0", title) .setInt("k1", year) .setString("k2", plot) .build()); BatchStatement batchStatement = builder.build(); session.execute(batchStatement); t++; } System.out.println("You have added " + t + " records successfully!"); } public static void listTables(KeyspacesClient keyClient, String keyspaceName) { try { ListTablesRequest tablesRequest = ListTablesRequest.builder() .keyspaceName(keyspaceName) .build(); ListTablesIterable listRes = keyClient.listTablesPaginator(tablesRequest); listRes.stream() .flatMap(r -> r.tables().stream()) .forEach(content -> System.out.println(" ARN: " + content.resourceArn() + " Table name: " + content.tableName())); } catch (KeyspacesException e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); Learn the basics 482 Amazon Keyspaces (for Apache Cassandra) Developer Guide } } public static void checkTable(KeyspacesClient keyClient, String keyspaceName, String tableName) throws InterruptedException { try { boolean tableStatus = false; String status; GetTableResponse response = null; GetTableRequest tableRequest = GetTableRequest.builder() .keyspaceName(keyspaceName) .tableName(tableName) .build(); while (!tableStatus) { response = keyClient.getTable(tableRequest); status = response.statusAsString(); System.out.println(". The table status is " + status); if (status.compareTo("ACTIVE") == 0) { tableStatus = true; } Thread.sleep(500); } List<ColumnDefinition> cols = response.schemaDefinition().allColumns(); for (ColumnDefinition def : cols) { System.out.println("The column name is " + def.name()); System.out.println("The column type is " + def.type()); } } catch (KeyspacesException e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } public static void createTable(KeyspacesClient keyClient, String keySpace, String tableName) { try { // Set the columns. 
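// (For reference, the schema built below corresponds roughly to this CQL DDL —
// the CUSTOM_PROPERTIES syntax for point-in-time recovery is an Amazon
// Keyspaces extension, shown here as an assumption:
//   CREATE TABLE "<keyspace>"."Movie" (
//       year int, title text, release_date timestamp, plot text,
//       PRIMARY KEY ((year, title)))
//   WITH CUSTOM_PROPERTIES = {'point_in_time_recovery': {'status': 'enabled'}};
// Both year and title are partition key columns, forming a composite
// partition key.)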
ColumnDefinition defTitle = ColumnDefinition.builder()
        .name("title")
        .type("text")
        .build();

ColumnDefinition defYear = ColumnDefinition.builder()
        .name("year")
        .type("int")
        .build();

ColumnDefinition defReleaseDate = ColumnDefinition.builder()
        .name("release_date")
        .type("timestamp")
        .build();

ColumnDefinition defPlot = ColumnDefinition.builder()
        .name("plot")
        .type("text")
        .build();

List<ColumnDefinition> colList = new ArrayList<>();
colList.add(defTitle);
colList.add(defYear);
colList.add(defReleaseDate);
colList.add(defPlot);

// Set the keys.
PartitionKey yearKey = PartitionKey.builder()
        .name("year")
        .build();

PartitionKey titleKey = PartitionKey.builder()
        .name("title")
        .build();

List<PartitionKey> keyList = new ArrayList<>();
keyList.add(yearKey);
keyList.add(titleKey);

SchemaDefinition schemaDefinition = SchemaDefinition.builder()
        .partitionKeys(keyList)
        .allColumns(colList)
        .build();

PointInTimeRecovery timeRecovery = PointInTimeRecovery.builder()
        .status(PointInTimeRecoveryStatus.ENABLED)
        .build();

CreateTableRequest tableRequest = CreateTableRequest.builder()
        .keyspaceName(keySpace)
        .tableName(tableName)
        .schemaDefinition(schemaDefinition)
        .pointInTimeRecovery(timeRecovery)
        .build();

CreateTableResponse response = keyClient.createTable(tableRequest);
System.out.println("The table ARN is " + response.resourceArn());
} catch