id
stringlengths 8
78
| source
stringclasses 743
values | chunk_id
int64 1
5.05k
| text
stringlengths 593
49.7k
|
---|---|---|---|
analytics-java-api-161 | analytics-java-api.pdf | 161 | application 530 Managed Service for Apache Flink To run the application Managed Service for Apache Flink Developer Guide 1. On the console for Amazon Managed Service for Apache Flink, choose My Application and choose Run. 2. On the next page, the Application restore configuration page, choose Run with latest snapshot and then choose Run. The Status in Application details transitions from Ready to Starting and then to Running when the application has started. When the application is in the Running status, you can now open the Flink dashboard. To open the dashboard 1. Choose Open Apache Flink dashboard. The dashboard opens on a new page. 2. In the Runing jobs list, choose the single job that you can see. Note If you set the Runtime properties or edited the IAM policies incorrectly, the application status might turn into Running, but the Flink dashboard shows that the job is continuously restarting. This is a common failure scenario if the application is misconfigured or lacks permissions to access the external resources. When this happens, check the Exceptions tab in the Flink dashboard to see the cause of the problem. Observe the metrics of the running application On the MyApplication page, in the Amazon CloudWatch metrics section, you can see some of the fundamental metrics from the running application. To view the metrics 1. Next to the Refresh button, select 10 seconds from the dropdown list. 2. When the application is running and healthy, you can see the uptime metric continuously increasing. Create and configure the Managed Service for Apache Flink application 531 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 3. The fullrestarts metric should be zero. If it is increasing, the configuration might have issues. To investigate the issue, review the Exceptions tab on the Flink dashboard. 4. The Number of failed checkpoints metric should be zero in a healthy application. Note This dashboard displays a fixed set of metrics with a granularity of 5 minutes. You can create a custom application dashboard with any metrics in the CloudWatch dashboard. Observe output data in Kinesis streams Make sure you are still publishing data to the input, either using the Python script or the Kinesis Data Generator. You can now observe the output of the application running on Managed Service for Apache Flink by using the Data Viewer in the https://console.aws.amazon.com/kinesis/, similarly to what you already did earlier. To view the output 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. Verify that the Region is the same as the one you are using to run this tutorial. By default, it is us-east-1US East (N. Virginia). Change the Region if necessary. 3. Choose Data Streams. 4. Select the stream that you want to observe. For this tutorial, use ExampleOutputStream. 5. Choose the Data viewer tab. 6. Select any Shard, keep Latest as Starting position, and then choose Get records. You might see a "no record found for this request" error. If so, choose Retry getting records. The newest records published to the stream display. 7. Select the value in the Data column to inspect the content of the record in JSON format. Stop the application To stop the applicatio, go to the console page of the Managed Service for Apache Flink application named MyApplication. Create and configure the Managed Service for Apache Flink application 532 Managed Service for Apache Flink To stop the application Managed Service for Apache Flink Developer Guide 1. 2. From the Action dropdown list, choose Stop. The Status in Application details transitions from Running to Stopping, and then to Ready when the application is completely stopped. Note Don't forget to also stop sending data to the input stream from the Python script or the Kinesis Data Generator. Next step Clean up AWS resources Clean up AWS resources This section includes procedures for cleaning up AWS resources created in the Getting Started (Python) tutorial. This topic contains the following sections. • Delete your Managed Service for Apache Flink application • Delete your Kinesis data streams • Delete your Amazon S3 objects and bucket • Delete your IAM resources • Delete your CloudWatch resources Delete your Managed Service for Apache Flink application Use the following procedure to delete the application. To delete the application 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. 3. In the Managed Service for Apache Flink panel, choose MyApplication. From the Actions dropdown list, choose Delete and then confirm the deletion. Next step 533 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Delete your Kinesis data streams 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. Choose Data streams. 3. Select the two streams that you created, ExampleInputStream and ExampleOutputStream. 4. From the Actions dropdown list, choose Delete, and then confirm the deletion. Delete your Amazon |
analytics-java-api-162 | analytics-java-api.pdf | 162 | delete the application. To delete the application 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. 3. In the Managed Service for Apache Flink panel, choose MyApplication. From the Actions dropdown list, choose Delete and then confirm the deletion. Next step 533 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Delete your Kinesis data streams 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. Choose Data streams. 3. Select the two streams that you created, ExampleInputStream and ExampleOutputStream. 4. From the Actions dropdown list, choose Delete, and then confirm the deletion. Delete your Amazon S3 objects and bucket Use the following procedure to delete your S3 objects and bucket. To delete the object from the S3 bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. 3. Select the S3 bucket that you created for the application artifact. Select the application artifact you uploaded, named amazon-msf-java-stream- app-1.0.jar. 4. Choose Delete and confirm the deletion. To delete the S3 bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Select the bucket that you created for the artifacts. 3. Choose Delete and confirm the deletion. Note The S3 bucket must be empty to delete it. Delete your IAM resources Use the following procedure to delete your IAM resources. Delete your Kinesis data streams 534 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide To delete your IAM resources 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. 3. In the navigation bar, choose Policies. In the filter control, enter kinesis. 4. Choose the kinesis-analytics-service-MyApplication-us-east-1 policy. 5. Choose Policy Actions and then choose Delete. 6. In the navigation bar, choose Roles. 7. Choose the kinesis-analytics-MyApplication-us-east-1 role. 8. Choose Delete role and then confirm the deletion. Delete your CloudWatch resources Use the following procedure to delete your CloudWatch resources. To delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. 4. Choose Delete Log Group and then confirm the deletion. Delete your CloudWatch resources 535 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Get started (Scala) Note Starting from version 1.15, Flink is Scala free. Applications can now use the Java API from any Scala version. Flink still uses Scala in a few key components internally, but doesn't expose Scala into the user code classloader. Because of that, you must add Scala dependencies into your JAR-archives. For more information about Scala changes in Flink 1.15, see Scala Free in One Fifteen. In this exercise, you create a Managed Service for Apache Flink application for Scala with a Kinesis stream as a source and a sink. This topic contains the following sections: • Create dependent resources • Write sample records to the input stream • Download and examine the application code • Compile and upload the application code • Create and run the application (console) • Create and run the application (CLI) • Clean up AWS resources Create dependent resources Before you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources: • Two Kinesis streams for input and output. • An Amazon S3 bucket to store the application's code (ka-app-code-<username>) You can create the Kinesis streams and Amazon S3 bucket using the console. For instructions for creating these resources, see the following topics: Create dependent resources 536 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name your data streams ExampleInputStream and ExampleOutputStream. To create the data streams (AWS CLI) • To create the first stream (ExampleInputStream), use the following Amazon Kinesis create- stream AWS CLI command. aws kinesis create-stream \ --stream-name ExampleInputStream \ --shard-count 1 \ --region us-west-2 \ --profile adminuser • To create the second stream that the application uses to write output, run the same command, changing the stream name to ExampleOutputStream. aws kinesis create-stream \ --stream-name ExampleOutputStream \ --shard-count 1 \ --region us-west-2 \ --profile adminuser • How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app- code-<username>. Other resources When you create your application, Managed Service for Apache Flink creates the following Amazon CloudWatch resources if they don't already exist: • A log group called /AWS/KinesisAnalytics-java/MyApplication • A log stream called kinesis-analytics-log-stream Write sample records to the input stream In this section, you use a Python script to write sample records to the stream for the application to process. Write sample records to the input stream 537 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note This section requires the AWS SDK |
analytics-java-api-163 | analytics-java-api.pdf | 163 | unique name by appending your login name, such as ka-app- code-<username>. Other resources When you create your application, Managed Service for Apache Flink creates the following Amazon CloudWatch resources if they don't already exist: • A log group called /AWS/KinesisAnalytics-java/MyApplication • A log stream called kinesis-analytics-log-stream Write sample records to the input stream In this section, you use a Python script to write sample records to the stream for the application to process. Write sample records to the input stream 537 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note This section requires the AWS SDK for Python (Boto). Note The Python script in this section uses the AWS CLI. You must configure your AWS CLI to use your account credentials and default region. To configure your AWS CLI, enter the following: aws configure 1. Create a file named stock.py with the following contents: import datetime import json import random import boto3 STREAM_NAME = "ExampleInputStream" def get_data(): return { 'event_time': datetime.datetime.now().isoformat(), 'ticker': random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']), 'price': round(random.random() * 100, 2)} def generate(stream_name, kinesis_client): while True: data = get_data() print(data) kinesis_client.put_record( StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey") Write sample records to the input stream 538 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide if __name__ == '__main__': generate(STREAM_NAME, boto3.client('kinesis', region_name='us-west-2')) 2. Run the stock.py script: $ python stock.py Keep the script running while completing the rest of the tutorial. Download and examine the application code The Python application code for this example is available from GitHub. To download the application code, do the following: 1. Install the Git client if you haven't already. For more information, see Installing Git. 2. Clone the remote repository with the following command: git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git 3. Navigate to the amazon-kinesis-data-analytics-java-examples/scala/ GettingStarted directory. Note the following about the application code: • A build.sbt file contains information about the application's configuration and dependencies, including the Managed Service for Apache Flink libraries. • The BasicStreamingJob.scala file contains the main method that defines the application's functionality. • The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source: private def createSource: FlinkKinesisConsumer[String] = { val applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties val inputProperties = applicationProperties.get("ConsumerConfigProperties") new FlinkKinesisConsumer[String](inputProperties.getProperty(streamNameKey, defaultInputStreamName), new SimpleStringSchema, inputProperties) Download and examine the application code 539 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide } The application also uses a Kinesis sink to write into the result stream. The following snippet creates the Kinesis sink: private def createSink: KinesisStreamsSink[String] = { val applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties val outputProperties = applicationProperties.get("ProducerConfigProperties") KinesisStreamsSink.builder[String] .setKinesisClientProperties(outputProperties) .setSerializationSchema(new SimpleStringSchema) .setStreamName(outputProperties.getProperty(streamNameKey, defaultOutputStreamName)) .setPartitionKeyGenerator((element: String) => String.valueOf(element.hashCode)) .build } • The application creates source and sink connectors to access external resources using a StreamExecutionEnvironment object. • The application creates source and sink connectors using dynamic application properties. Runtime application's properties are read to configure the connectors. For more information about runtime properties, see Runtime Properties. Compile and upload the application code In this section, you compile and upload your application code to the Amazon S3 bucket you created in the Create dependent resources section. Compile the Application Code In this section, you use the SBT build tool to build the Scala code for the application. To install SBT, see Install sbt with cs setup. You also need to install the Java Development Kit (JDK). See Prerequisites for Completing the Exercises. 1. To use your application code, you compile and package it into a JAR file. You can compile and package your code with SBT: sbt assembly Compile and upload the application code 540 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 2. If the application compiles successfully, the following file is created: target/scala-3.2.0/getting-started-scala-1.0.jar Upload the Apache Flink Streaming Scala Code In this section, you create an Amazon S3 bucket and upload your application code. 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose Create bucket 3. 4. 5. Enter ka-app-code-<username> in the Bucket name field. Add a suffix to the bucket name, such as your user name, to make it globally unique. Choose Next. In Configure options, keep the settings as they are, and choose Next. In Set permissions, keep the settings as they are, and choose Next. 6. Choose Create bucket. 7. Choose the ka-app-code-<username> bucket, and then choose Upload. 8. In the Select files step, choose Add files. Navigate to the getting-started- scala-1.0.jar file that you created in the previous step. 9. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the application (console) Follow these steps to create, configure, update, and run the application using the console. Create the Application 1. Open the Managed Service |
analytics-java-api-164 | analytics-java-api.pdf | 164 | are, and choose Next. 6. Choose Create bucket. 7. Choose the ka-app-code-<username> bucket, and then choose Upload. 8. In the Select files step, choose Add files. Navigate to the getting-started- scala-1.0.jar file that you created in the previous step. 9. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the application (console) Follow these steps to create, configure, update, and run the application using the console. Create the Application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. On the Managed Service for Apache Flink dashboard, choose Create analytics application. 3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: • For Application name, enter MyApplication. Create and run the application (console) 541 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • For Description, enter My scala test app. • For Runtime, choose Apache Flink. • Keep the version as Apache Flink version 1.19.1. 4. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-west-2. 5. Choose Create application. Note When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: • Policy: kinesis-analytics-service-MyApplication-us-west-2 • Role: kinesisanalytics-MyApplication-us-west-2 Configure the application Use the following procedure to configure the application. To configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter getting-started-scala-1.0.jar.. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Under Properties, choose Add group. 5. Enter the following: Configure the application 542 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Group ID Key Value ConsumerConfigProp input.stream.name ExampleInputStream erties ConsumerConfigProp aws.region us-west-2 erties ConsumerConfigProp flink.stream.initp LATEST erties os Choose Save. 6. Under Properties, choose Add group again. 7. Enter the following: Group ID Key Value ProducerConfigProp output.stream.name ExampleOutputStream erties ProducerConfigProp aws.region us-west-2 erties 8. Under Monitoring, ensure that the Monitoring metrics level is set to Application. 9. For CloudWatch logging, choose the Enable check box. 10. Choose Update. Note When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication Configure the application 543 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Log stream: kinesis-analytics-log-stream Edit the IAM policy Edit the IAM policy to add permissions to access the Amazon S3 bucket. To edit the IAM policy to add S3 bucket permissions 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section. 3. On the Summary page, choose Edit policy. Choose the JSON tab. 4. Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID. { "Version": "2012-10-17", "Statement": [ { "Sid": "ReadCode", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:s3:::ka-app-code-username/getting-started-scala-1.0.jar" ] }, { "Sid": "DescribeLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*" ] }, { Edit the IAM policy 544 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "Sid": "DescribeLogStreams", "Effect": "Allow", "Action": [ "logs:DescribeLogStreams" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:*" ] }, { "Sid": "PutLogEvents", "Effect": "Allow", "Action": [ "logs:PutLogEvents" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:kinesis-analytics-log-stream" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleOutputStream" } ] } Run the application The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job. Run the application 545 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Stop the application To stop the application, on the MyApplication page, choose Stop. Confirm the action. Create and run the application (CLI) In this section, you use the AWS Command Line Interface to create and run the Managed Service for Apache Flink application. Use the kinesisanalyticsv2 AWS CLI command to create and interact with Managed Service for Apache Flink applications. Create a permissions policy Note You must create a permissions policy and role for your application. If you do not create these IAM resources, your application cannot access its data and log streams. First, you |
analytics-java-api-165 | analytics-java-api.pdf | 165 | for Apache Flink Developer Guide Stop the application To stop the application, on the MyApplication page, choose Stop. Confirm the action. Create and run the application (CLI) In this section, you use the AWS Command Line Interface to create and run the Managed Service for Apache Flink application. Use the kinesisanalyticsv2 AWS CLI command to create and interact with Managed Service for Apache Flink applications. Create a permissions policy Note You must create a permissions policy and role for your application. If you do not create these IAM resources, your application cannot access its data and log streams. First, you create a permissions policy with two statements: one that grants permissions for the read action on the source stream, and another that grants permissions for write actions on the sink stream. You then attach the policy to an IAM role (which you create in the next section). Thus, when Managed Service for Apache Flink assumes the role, the service has the necessary permissions to read from the source stream and write to the sink stream. Use the following code to create the AKReadSourceStreamWriteSinkStream permissions policy. Replace username with the user name that you used to create the Amazon S3 bucket to store the application code. Replace the account ID in the Amazon Resource Names (ARNs) (012345678901) with your account ID. { "Version": "2012-10-17", "Statement": [ { "Sid": "ReadCode", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": [ Stop the application 546 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "arn:aws:s3:::ka-app-code-username/getting-started-scala-1.0.jar" ] }, { "Sid": "DescribeLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*" ] }, { "Sid": "DescribeLogStreams", "Effect": "Allow", "Action": [ "logs:DescribeLogStreams" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/ MyApplication:log-stream:*" ] }, { "Sid": "PutLogEvents", "Effect": "Allow", "Action": [ "logs:PutLogEvents" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/ MyApplication:log-stream:kinesis-analytics-log-stream" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", Create a permissions policy 547 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleOutputStream" } ] } For step-by-step instructions to create a permissions policy, see Tutorial: Create and Attach Your First Customer Managed Policy in the IAM User Guide. Create an IAM policy In this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream. Managed Service for Apache Flink cannot access your stream without permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role, and the permissions policy determines what Managed Service for Apache Flink can do after assuming the role. You attach the permissions policy that you created in the preceding section to this role. To create an IAM role 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. In the navigation pane, choose Roles and then Create Role. 3. Under Select type of trusted identity, choose AWS Service 4. Under Choose the service that will use this role, choose Kinesis. 5. Under Select your use case, choose Managed Service for Apache Flink. 6. Choose Next: Permissions. 7. On the Attach permissions policies page, choose Next: Review. You attach permissions policies after you create the role. 8. On the Create role page, enter MF-stream-rw-role for the Role name. Choose Create role. Now you have created a new IAM role called MF-stream-rw-role. Next, you update the trust and permissions policies for the role Create an IAM policy 548 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 9. Attach the permissions policy to the role. Note For this exercise, Managed Service for Apache Flink assumes this role for both reading data from a Kinesis data stream (source) and writing output to another Kinesis data stream. So you attach the policy that you created in the previous step, Create a Permissions Policy. a. On the Summary page, choose the Permissions tab. b. Choose Attach Policies. c. In the search box, enter AKReadSourceStreamWriteSinkStream (the policy that you created in the previous section). d. Choose the AKReadSourceStreamWriteSinkStream policy, and choose Attach policy. You now have created the service execution role that your application uses to access resources. Make a note of the ARN of the new role. For step-by-step instructions for creating a role, see Creating an IAM Role (Console) in the IAM User Guide. Create the application Save the following JSON code to a file named create_request.json. Replace the sample role ARN with the ARN for the role that you created previously. Replace the bucket ARN suffix (username) with the suffix that you chose in the previous section. Replace the sample account ID (012345678901) in the service execution role with your account ID. { "ApplicationName": "getting_started", "ApplicationDescription": |
analytics-java-api-166 | analytics-java-api.pdf | 166 | execution role that your application uses to access resources. Make a note of the ARN of the new role. For step-by-step instructions for creating a role, see Creating an IAM Role (Console) in the IAM User Guide. Create the application Save the following JSON code to a file named create_request.json. Replace the sample role ARN with the ARN for the role that you created previously. Replace the bucket ARN suffix (username) with the suffix that you chose in the previous section. Replace the sample account ID (012345678901) in the service execution role with your account ID. { "ApplicationName": "getting_started", "ApplicationDescription": "Scala getting started application", "RuntimeEnvironment": "FLINK-1_19", "ServiceExecutionRole": "arn:aws:iam::012345678901:role/MF-stream-rw-role", "ApplicationConfiguration": { "ApplicationCodeConfiguration": { "CodeContent": { "S3ContentLocation": { "BucketARN": "arn:aws:s3:::ka-app-code-username", Create the application 549 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "FileKey": "getting-started-scala-1.0.jar" } }, "CodeContentType": "ZIPFILE" }, "EnvironmentProperties": { "PropertyGroups": [ { "PropertyGroupId": "ConsumerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2", "stream.name" : "ExampleInputStream", "flink.stream.initpos" : "LATEST" } }, { "PropertyGroupId": "ProducerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2", "stream.name" : "ExampleOutputStream" } } ] } }, "CloudWatchLoggingOptions": [ { "LogStreamARN": "arn:aws:logs:us-west-2:012345678901:log- group:MyApplication:log-stream:kinesis-analytics-log-stream" } ] } Execute the CreateApplication with the following request to create the application: aws kinesisanalyticsv2 create-application --cli-input-json file://create_request.json The application is now created. You start the application in the next step. Start the application In this section, you use the StartApplication action to start the application. Start the application 550 Managed Service for Apache Flink To start the application Managed Service for Apache Flink Developer Guide 1. Save the following JSON code to a file named start_request.json. { "ApplicationName": "getting_started", "RunConfiguration": { "ApplicationRestoreConfiguration": { "ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT" } } } 2. Execute the StartApplication action with the preceding request to start the application: aws kinesisanalyticsv2 start-application --cli-input-json file://start_request.json The application is now running. You can check the Managed Service for Apache Flink metrics on the Amazon CloudWatch console to verify that the application is working. Stop the application In this section, you use the StopApplication action to stop the application. To stop the application 1. Save the following JSON code to a file named stop_request.json. { "ApplicationName": "s3_sink" } 2. Execute the StopApplication action with the preceding request to stop the application: aws kinesisanalyticsv2 stop-application --cli-input-json file://stop_request.json The application is now stopped. Stop the application 551 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Add a CloudWatch logging option You can use the AWS CLI to add an Amazon CloudWatch log stream to your application. For information about using CloudWatch Logs with your application, see Setting Up Application Logging. Update environment properties In this section, you use the UpdateApplication action to change the environment properties for the application without recompiling the application code. In this example, you change the Region of the source and destination streams. To update environment properties for the application 1. Save the following JSON code to a file named update_properties_request.json. { "ApplicationName": "getting_started", "CurrentApplicationVersionId": 1, "ApplicationConfigurationUpdate": { "EnvironmentPropertyUpdates": { "PropertyGroups": [ { "PropertyGroupId": "ConsumerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2", "stream.name" : "ExampleInputStream", "flink.stream.initpos" : "LATEST" } }, { "PropertyGroupId": "ProducerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2", "stream.name" : "ExampleOutputStream" } } ] } } Add a CloudWatch logging option 552 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 2. Execute the UpdateApplication action with the preceding request to update environment properties: aws kinesisanalyticsv2 update-application --cli-input-json file:// update_properties_request.json Update the application code When you need to update your application code with a new version of your code package, you use the UpdateApplication CLI action. Note To load a new version of the application code with the same file name, you must specify the new object version. For more information about using Amazon S3 object versions, see Enabling or Disabling Versioning. To use the AWS CLI, delete your previous code package from your Amazon S3 bucket, upload the new version, and call UpdateApplication, specifying the same Amazon S3 bucket and object name, and the new object version. The application will restart with the new code package. The following sample request for the UpdateApplication action reloads the application code and restarts the application. Update the CurrentApplicationVersionId to the current application version. You can check the current application version using the ListApplications or DescribeApplication actions. Update the bucket name suffix (<username>) with the suffix that you chose in the Create dependent resources section. {{ "ApplicationName": "getting_started", "CurrentApplicationVersionId": 1, "ApplicationConfigurationUpdate": { "ApplicationCodeConfigurationUpdate": { "CodeContentUpdate": { "S3ContentLocationUpdate": { "BucketARNUpdate": "arn:aws:s3:::ka-app-code-<username>", "FileKeyUpdate": "getting-started-scala-1.0.jar", "ObjectVersionUpdate": "SAMPLEUehYngP87ex1nzYIGYgfhypvDU" } Update the application code 553 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide } } } } Clean up AWS resources This section includes procedures for cleaning up AWS resources created in the Tumbling Window tutorial. This topic contains the following sections: • Delete your Managed Service for Apache |
analytics-java-api-167 | analytics-java-api.pdf | 167 | current application version using the ListApplications or DescribeApplication actions. Update the bucket name suffix (<username>) with the suffix that you chose in the Create dependent resources section. {{ "ApplicationName": "getting_started", "CurrentApplicationVersionId": 1, "ApplicationConfigurationUpdate": { "ApplicationCodeConfigurationUpdate": { "CodeContentUpdate": { "S3ContentLocationUpdate": { "BucketARNUpdate": "arn:aws:s3:::ka-app-code-<username>", "FileKeyUpdate": "getting-started-scala-1.0.jar", "ObjectVersionUpdate": "SAMPLEUehYngP87ex1nzYIGYgfhypvDU" } Update the application code 553 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide } } } } Clean up AWS resources This section includes procedures for cleaning up AWS resources created in the Tumbling Window tutorial. This topic contains the following sections: • Delete your Managed Service for Apache Flink application • Delete your Kinesis data streams • Delete your Amazon S3 object and bucket • Delete your IAM resources • Delete your CloudWatch resources Delete your Managed Service for Apache Flink application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. 3. in the Managed Service for Apache Flink panel, choose MyApplication. In the application's page, choose Delete and then confirm the deletion. Delete your Kinesis data streams 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. 3. 4. In the Kinesis Data Streams panel, choose ExampleInputStream. In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion. In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion. Clean up AWS resources 554 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Delete your Amazon S3 object and bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose the ka-app-code-<username> bucket. 3. Choose Delete and then enter the bucket name to confirm deletion. Delete your IAM resources 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. 3. In the navigation bar, choose Policies. In the filter control, enter kinesis. 4. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy. 5. Choose Policy Actions and then choose Delete. 6. In the navigation bar, choose Roles. 7. Choose the kinesis-analytics-MyApplication-us-west-2 role. 8. Choose Delete role and then confirm the deletion. Delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. 4. Choose Delete Log Group and then confirm the deletion. Delete your Amazon S3 object and bucket 555 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Use Apache Beam with Managed Service for Apache Flink applications Note There is no compatible Apache Flink Runner for Flink 1.20. For more information, see Flink Version Compatibility in the Apache Beam Documentation.> You can use the Apache Beam framework with your Managed Service for Apache Flink application to process streaming data. Managed Service for Apache Flink applications that use Apache Beam use Apache Flink runner to execute Beam pipelines. For a tutorial about how to use Apache Beam in a Managed Service for Apache Flink application, see Use CloudFormation. This topic contains the following sections: • Limitations of Apache Flink runner with Managed Service for Apache Flink • Apache Beam capabilities with Managed Service for Apache Flink • Create an application using Apache Beam Limitations of Apache Flink runner with Managed Service for Apache Flink Note the following about using the Apache Flink runner with Managed Service for Apache Flink: • Apache Beam metrics are not viewable in the Managed Service for Apache Flink console. • Apache Beam is only supported with Managed Service for Apache Flink applications that use Apache Flink version 1.8 and above. Apache Beam is not supported with Managed Service for Apache Flink applications that use Apache Flink version 1.6. Limitations of Apache Flink runner with Managed Service for Apache Flink 556 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Apache Beam capabilities with Managed Service for Apache Flink Managed Service for Apache Flink supports the same Apache Beam capabilties as the Apache Flink runner. For information about what features are supported with the Apache Flink runner, see the Beam Compatibility Matrix. We recommend that you test your Apache Flink application in the Managed Service for Apache Flink service to verify that we support all the features that your application needs. Create an application using Apache Beam In this exercise, you create a Managed Service for Apache Flink application that transforms data using Apache Beam. Apache Beam is a programming model for processing streaming data. For information about using Apache Beam with Managed Service for Apache Flink, see Use Apache Beam with Managed Service for Apache Flink applications. Note To set up required prerequisites for this exercise, first complete the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink exercise. This topic contains the following sections: • Create dependent resources • Write sample records to the input stream • Download and examine the application code • Compile the application code • Upload the Apache |
analytics-java-api-168 | analytics-java-api.pdf | 168 | application that transforms data using Apache Beam. Apache Beam is a programming model for processing streaming data. For information about using Apache Beam with Managed Service for Apache Flink, see Use Apache Beam with Managed Service for Apache Flink applications. Note To set up required prerequisites for this exercise, first complete the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink exercise. This topic contains the following sections: • Create dependent resources • Write sample records to the input stream • Download and examine the application code • Compile the application code • Upload the Apache Flink streaming Java code • Create and run the Managed Service for Apache Flink application • Clean up AWS resources • Next steps Apache Beam capabilities with Managed Service for Apache Flink 557 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Create dependent resources Before you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources: • Two Kinesis data streams (ExampleInputStream and ExampleOutputStream) • An Amazon S3 bucket to store the application's code (ka-app-code-<username>) You can create the Kinesis streams and Amazon S3 bucket using the console. For instructions for creating these resources, see the following topics: • Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name your data streams ExampleInputStream and ExampleOutputStream. • How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app- code-<username>. Write sample records to the input stream In this section, you use a Python script to write random strings to the stream for the application to process. Note This section requires the AWS SDK for Python (Boto). 1. Create a file named ping.py with the following contents: import json import boto3 import random kinesis = boto3.client('kinesis') while True: data = random.choice(['ping', 'telnet', 'ftp', 'tracert', 'netstat']) print(data) kinesis.put_record( Create dependent resources 558 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide StreamName="ExampleInputStream", Data=data, PartitionKey="partitionkey") 2. Run the ping.py script: $ python ping.py Keep the script running while completing the rest of the tutorial. Download and examine the application code The Java application code for this example is available from GitHub. To download the application code, do the following: 1. Install the Git client if you haven't already. For more information, see Installing Git. 2. Clone the remote repository with the following command: git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git 3. Navigate to the amazon-kinesis-data-analytics-java-examples/Beam directory. The application code is located in the BasicBeamStreamingJob.java file. Note the following about the application code: • The application uses the Apache Beam ParDo to process incoming records by invoking a custom transform function called PingPongFn. The code to invoke the PingPongFn function is as follows: .apply("Pong transform", ParDo.of(new PingPongFn()) • Managed Service for Apache Flink applications that use Apache Beam require the following components. If you don't include these components and versions in your pom.xml, your application loads the incorrect versions from the environment dependencies, and since the versions do not match, your application crashes at runtime. <jackson.version>2.10.2</jackson.version> Download and examine the application code 559 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide ... <dependency> <groupId>com.fasterxml.jackson.module</groupId> <artifactId>jackson-module-jaxb-annotations</artifactId> <version>2.10.2</version> </dependency> • The PingPongFn transform function passes the input data into the output stream, unless the input data is ping, in which case it emits the string pong\n to the output stream. The code of the transform function is as follows: private static class PingPongFn extends DoFn<KinesisRecord, byte[]> { private static final Logger LOG = LoggerFactory.getLogger(PingPongFn.class); @ProcessElement public void processElement(ProcessContext c) { String content = new String(c.element().getDataAsBytes(), StandardCharsets.UTF_8); if (content.trim().equalsIgnoreCase("ping")) { LOG.info("Ponged!"); c.output("pong\n".getBytes(StandardCharsets.UTF_8)); } else { LOG.info("No action for: " + content); c.output(c.element().getDataAsBytes()); } } } Compile the application code To compile the application, do the following: 1. Install Java and Maven if you haven't already. For more information, see Complete the required prerequisites in the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial. 2. Compile the application with the following command: mvn package -Dflink.version=1.15.2 -Dflink.version.minor=1.8 Compile the application code 560 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note The provided source code relies on libraries from Java 11. Compiling the application creates the application JAR file (target/basic-beam-app-1.0.jar). Upload the Apache Flink streaming Java code In this section, you upload your application code to the Amazon S3 bucket you created in the Create dependent resources section. 1. 2. In the Amazon S3 console, choose the ka-app-code-<username> bucket, and choose Upload. In the Select files step, choose Add files. Navigate to the basic-beam-app-1.0.jar file that you created in the previous step. 3. You don't need to change any of the |
analytics-java-api-169 | analytics-java-api.pdf | 169 | Apache Flink Managed Service for Apache Flink Developer Guide Note The provided source code relies on libraries from Java 11. Compiling the application creates the application JAR file (target/basic-beam-app-1.0.jar). Upload the Apache Flink streaming Java code In this section, you upload your application code to the Amazon S3 bucket you created in the Create dependent resources section. 1. 2. In the Amazon S3 console, choose the ka-app-code-<username> bucket, and choose Upload. In the Select files step, choose Add files. Navigate to the basic-beam-app-1.0.jar file that you created in the previous step. 3. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the Managed Service for Apache Flink application Follow these steps to create, configure, update, and run the application using the console. Create the Application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. On the Managed Service for Apache Flink dashboard, choose Create analytics application. 3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: • For Application name, enter MyApplication. • For Runtime, choose Apache Flink. Note Apache Beam is not presently compatible with Apache Flink version 1.19 or later. Upload the Apache Flink streaming Java code 561 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Select Apache Flink version 1.15 from the version pulldown. 4. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-west-2. 5. Choose Create application. Note When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: • Policy: kinesis-analytics-service-MyApplication-us-west-2 • Role: kinesis-analytics-MyApplication-us-west-2 Edit the IAM policy Edit the IAM policy to add permissions to access the Kinesis data streams. 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section. 3. On the Summary page, choose Edit policy. Choose the JSON tab. 4. Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID. { "Version": "2012-10-17", "Statement": [ { "Sid": "ReadCode", "Effect": "Allow", "Action": [ "s3:GetObject", "logs:DescribeLogGroups", "s3:GetObjectVersion" ], Create and run the Managed Service for Apache Flink application 562 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*", "arn:aws:s3:::ka-app-code-<username>/basic-beam-app-1.0.jar" ] }, { "Sid": "DescribeLogStreams", "Effect": "Allow", "Action": "logs:DescribeLogStreams", "Resource": "arn:aws:logs:us-west-2:012345678901:log-group:/aws/ kinesis-analytics/MyApplication:log-stream:*" }, { "Sid": "PutLogEvents", "Effect": "Allow", "Action": "logs:PutLogEvents", "Resource": "arn:aws:logs:us-west-2:012345678901:log-group:/aws/ kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream" }, { "Sid": "ListCloudwatchLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleOutputStream" } ] Create and run the Managed Service for Apache Flink application 563 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide } Configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter basic-beam-app-1.0.jar. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Enter the following: Group ID Key Value BeamApplicationPro InputStreamName ExampleInputStream perties BeamApplicationPro OutputStreamName ExampleOutputStream perties BeamApplicationPro AwsRegion us-west-2 perties 5. Under Monitoring, ensure that the Monitoring metrics level is set to Application. 6. For CloudWatch logging, select the Enable check box. 7. Choose Update. Note When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication • Log stream: kinesis-analytics-log-stream Create and run the Managed Service for Apache Flink application 564 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide This log stream is used to monitor the application. This is not the same log stream that the application uses to send results. Run the application The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job. You can check the Managed Service for Apache Flink metrics on the CloudWatch console to verify that the application is working. Clean up AWS resources This section includes procedures for cleaning up AWS resources created in the Tumbling Window tutorial. This topic contains the following sections: • Delete your Managed Service for Apache Flink application • Delete your Kinesis data streams • Delete your Amazon |
analytics-java-api-170 | analytics-java-api.pdf | 170 | same log stream that the application uses to send results. Run the application The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job. You can check the Managed Service for Apache Flink metrics on the CloudWatch console to verify that the application is working. Clean up AWS resources This section includes procedures for cleaning up AWS resources created in the Tumbling Window tutorial. This topic contains the following sections: • Delete your Managed Service for Apache Flink application • Delete your Kinesis data streams • Delete your Amazon S3 object and bucket • Delete your IAM resources • Delete your CloudWatch resources Delete your Managed Service for Apache Flink application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. 3. in the Managed Service for Apache Flink panel, choose MyApplication. In the application's page, choose Delete and then confirm the deletion. Delete your Kinesis data streams 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. In the Kinesis Data Streams panel, choose ExampleInputStream. Clean Up 565 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 3. 4. In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion. In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion. Delete your Amazon S3 object and bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose the ka-app-code-<username> bucket. 3. Choose Delete and then enter the bucket name to confirm deletion. Delete your IAM resources 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. 3. In the navigation bar, choose Policies. In the filter control, enter kinesis. 4. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy. 5. Choose Policy Actions and then choose Delete. 6. In the navigation bar, choose Roles. 7. Choose the kinesis-analytics-MyApplication-us-west-2 role. 8. Choose Delete role and then confirm the deletion. Delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. 4. Choose Delete Log Group and then confirm the deletion. Clean Up 566 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Next steps Now that you've created and run a basic Managed Service for Apache Flink application that transforms data using Apache Beam, see the following application for an example of a more advanced Managed Service for Apache Flink solution. • Beam on Managed Service for Apache Flink Streaming Workshop: In this workshop, we explore an end to end example that combines batch and streaming aspects in one uniform Apache Beam pipeline. Next steps 567 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Training workshops, labs, and solution implementations The following end-to-end examples demonstrate advanced Managed Service for Apache Flink solutions. Topics • Deploy, operate, and scale applications with Amazon Managed Service for Apache Flink • Develop Apache Flink applications locally before deploying to Managed Service for Apache Flink • Use event detection with Managed Service for Apache Flink Studio • Use the AWS Streaming data solution for Amazon Kinesis • Practice using a Clickstream lab with Apache Flink and Apache Kafka • Set up custom scaling using Application Auto Scaling • View a sample Amazon CloudWatch dashboard • Use templates for AWS Streaming data solution for Amazon MSK • Explore more Managed Service for Apache Flink solutions on GitHub Deploy, operate, and scale applications with Amazon Managed Service for Apache Flink This workshop covers the development an Apache Flink application in Java, how to run and debug in your IDE, and how to package, deploy and run on Amazon Managed Service for Apache Flink. You will also learn how to scale, monitor, and troubleshoot your application. Amazon Managed Service for Apache Flink workshop. Develop Apache Flink applications locally before deploying to Managed Service for Apache Flink This workshop demonstrates the basics of getting up and started developing Apache Flink applications locally with the long-term goal of deploying to Managed Service for Apache Flink for Apache Flink. Starters Guide to Local Development with Apache Flink Managed Service for Apache Flink workshop 568 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Use event detection with Managed Service for Apache Flink Studio This workshop describes event detection with Managed Service for Apache Flink Studio and deploying it as a Managed Service for Apache Flink application Event Detection with Managed Service for Apache Flink for Apache Flink Use the AWS Streaming data solution for Amazon Kinesis The AWS Streaming Data Solution for Amazon Kinesis automatically configures the AWS services necessary to capture, store, process, and deliver streaming data. The solution provides multiple options for solving streaming data use cases. The Managed Service for Apache Flink option provides an end-to-end streaming ETL example |
analytics-java-api-171 | analytics-java-api.pdf | 171 | Guide Use event detection with Managed Service for Apache Flink Studio This workshop describes event detection with Managed Service for Apache Flink Studio and deploying it as a Managed Service for Apache Flink application Event Detection with Managed Service for Apache Flink for Apache Flink Use the AWS Streaming data solution for Amazon Kinesis The AWS Streaming Data Solution for Amazon Kinesis automatically configures the AWS services necessary to capture, store, process, and deliver streaming data. The solution provides multiple options for solving streaming data use cases. The Managed Service for Apache Flink option provides an end-to-end streaming ETL example demonstrating a real-world application that runs analytical operations on simulated New York taxi data. Each solution includes the following components: • An AWS CloudFormation package to deploy the complete example. • A CloudWatch dashboard for displaying application metrics. • CloudWatch alarms on the most relevant application metrics. • All necessary IAM roles and policies. Streaming Data Solution for Amazon Kinesis Practice using a Clickstream lab with Apache Flink and Apache Kafka An end to end lab for clickstream use cases using Amazon Managed Streaming for Apache Kafka for streaming storage and Managed Service for Apache Flink for Apache Flink applications for stream processing. Clickstream Lab Event detection with Managed Service for Apache Flink Studio 569 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Set up custom scaling using Application Auto Scaling Two samples that show you how to automatically scale your Managed Service for Apache Flink applications using Application Auto Scaling. This lets you set up custom scaling policies and custom scaling attributes. • Managed Service for Apache Flink App Autoscaling • Scheduled Scaling For more information on you can perform custom scaling, see Enable metric-based and scheduled scaling for Amazon Managed Service for Apache Flink. View a sample Amazon CloudWatch dashboard A sample CloudWatch dashboard for monitoring Managed Service for Apache Flink applications. The sample dashboard also includes a demo application to help with demonstrating the functionality of the dashboard. Managed Service for Apache Flink Metrics Dashboard Use templates for AWS Streaming data solution for Amazon MSK The AWS Streaming Data Solution for Amazon MSK provides AWS CloudFormation templates where data flows through producers, streaming storage, consumers, and destinations. AWS Streaming Data Solution for Amazon MSK Explore more Managed Service for Apache Flink solutions on GitHub The following end-to-end examples demonstrate advanced Managed Service for Apache Flink solutions and are available on GitHub: • Amazon Managed Service for Apache Flink Flink – Benchmarking Utility • Snapshot Manager – Amazon Managed Service for Apache Flink for Apache Flink Set up custom scaling using Application Auto Scaling 570 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Streaming ETL with Apache Flink and Amazon Managed Service for Apache Flink • Real-time sentiment analysis on customer feedback Explore more Managed Service for Apache Flink solutions on GitHub 571 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Use practical utilities for Managed Service for Apache Flink The following utilities can make using the Managed Service for Apache Flink service easier to use: Topics • Snapshot manager • Benchmarking Snapshot manager It's a best practice for Flink Applications to regularly initiate savepoints/snapshots to allow for more seamless failure recovery. Snapshot manager automates this task and offers the following benefits: • takes a new snapshot of a running Managed Service for Apache Flink for Apache Flink Application • gets a count of application snapshots • checks if the count is more than the required number of snapshots • deletes older snapshots that are older than the required number For an example, see Snapshot manager. Benchmarking Managed Service for Apache Flink Flink Benchmarking Utility helps with capacity planning, integration testing, and benchmarking of Managed Service for Apache Flink for Apache Flink applications. For an example, see Benchmarking Snapshot manager 572 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Examples for creating and working with Managed Service for Apache Flink applications This section provides examples of creating and working with applications in Managed Service for Apache Flink. They include example code and step-by-step instructions to help you create Managed Service for Apache Flink applications and test your results. Before you explore these examples, we recommend that you first review the following: • How it works • Tutorial: Get started using the DataStream API in Managed Service for Apache Flink Note These examples assume that you are using the US East (N. Virginia) Region (us-east-1). If you are using a different Region, update your application code, commands, and IAM roles appropriately. Topics • Java examples for Managed Service for Apache Flink • Python examples for Managed Service for Apache Flink • Scala examples for Managed Service for Apache Flink Java examples for |
analytics-java-api-172 | analytics-java-api.pdf | 172 | Apache Flink applications and test your results. Before you explore these examples, we recommend that you first review the following: • How it works • Tutorial: Get started using the DataStream API in Managed Service for Apache Flink Note These examples assume that you are using the US East (N. Virginia) Region (us-east-1). If you are using a different Region, update your application code, commands, and IAM roles appropriately. Topics • Java examples for Managed Service for Apache Flink • Python examples for Managed Service for Apache Flink • Scala examples for Managed Service for Apache Flink Java examples for Managed Service for Apache Flink The following examples demonstrate how to create applications written in Java. Note Most of the examples are designed to run both locally, on your development machine and your IDE of choice, and on Amazon Managed Service for Apache Flink. They demonstrate the mechanisms that you can use to pass application parameters, and how to set the dependency correctly to run the application in both environments with no changes. Java examples for Managed Service for Apache Flink 573 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Improve serialization performance defining custom TypeInfo This example illustrates how to define custom TypeInfo on your record or state object to prevent serialization falling back to the less efficient Kryo serialization. This is required, for example, when your objects contain a List or Map. For more information, see Data Types & Serialization in the Apache Flink documentation. The example also shows how to test whether the serialization of your object falls back to the less efficient Kryo serialization. Code example: CustomTypeInfo Get started with the DataStream API This example shows a simple application, reading from a Kinesis data stream and writing to another Kinesis data stream, using the DataStream API. The example demonstrates how to set up the file with the correct dependencies, build the uber-JAR, and then parse the configuration parameters, so you can run the application both locally, in your IDE, and on Amazon Managed Service for Apache Flink. Code example: GettingStarted Get started with the Table API and SQL This example shows a simple application using the Table API and SQL. It demonstrates how to integrate the DataStream API with the Table API or SQL in the same Java application. It also demonstrates how to use the DataGen connector to generate random test data from within the Flink application itself, not requiring an external data generator. Complete example: GettingStartedTable Use S3Sink (DataStream API) This example demonstrates how to use the DataStream API's FileSink to write JSON files to an S3 bucket. Code example: S3Sink Use a Kinesis source, standard or EFO consumers, and sink (DataStream API) This example demonstrates how to configure a source consuming from a Kinesis data stream, either using the standard consumer or EFO, and how to set up a sink to the Kinesis data stream. Java examples for Managed Service for Apache Flink 574 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Code example: KinesisConnectors Use an Amazon Data Firehose sink (DataStream API) This example shows how to send data to Amazon Data Firehose (formerly known as Kinesis Data Firehose). Code example: KinesisFirehoseSink Use the Prometheus sink connector This example demonstrates the use of the Prometheus sink connector to write time-series data to Prometheus. Code example: PrometheusSink Use windowing aggregations (DataStream API) This example demonstrates four types of the windowing aggregation in the DataStream API. 1. Sliding Window based on processing time 2. Sliding Window based on event time 3. Tumbling Window based on processing time 4. Tumbling Window based on event time Code example: Windowing Use custom metrics This example shows how to add custom metrics to your Flink application and send them to CloudWatch metrics. Code example: CustomMetrics Use Kafka Configuration Providers to fetch custom keystore and truststore for mTLS at runtime This example illustrates how you can use Kafka Configuration Providers to set up a custom keystore and truststore with certificates for mTLS authentication for the Kafka connector. This technique Java examples for Managed Service for Apache Flink 575 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide lets you load the required custom certificates from Amazon S3 and the secrets from AWS Secrets Manager when the application starts. Code example: Kafka-mTLS-Keystore-ConfigProviders Use Kafka Configuration Providers to fetch secrets for SASL/SCRAM authentication at runtime This example illustrates how you can use Kafka Configuration Providers to fetch credentials from AWS Secrets Manager and download the truststore from Amazon S3 to set up SASL/SCRAM authentication on a Kafka connector. This technique lets you load the required custom certificates from Amazon S3 and the secrets from AWS Secrets Manager when the application starts. Code example: Kafka-SASL_SSL-ConfigProviders Use Kafka Configuration Providers to fetch |
analytics-java-api-173 | analytics-java-api.pdf | 173 | you load the required custom certificates from Amazon S3 and the secrets from AWS Secrets Manager when the application starts. Code example: Kafka-mTLS-Keystore-ConfigProviders Use Kafka Configuration Providers to fetch secrets for SASL/SCRAM authentication at runtime This example illustrates how you can use Kafka Configuration Providers to fetch credentials from AWS Secrets Manager and download the truststore from Amazon S3 to set up SASL/SCRAM authentication on a Kafka connector. This technique lets you load the required custom certificates from Amazon S3 and the secrets from AWS Secrets Manager when the application starts. Code example: Kafka-SASL_SSL-ConfigProviders Use Kafka Configuration Providers to fetch custom keystore and truststore for mTLS at runtime with Table API/SQL This example illustrates how you can use Kafka Configuration Providers in Table API /SQL to set up a custom keystore and truststore with certificates for mTLS authentication for the Kafka connector. This technique lets you load the required custom certificates from Amazon S3 and the secrets from AWS Secrets Manager when the application starts. Code example: Kafka-mTLS-Keystore-Sql-ConfigProviders Use Side Outputs to split a stream This example illustrates how to leverage Side Outputs in Apache Flink for splitting a stream on specified attributes. This pattern is particularly useful when trying to implement the concept of Dead Letter Queues (DLQ) in streaming applications. Code example: SideOutputs Use Async I/O to call an external endpoint This example illustrates how to use Apache Flink Async I/O to call an external endpoint in a non- blocking way, with retries on recoverable errors. Code example: AsyncIO Java examples for Managed Service for Apache Flink 576 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Python examples for Managed Service for Apache Flink The following examples demonstrate how to create applications written in Python. Note Most of the examples are designed to run both locally, on your development machine and your IDE of choice, and on Amazon Managed Service for Apache Flink. They demonstrate the simple mechanism that you can use to pass application parameters, and how to set the dependency correctly to run the application in both environments with no changes. Project dependencies Most PyFlink examples require one or more dependencies as JAR files, for example for Flink connectors. These dependencies must then be packaged with the application when deployed on Amazon Managed Service for Apache Flink. The following examples already include the tooling that lets you run the application locally for development and testing, and to package the required dependencies correctly. This tooling requires using Java JDK11 and Apache Maven. Refer to the README contained in each example for the specific instructions. Examples Get started with PyFlink This example demonstrates the basic structure of a PyFlink application using SQL embedded in Python code. This project also provides a skeleton for any PyFlink application that includes JAR dependencies such as connectors. The README section provides detailed guidance about how to run your Python application locally for development. The example also shows how to include a single JAR dependency, the Kinesis SQL connector in this example, in your PyFlink application. Code example: GettingStarted Add Python dependencies This example shows how to add Python dependencies to your PyFlink application in the most general way. This method works for simple dependencies, like Boto3, or complex dependencies containing C libraries such as PyArrow. Python examples for Managed Service for Apache Flink 577 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Code example: PythonDependencies Use windowing aggregations (DataStream API) This example demonstrates four types of the windowing aggregation in SQL embedded in a Python application. 1. Sliding Window based on processing time 2. Sliding Window based on event time 3. Tumbling Window based on processing time 4. Tumbling Window based on event time Code example: Windowing Use an S3 sink This example shows how to write your output to Amazon S3 as JSON files, using SQL embedded in a Python application. You must enable checkpointing for the S3 sink to write and rotate files to Amazon S3. Code example: S3Sink Use a User Defined Function (UDF) This example demonstrates how to define a User Defined Function, implement it in Python, and use it in SQL code that runs in a Python application. Code example: UDF Use an Amazon Data Firehose sink This example demonstrates how to send data to Amazon Data Firehose using SQL. Code example: FirehoseSink Scala examples for Managed Service for Apache Flink The following examples demonstrate how to create applications using Scala with Apache Flink. Scala examples for Managed Service for Apache Flink 578 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Set up a multi-step application This example shows how to set up a Flink application in Scala. It demonstrates how to configure the SBT project to include dependencies and build the uber-JAR. Code example: |
analytics-java-api-174 | analytics-java-api.pdf | 174 | example: UDF Use an Amazon Data Firehose sink This example demonstrates how to send data to Amazon Data Firehose using SQL. Code example: FirehoseSink Scala examples for Managed Service for Apache Flink The following examples demonstrate how to create applications using Scala with Apache Flink. Scala examples for Managed Service for Apache Flink 578 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Set up a multi-step application This example shows how to set up a Flink application in Scala. It demonstrates how to configure the SBT project to include dependencies and build the uber-JAR. Code example: GettingStarted Scala examples for Managed Service for Apache Flink 579 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Security in Amazon Managed Service for Apache Flink Cloud security at AWS is the highest priority. As an AWS customer, you will benefit from a data center and network architecture built to meet the requirements of the most security-sensitive organizations. Security is a shared responsibility between AWS and you. The shared responsibility model describes this as security of the cloud and security in the cloud: • Security of the cloud – AWS is responsible for protecting the infrastructure that runs AWS services in the AWS Cloud. AWS also provides you with services that you can use securely. The effectiveness of our security is regularly tested and verified by third-party auditors as part of the AWS compliance programs. To learn about the compliance programs that apply to Managed Service for Apache Flink, see AWS Services in Scope by Compliance Program. • Security in the cloud – Your responsibility is determined by the AWS service that you use. You are also responsible for other factors including the sensitivity of your data, your organization’s requirements, and applicable laws and regulations. This documentation helps you understand how to apply the shared responsibility model when using Managed Service for Apache Flink. The following topics show you how to configure Managed Service for Apache Flink to meet your security and compliance objectives. You'll also learn how to use other Amazon services that can help you to monitor and secure your Managed Service for Apache Flink resources. Topics • Data protection in Amazon Managed Service for Apache Flink • Identity and Access Management for Amazon Managed Service for Apache Flink • Compliance validation for Amazon Managed Service for Apache Flink • Resilience in Amazon Managed Service for Apache Flink • Infrastructure security in Managed Service for Apache Flink • Security best practices for Managed Service for Apache Flink 580 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Data protection in Amazon Managed Service for Apache Flink You can protect your data using tools that are provided by AWS. Managed Service for Apache Flink can work with services that support encrypting data, including Firehose, and Amazon S3. Data encryption in Managed Service for Apache Flink Encryption at rest Note the following about encrypting data at rest with Managed Service for Apache Flink: • You can encrypt data on the incoming Kinesis data stream using StartStreamEncryption. For more information, see What Is Server-Side Encryption for Kinesis Data Streams?. • Output data can be encrypted at rest using Firehose to store data in an encrypted Amazon S3 bucket. You can specify the encryption key that your Amazon S3 bucket uses. For more information, see Protecting Data Using Server-Side Encryption with KMS–Managed Keys (SSE- KMS). • Managed Service for Apache Flink can read from any streaming source, and write to any streaming or database destination. Ensure that your sources and destinations encrypt all data in transit and data at rest. • Your application's code is encrypted at rest. • Durable application storage is encrypted at rest. • Running application storage is encrypted at rest. Encryption in transit Managed Service for Apache Flink encrypts all data in transit. Encryption in transit is enabled for all Managed Service for Apache Flink applications and cannot be disabled. Managed Service for Apache Flink encrypts data in transit in the following scenarios: • Data in transit from Kinesis Data Streams to Managed Service for Apache Flink. • Data in transit between internal components within Managed Service for Apache Flink. • Data in transit between Managed Service for Apache Flink and Firehose. Data protection 581 Managed Service for Apache Flink Key management Managed Service for Apache Flink Developer Guide Data encryption in Managed Service for Apache Flink uses service-managed keys. Customer managed keys are not supported. Identity and Access Management for Amazon Managed Service for Apache Flink AWS Identity and Access Management (IAM) is an AWS service that helps an administrator securely control access to AWS resources. IAM administrators control who can be authenticated (signed in) and authorized (have permissions) to use Managed Service for Apache Flink resources. IAM is an |
analytics-java-api-175 | analytics-java-api.pdf | 175 | Data in transit between Managed Service for Apache Flink and Firehose. Data protection 581 Managed Service for Apache Flink Key management Managed Service for Apache Flink Developer Guide Data encryption in Managed Service for Apache Flink uses service-managed keys. Customer managed keys are not supported. Identity and Access Management for Amazon Managed Service for Apache Flink AWS Identity and Access Management (IAM) is an AWS service that helps an administrator securely control access to AWS resources. IAM administrators control who can be authenticated (signed in) and authorized (have permissions) to use Managed Service for Apache Flink resources. IAM is an AWS service that you can use with no additional charge. Topics • Audience • Authenticating with identities • Managing access using policies • How Amazon Managed Service for Apache Flink works with IAM • Identity-based policy examples for Amazon Managed Service for Apache Flink • Troubleshooting Amazon Managed Service for Apache Flink identity and access • Cross-service confused deputy prevention Audience How you use AWS Identity and Access Management (IAM) differs, depending on the work that you do in Managed Service for Apache Flink. Service user – If you use the Managed Service for Apache Flink service to do your job, then your administrator provides you with the credentials and permissions that you need. As you use more Managed Service for Apache Flink features to do your work, you might need additional permissions. Understanding how access is managed can help you request the right permissions from your administrator. If you cannot access a feature in Managed Service for Apache Flink, see Troubleshooting Amazon Managed Service for Apache Flink identity and access. Identity and Access Management for Managed Service for Apache Flink 582 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Service administrator – If you're in charge of Managed Service for Apache Flink resources at your company, you probably have full access to Managed Service for Apache Flink. It's your job to determine which Managed Service for Apache Flink features and resources your service users should access. You must then submit requests to your IAM administrator to change the permissions of your service users. Review the information on this page to understand the basic concepts of IAM. To learn more about how your company can use IAM with Managed Service for Apache Flink, see How Amazon Managed Service for Apache Flink works with IAM. IAM administrator – If you're an IAM administrator, you might want to learn details about how you can write policies to manage access to Managed Service for Apache Flink. To view example Managed Service for Apache Flink identity-based policies that you can use in IAM, see Identity- based policy examples for Amazon Managed Service for Apache Flink. Authenticating with identities Authentication is how you sign in to AWS using your identity credentials. You must be authenticated (signed in to AWS) as the AWS account root user, as an IAM user, or by assuming an IAM role. You can sign in to AWS as a federated identity by using credentials provided through an identity source. AWS IAM Identity Center (IAM Identity Center) users, your company's single sign-on authentication, and your Google or Facebook credentials are examples of federated identities. When you sign in as a federated identity, your administrator previously set up identity federation using IAM roles. When you access AWS by using federation, you are indirectly assuming a role. Depending on the type of user you are, you can sign in to the AWS Management Console or the AWS access portal. For more information about signing in to AWS, see How to sign in to your AWS account in the AWS Sign-In User Guide. If you access AWS programmatically, AWS provides a software development kit (SDK) and a command line interface (CLI) to cryptographically sign your requests by using your credentials. If you don't use AWS tools, you must sign requests yourself. For more information about using the recommended method to sign requests yourself, see AWS Signature Version 4 for API requests in the IAM User Guide. Regardless of the authentication method that you use, you might be required to provide additional security information. For example, AWS recommends that you use multi-factor authentication (MFA) to increase the security of your account. To learn more, see Multi-factor authentication in Authenticating with identities 583 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide the AWS IAM Identity Center User Guide and AWS Multi-factor authentication in IAM in the IAM User Guide. AWS account root user When you create an AWS account, you begin with one sign-in identity that has complete access to all AWS services and resources in the account. This identity is called the AWS account root user and is accessed by signing in with the email address and password |
analytics-java-api-176 | analytics-java-api.pdf | 176 | authentication (MFA) to increase the security of your account. To learn more, see Multi-factor authentication in Authenticating with identities 583 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide the AWS IAM Identity Center User Guide and AWS Multi-factor authentication in IAM in the IAM User Guide. AWS account root user When you create an AWS account, you begin with one sign-in identity that has complete access to all AWS services and resources in the account. This identity is called the AWS account root user and is accessed by signing in with the email address and password that you used to create the account. We strongly recommend that you don't use the root user for your everyday tasks. Safeguard your root user credentials and use them to perform the tasks that only the root user can perform. For the complete list of tasks that require you to sign in as the root user, see Tasks that require root user credentials in the IAM User Guide. Federated identity As a best practice, require human users, including users that require administrator access, to use federation with an identity provider to access AWS services by using temporary credentials. A federated identity is a user from your enterprise user directory, a web identity provider, the AWS Directory Service, the Identity Center directory, or any user that accesses AWS services by using credentials provided through an identity source. When federated identities access AWS accounts, they assume roles, and the roles provide temporary credentials. For centralized access management, we recommend that you use AWS IAM Identity Center. You can create users and groups in IAM Identity Center, or you can connect and synchronize to a set of users and groups in your own identity source for use across all your AWS accounts and applications. For information about IAM Identity Center, see What is IAM Identity Center? in the AWS IAM Identity Center User Guide. IAM users and groups An IAM user is an identity within your AWS account that has specific permissions for a single person or application. Where possible, we recommend relying on temporary credentials instead of creating IAM users who have long-term credentials such as passwords and access keys. However, if you have specific use cases that require long-term credentials with IAM users, we recommend that you rotate access keys. For more information, see Rotate access keys regularly for use cases that require long- term credentials in the IAM User Guide. An IAM group is an identity that specifies a collection of IAM users. You can't sign in as a group. You can use groups to specify permissions for multiple users at a time. Groups make permissions easier Authenticating with identities 584 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide to manage for large sets of users. For example, you could have a group named IAMAdmins and give that group permissions to administer IAM resources. Users are different from roles. A user is uniquely associated with one person or application, but a role is intended to be assumable by anyone who needs it. Users have permanent long-term credentials, but roles provide temporary credentials. To learn more, see Use cases for IAM users in the IAM User Guide. IAM roles An IAM role is an identity within your AWS account that has specific permissions. It is similar to an IAM user, but is not associated with a specific person. To temporarily assume an IAM role in the AWS Management Console, you can switch from a user to an IAM role (console). You can assume a role by calling an AWS CLI or AWS API operation or by using a custom URL. For more information about methods for using roles, see Methods to assume a role in the IAM User Guide. IAM roles with temporary credentials are useful in the following situations: • Federated user access – To assign permissions to a federated identity, you create a role and define permissions for the role. When a federated identity authenticates, the identity is associated with the role and is granted the permissions that are defined by the role. For information about roles for federation, see Create a role for a third-party identity provider (federation) in the IAM User Guide. If you use IAM Identity Center, you configure a permission set. To control what your identities can access after they authenticate, IAM Identity Center correlates the permission set to a role in IAM. For information about permissions sets, see Permission sets in the AWS IAM Identity Center User Guide. • Temporary IAM user permissions – An IAM user or role can assume an IAM role to temporarily take on different permissions for a specific task. • Cross-account access – You can use an IAM role to allow someone (a trusted principal) |
analytics-java-api-177 | analytics-java-api.pdf | 177 | third-party identity provider (federation) in the IAM User Guide. If you use IAM Identity Center, you configure a permission set. To control what your identities can access after they authenticate, IAM Identity Center correlates the permission set to a role in IAM. For information about permissions sets, see Permission sets in the AWS IAM Identity Center User Guide. • Temporary IAM user permissions – An IAM user or role can assume an IAM role to temporarily take on different permissions for a specific task. • Cross-account access – You can use an IAM role to allow someone (a trusted principal) in a different account to access resources in your account. Roles are the primary way to grant cross- account access. However, with some AWS services, you can attach a policy directly to a resource (instead of using a role as a proxy). To learn the difference between roles and resource-based policies for cross-account access, see Cross account resource access in IAM in the IAM User Guide. • Cross-service access – Some AWS services use features in other AWS services. For example, when you make a call in a service, it's common for that service to run applications in Amazon EC2 or store objects in Amazon S3. A service might do this using the calling principal's permissions, using a service role, or using a service-linked role. Authenticating with identities 585 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Forward access sessions (FAS) – When you use an IAM user or role to perform actions in AWS, you are considered a principal. When you use some services, you might perform an action that then initiates another action in a different service. FAS uses the permissions of the principal calling an AWS service, combined with the requesting AWS service to make requests to downstream services. FAS requests are only made when a service receives a request that requires interactions with other AWS services or resources to complete. In this case, you must have permissions to perform both actions. For policy details when making FAS requests, see Forward access sessions. • Service role – A service role is an IAM role that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see Create a role to delegate permissions to an AWS service in the IAM User Guide. • Service-linked role – A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit the permissions for service-linked roles. • Applications running on Amazon EC2 – You can use an IAM role to manage temporary credentials for applications that are running on an EC2 instance and making AWS CLI or AWS API requests. This is preferable to storing access keys within the EC2 instance. To assign an AWS role to an EC2 instance and make it available to all of its applications, you create an instance profile that is attached to the instance. An instance profile contains the role and enables programs that are running on the EC2 instance to get temporary credentials. For more information, see Use an IAM role to grant permissions to applications running on Amazon EC2 instances in the IAM User Guide. Managing access using policies You control access in AWS by creating policies and attaching them to AWS identities or resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when a principal (user, root user, or role session) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. For more information about the structure and contents of JSON policy documents, see Overview of JSON policies in the IAM User Guide. Administrators can use AWS JSON policies to specify who has access to what. That is, which principal can perform actions on what resources, and under what conditions. Managing access using policies 586 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide By default, users and roles have no permissions. To grant users permission to perform actions on the resources that they need, an IAM administrator can create IAM policies. The administrator can then add the IAM policies to roles, and users can assume the roles. IAM policies define permissions for an action regardless of the method that you use to perform the operation. For example, suppose that you have a policy that allows the iam:GetRole action. A user with |
analytics-java-api-178 | analytics-java-api.pdf | 178 | resources, and under what conditions. Managing access using policies 586 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide By default, users and roles have no permissions. To grant users permission to perform actions on the resources that they need, an IAM administrator can create IAM policies. The administrator can then add the IAM policies to roles, and users can assume the roles. IAM policies define permissions for an action regardless of the method that you use to perform the operation. For example, suppose that you have a policy that allows the iam:GetRole action. A user with that policy can get role information from the AWS Management Console, the AWS CLI, or the AWS API. Identity-based policies Identity-based policies are JSON permissions policy documents that you can attach to an identity, such as an IAM user, group of users, or role. These policies control what actions users and roles can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see Define custom IAM permissions with customer managed policies in the IAM User Guide. Identity-based policies can be further categorized as inline policies or managed policies. Inline policies are embedded directly into a single user, group, or role. Managed policies are standalone policies that you can attach to multiple users, groups, and roles in your AWS account. Managed policies include AWS managed policies and customer managed policies. To learn how to choose between a managed policy or an inline policy, see Choose between managed policies and inline policies in the IAM User Guide. Resource-based policies Resource-based policies are JSON policy documents that you attach to a resource. Examples of resource-based policies are IAM role trust policies and Amazon S3 bucket policies. In services that support resource-based policies, service administrators can use them to control access to a specific resource. For the resource where the policy is attached, the policy defines what actions a specified principal can perform on that resource and under what conditions. You must specify a principal in a resource-based policy. Principals can include accounts, users, roles, federated users, or AWS services. Resource-based policies are inline policies that are located in that service. You can't use AWS managed policies from IAM in a resource-based policy. Managing access using policies 587 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Access control lists (ACLs) Access control lists (ACLs) control which principals (account members, users, or roles) have permissions to access a resource. ACLs are similar to resource-based policies, although they do not use the JSON policy document format. Amazon S3, AWS WAF, and Amazon VPC are examples of services that support ACLs. To learn more about ACLs, see Access control list (ACL) overview in the Amazon Simple Storage Service Developer Guide. Other policy types AWS supports additional, less-common policy types. These policy types can set the maximum permissions granted to you by the more common policy types. • Permissions boundaries – A permissions boundary is an advanced feature in which you set the maximum permissions that an identity-based policy can grant to an IAM entity (IAM user or role). You can set a permissions boundary for an entity. The resulting permissions are the intersection of an entity's identity-based policies and its permissions boundaries. Resource-based policies that specify the user or role in the Principal field are not limited by the permissions boundary. An explicit deny in any of these policies overrides the allow. For more information about permissions boundaries, see Permissions boundaries for IAM entities in the IAM User Guide. • Service control policies (SCPs) – SCPs are JSON policies that specify the maximum permissions for an organization or organizational unit (OU) in AWS Organizations. AWS Organizations is a service for grouping and centrally managing multiple AWS accounts that your business owns. If you enable all features in an organization, then you can apply service control policies (SCPs) to any or all of your accounts. The SCP limits permissions for entities in member accounts, including each AWS account root user. For more information about Organizations and SCPs, see Service control policies in the AWS Organizations User Guide. • Resource control policies (RCPs) – RCPs are JSON policies that you can use to set the maximum available permissions for resources in your accounts without updating the IAM policies attached to each resource that you own. The RCP limits permissions for resources in member accounts and can impact the effective permissions for identities, including the AWS account root user, regardless of whether they belong to your organization. For more information about Organizations and RCPs, including a list of AWS services that support RCPs, see Resource control policies (RCPs) in the AWS Organizations User Guide. • Session policies – Session policies are advanced policies that you pass as a parameter |
analytics-java-api-179 | analytics-java-api.pdf | 179 | that you can use to set the maximum available permissions for resources in your accounts without updating the IAM policies attached to each resource that you own. The RCP limits permissions for resources in member accounts and can impact the effective permissions for identities, including the AWS account root user, regardless of whether they belong to your organization. For more information about Organizations and RCPs, including a list of AWS services that support RCPs, see Resource control policies (RCPs) in the AWS Organizations User Guide. • Session policies – Session policies are advanced policies that you pass as a parameter when you programmatically create a temporary session for a role or federated user. The resulting session's Managing access using policies 588 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide permissions are the intersection of the user or role's identity-based policies and the session policies. Permissions can also come from a resource-based policy. An explicit deny in any of these policies overrides the allow. For more information, see Session policies in the IAM User Guide. Multiple policy types When multiple types of policies apply to a request, the resulting permissions are more complicated to understand. To learn how AWS determines whether to allow a request when multiple policy types are involved, see Policy evaluation logic in the IAM User Guide. How Amazon Managed Service for Apache Flink works with IAM Before you use IAM to manage access to Managed Service for Apache Flink, learn what IAM features are available to use with Managed Service for Apache Flink. IAM features you can use with Amazon Managed Service for Apache Flink IAM feature Managed Service for Apache Flink support Identity-based policies Resource-based policies Policy actions Policy resources Policy condition keys ACLs ABAC (tags in policies) Temporary credentials Principal permissions Service roles Service-linked roles Yes No Yes Yes No No Yes Yes Yes No No How Amazon Managed Service for Apache Flink works with IAM 589 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide To get a high-level view of how Managed Service for Apache Flink and other AWS services work with most IAM features, see AWS services that work with IAM in the IAM User Guide. Identity-based policies for Managed Service for Apache Flink Supports identity-based policies: Yes Identity-based policies are JSON permissions policy documents that you can attach to an identity, such as an IAM user, group of users, or role. These policies control what actions users and roles can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see Define custom IAM permissions with customer managed policies in the IAM User Guide. With IAM identity-based policies, you can specify allowed or denied actions and resources as well as the conditions under which actions are allowed or denied. You can't specify the principal in an identity-based policy because it applies to the user or role to which it is attached. To learn about all of the elements that you can use in a JSON policy, see IAM JSON policy elements reference in the IAM User Guide. Identity-based policy examples for Managed Service for Apache Flink To view examples of Managed Service for Apache Flink identity-based policies, see Identity-based policy examples for Amazon Managed Service for Apache Flink. Resource-based policies within Managed Service for Apache Flink Amazon Managed Service for Apache Flink currently does note support resource-based access control. Cross-account access to resources from the Managed Service for Apache Flink application To allow a Managed Service for Apache Flink application access to a resource such as an Amazon Kinesis stream or Amazon S3 bucket, you must create an IAM role in the account of the resource. The role must have sufficient permissions to access the resource. You must also add a trust policy that authorizes the entire account of the Managed Service for Apache Flink application to assume the role. { "Version": "2012-10-17", "Statement": [ How Amazon Managed Service for Apache Flink works with IAM 590 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::Application-account-ID:root" }, "Action": "sts:AssumeRole", "Condition": {} } ] } Additionally, the IAM role assigned to the Managed Service for Apache Flink application must allow assuming the role in the resource account. { "Version": "2012-10-17", "Statement": [ { "Sid": "AllowAssumingRoleInStreamAccount", "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::Stream-account-ID:role/Role-to-assume" } ] } For more information, see Cross account resource access in IAM in the IAM User Guide. Policy actions for Managed Service for Apache Flink Supports policy actions: Yes Administrators can use AWS JSON policies to specify who has access to what. That is, which principal can perform actions on what resources, and under what conditions. The Action element of a JSON policy describes the actions that you |
analytics-java-api-180 | analytics-java-api.pdf | 180 | to the Managed Service for Apache Flink application must allow assuming the role in the resource account. { "Version": "2012-10-17", "Statement": [ { "Sid": "AllowAssumingRoleInStreamAccount", "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::Stream-account-ID:role/Role-to-assume" } ] } For more information, see Cross account resource access in IAM in the IAM User Guide. Policy actions for Managed Service for Apache Flink Supports policy actions: Yes Administrators can use AWS JSON policies to specify who has access to what. That is, which principal can perform actions on what resources, and under what conditions. The Action element of a JSON policy describes the actions that you can use to allow or deny access in a policy. Policy actions usually have the same name as the associated AWS API operation. There are some exceptions, such as permission-only actions that don't have a matching API operation. There are also some operations that require multiple actions in a policy. These additional actions are called dependent actions. Include actions in a policy to grant permissions to perform the associated operation. How Amazon Managed Service for Apache Flink works with IAM 591 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide To see a list of Managed Service for Apache Flink actions, see Actions Defined by Amazon Managed Service for Apache Flink in the Service Authorization Reference. Policy actions in Managed Service for Apache Flink use the following prefix before the action: Kinesis Analytics To specify multiple actions in a single statement, separate them with commas. "Action": [ "Kinesis Analytics:action1", "Kinesis Analytics:action2" ] You can specify multiple actions using wildcards (*). For example, to specify all actions that begin with the word Describe, include the following action: "Action": "Kinesis Analytics:Describe*" To view examples of Managed Service for Apache Flink identity-based policies, see Identity-based policy examples for Amazon Managed Service for Apache Flink. Policy resources for Managed Service for Apache Flink Supports policy resources: Yes Administrators can use AWS JSON policies to specify who has access to what. That is, which principal can perform actions on what resources, and under what conditions. The Resource JSON policy element specifies the object or objects to which the action applies. Statements must include either a Resource or a NotResource element. As a best practice, specify a resource using its Amazon Resource Name (ARN). You can do this for actions that support a specific resource type, known as resource-level permissions. For actions that don't support resource-level permissions, such as listing operations, use a wildcard (*) to indicate that the statement applies to all resources. "Resource": "*" How Amazon Managed Service for Apache Flink works with IAM 592 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide To see a list of Managed Service for Apache Flink resource types and their ARNs, see Resources Defined by Amazon Managed Service for Apache Flink in the Service Authorization Reference. To learn with which actions you can specify the ARN of each resource, see Actions Defined by Amazon Managed Service for Apache Flink. To view examples of Managed Service for Apache Flink identity-based policies, see Identity-based policy examples for Amazon Managed Service for Apache Flink. Policy condition keys for Managed Service for Apache Flink Supports service-specific policy condition keys: Yes Administrators can use AWS JSON policies to specify who has access to what. That is, which principal can perform actions on what resources, and under what conditions. The Condition element (or Condition block) lets you specify conditions in which a statement is in effect. The Condition element is optional. You can create conditional expressions that use condition operators, such as equals or less than, to match the condition in the policy with values in the request. If you specify multiple Condition elements in a statement, or multiple keys in a single Condition element, AWS evaluates them using a logical AND operation. If you specify multiple values for a single condition key, AWS evaluates the condition using a logical OR operation. All of the conditions must be met before the statement's permissions are granted. You can also use placeholder variables when you specify conditions. For example, you can grant an IAM user permission to access a resource only if it is tagged with their IAM user name. For more information, see IAM policy elements: variables and tags in the IAM User Guide. AWS supports global condition keys and service-specific condition keys. To see all AWS global condition keys, see AWS global condition context keys in the IAM User Guide. To see a list of Managed Service for Apache Flink condition keys, see Condition Keys for Amazon Managed Service for Apache Flink in the Service Authorization Reference. To learn with which actions and resources you can use a condition key, see Actions Defined by Amazon Managed Service for Apache Flink. To view examples of Managed Service |
analytics-java-api-181 | analytics-java-api.pdf | 181 | user name. For more information, see IAM policy elements: variables and tags in the IAM User Guide. AWS supports global condition keys and service-specific condition keys. To see all AWS global condition keys, see AWS global condition context keys in the IAM User Guide. To see a list of Managed Service for Apache Flink condition keys, see Condition Keys for Amazon Managed Service for Apache Flink in the Service Authorization Reference. To learn with which actions and resources you can use a condition key, see Actions Defined by Amazon Managed Service for Apache Flink. To view examples of Managed Service for Apache Flink identity-based policies, see Identity-based policy examples for Amazon Managed Service for Apache Flink. How Amazon Managed Service for Apache Flink works with IAM 593 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Access control lists (ACLs) in Managed Service for Apache Flink Supports ACLs: No Access control lists (ACLs) control which principals (account members, users, or roles) have permissions to access a resource. ACLs are similar to resource-based policies, although they do not use the JSON policy document format. Attribute-based access control (ABAC) with Managed Service for Apache Flink Supports ABAC (tags in policies): Yes Attribute-based access control (ABAC) is an authorization strategy that defines permissions based on attributes. In AWS, these attributes are called tags. You can attach tags to IAM entities (users or roles) and to many AWS resources. Tagging entities and resources is the first step of ABAC. Then you design ABAC policies to allow operations when the principal's tag matches the tag on the resource that they are trying to access. ABAC is helpful in environments that are growing rapidly and helps with situations where policy management becomes cumbersome. To control access based on tags, you provide tag information in the condition element of a policy using the aws:ResourceTag/key-name, aws:RequestTag/key-name, or aws:TagKeys condition keys. If a service supports all three condition keys for every resource type, then the value is Yes for the service. If a service supports all three condition keys for only some resource types, then the value is Partial. For more information about ABAC, see Define permissions with ABAC authorization in the IAM User Guide. To view a tutorial with steps for setting up ABAC, see Use attribute-based access control (ABAC) in the IAM User Guide. Using Temporary credentials with Managed Service for Apache Flink Supports temporary credentials: Yes Some AWS services don't work when you sign in using temporary credentials. For additional information, including which AWS services work with temporary credentials, see AWS services that work with IAM in the IAM User Guide. How Amazon Managed Service for Apache Flink works with IAM 594 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide You are using temporary credentials if you sign in to the AWS Management Console using any method except a user name and password. For example, when you access AWS using your company's single sign-on (SSO) link, that process automatically creates temporary credentials. You also automatically create temporary credentials when you sign in to the console as a user and then switch roles. For more information about switching roles, see Switch from a user to an IAM role (console) in the IAM User Guide. You can manually create temporary credentials using the AWS CLI or AWS API. You can then use those temporary credentials to access AWS. AWS recommends that you dynamically generate temporary credentials instead of using long-term access keys. For more information, see Temporary security credentials in IAM. Cross-service principal permissions for Managed Service for Apache Flink Supports forward access sessions (FAS): Yes When you use an IAM user or role to perform actions in AWS, you are considered a principal. When you use some services, you might perform an action that then initiates another action in a different service. FAS uses the permissions of the principal calling an AWS service, combined with the requesting AWS service to make requests to downstream services. FAS requests are only made when a service receives a request that requires interactions with other AWS services or resources to complete. In this case, you must have permissions to perform both actions. For policy details when making FAS requests, see Forward access sessions. Service roles for Managed Service for Apache Flink Supports service roles: Yes A service role is an IAM role that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see Create a role to delegate permissions to an AWS service in the IAM User Guide. Warning Changing the permissions for a service role might break Managed Service for Apache Flink functionality. Edit service roles only when Managed Service for Apache Flink provides guidance to |
analytics-java-api-182 | analytics-java-api.pdf | 182 | details when making FAS requests, see Forward access sessions. Service roles for Managed Service for Apache Flink Supports service roles: Yes A service role is an IAM role that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see Create a role to delegate permissions to an AWS service in the IAM User Guide. Warning Changing the permissions for a service role might break Managed Service for Apache Flink functionality. Edit service roles only when Managed Service for Apache Flink provides guidance to do so. How Amazon Managed Service for Apache Flink works with IAM 595 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Service-linked roles for Managed Service for Apache Flink Supports service-linked roles: Yes A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit the permissions for service-linked roles. For details about creating or managing service-linked roles, see AWS services that work with IAM. Find a service in the table that includes a Yes in the Service-linked role column. Choose the Yes link to view the service-linked role documentation for that service. Identity-based policy examples for Amazon Managed Service for Apache Flink By default, users and roles don't have permission to create or modify Managed Service for Apache Flink resources. They also can't perform tasks by using the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS API. To grant users permission to perform actions on the resources that they need, an IAM administrator can create IAM policies. The administrator can then add the IAM policies to roles, and users can assume the roles. To learn how to create an IAM identity-based policy by using these example JSON policy documents, see Create IAM policies (console) in the IAM User Guide. For details about actions and resource types defined by Managed Service for Apache Flink, including the format of the ARNs for each of the resource types, see Actions, Resources, and Condition Keys for Amazon Managed Service for Apache Flink in the Service Authorization Reference. Topics • Policy best practices • Using the Managed Service for Apache Flink console • Allow users to view their own permissions Identity-based policy examples 596 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Policy best practices Identity-based policies determine whether someone can create, access, or delete Managed Service for Apache Flink resources in your account. These actions can incur costs for your AWS account. When you create or edit identity-based policies, follow these guidelines and recommendations: • Get started with AWS managed policies and move toward least-privilege permissions – To get started granting permissions to your users and workloads, use the AWS managed policies that grant permissions for many common use cases. They are available in your AWS account. We recommend that you reduce permissions further by defining AWS customer managed policies that are specific to your use cases. For more information, see AWS managed policies or AWS managed policies for job functions in the IAM User Guide. • Apply least-privilege permissions – When you set permissions with IAM policies, grant only the permissions required to perform a task. You do this by defining the actions that can be taken on specific resources under specific conditions, also known as least-privilege permissions. For more information about using IAM to apply permissions, see Policies and permissions in IAM in the IAM User Guide. • Use conditions in IAM policies to further restrict access – You can add a condition to your policies to limit access to actions and resources. For example, you can write a policy condition to specify that all requests must be sent using SSL. You can also use conditions to grant access to service actions if they are used through a specific AWS service, such as AWS CloudFormation. For more information, see IAM JSON policy elements: Condition in the IAM User Guide. • Use IAM Access Analyzer to validate your IAM policies to ensure secure and functional permissions – IAM Access Analyzer validates new and existing policies so that the policies adhere to the IAM policy language (JSON) and IAM best practices. IAM Access Analyzer provides more than 100 policy checks and actionable recommendations to help you author secure and functional policies. For more information, see Validate policies with IAM Access Analyzer in the IAM User Guide. • Require multi-factor authentication (MFA) – If you have a scenario that requires IAM users or a root user in your AWS account, turn on MFA for additional |
analytics-java-api-183 | analytics-java-api.pdf | 183 | Access Analyzer to validate your IAM policies to ensure secure and functional permissions – IAM Access Analyzer validates new and existing policies so that the policies adhere to the IAM policy language (JSON) and IAM best practices. IAM Access Analyzer provides more than 100 policy checks and actionable recommendations to help you author secure and functional policies. For more information, see Validate policies with IAM Access Analyzer in the IAM User Guide. • Require multi-factor authentication (MFA) – If you have a scenario that requires IAM users or a root user in your AWS account, turn on MFA for additional security. To require MFA when API operations are called, add MFA conditions to your policies. For more information, see Secure API access with MFA in the IAM User Guide. For more information about best practices in IAM, see Security best practices in IAM in the IAM User Guide. Identity-based policy examples 597 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Using the Managed Service for Apache Flink console To access the Amazon Managed Service for Apache Flink console, you must have a minimum set of permissions. These permissions must allow you to list and view details about the Managed Service for Apache Flink resources in your AWS account. If you create an identity-based policy that is more restrictive than the minimum required permissions, the console won't function as intended for entities (users or roles) with that policy. You don't need to allow minimum console permissions for users that are making calls only to the AWS CLI or the AWS API. Instead, allow access to only the actions that match the API operation that they're trying to perform. To ensure that users and roles can still use the Managed Service for Apache Flink console, also attach the Managed Service for Apache Flink ConsoleAccess or ReadOnly AWS managed policy to the entities. For more information, see Adding permissions to a user in the IAM User Guide. Allow users to view their own permissions This example shows how you might create a policy that allows IAM users to view the inline and managed policies that are attached to their user identity. This policy includes permissions to complete this action on the console or programmatically using the AWS CLI or AWS API. { "Version": "2012-10-17", "Statement": [ { "Sid": "ViewOwnUserInfo", "Effect": "Allow", "Action": [ "iam:GetUserPolicy", "iam:ListGroupsForUser", "iam:ListAttachedUserPolicies", "iam:ListUserPolicies", "iam:GetUser" ], "Resource": ["arn:aws:iam::*:user/${aws:username}"] }, { "Sid": "NavigateInConsole", "Effect": "Allow", "Action": [ "iam:GetGroupPolicy", Identity-based policy examples 598 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "iam:GetPolicyVersion", "iam:GetPolicy", "iam:ListAttachedGroupPolicies", "iam:ListGroupPolicies", "iam:ListPolicyVersions", "iam:ListPolicies", "iam:ListUsers" ], "Resource": "*" } ] } Troubleshooting Amazon Managed Service for Apache Flink identity and access Use the following information to help you diagnose and fix common issues that you might encounter when working with Managed Service for Apache Flink and IAM. Topics • I am not authorized to perform an action in Managed Service for Apache Flink • I am not authorized to perform iam:PassRole • I want to allow people outside of my AWS account to access my Managed Service for Apache Flink resources I am not authorized to perform an action in Managed Service for Apache Flink If the AWS Management Console tells you that you're not authorized to perform an action, then you must contact your administrator for assistance. Your administrator is the person that provided you with your user name and password. The following example error occurs when the mateojackson user tries to use the console to view details about a fictional my-example-widget resource but does not have the fictional Kinesis Analytics:GetWidget permissions. User: arn:aws:iam::123456789012:user/mateojackson is not authorized to perform: Kinesis Analytics:GetWidget on resource: my-example-widget Troubleshooting 599 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide In this case, Mateo asks his administrator to update his policies to allow him to access the my- example-widget resource using the Kinesis Analytics:GetWidget action. I am not authorized to perform iam:PassRole If you receive an error that you're not authorized to perform the iam:PassRole action, your policies must be updated to allow you to pass a role to Managed Service for Apache Flink. Some AWS services allow you to pass an existing role to that service instead of creating a new service role or service-linked role. To do this, you must have permissions to pass the role to the service. The following example error occurs when an IAM user named marymajor tries to use the console to perform an action in Managed Service for Apache Flink. However, the action requires the service to have permissions that are granted by a service role. Mary does not have permissions to pass the role to the service. User: arn:aws:iam::123456789012:user/marymajor is not authorized to perform: iam:PassRole In this case, Mary's policies must be updated to |
analytics-java-api-184 | analytics-java-api.pdf | 184 | existing role to that service instead of creating a new service role or service-linked role. To do this, you must have permissions to pass the role to the service. The following example error occurs when an IAM user named marymajor tries to use the console to perform an action in Managed Service for Apache Flink. However, the action requires the service to have permissions that are granted by a service role. Mary does not have permissions to pass the role to the service. User: arn:aws:iam::123456789012:user/marymajor is not authorized to perform: iam:PassRole In this case, Mary's policies must be updated to allow her to perform the iam:PassRole action. If you need help, contact your AWS administrator. Your administrator is the person who provided you with your sign-in credentials. I want to allow people outside of my AWS account to access my Managed Service for Apache Flink resources You can create a role that users in other accounts or people outside of your organization can use to access your resources. You can specify who is trusted to assume the role. For services that support resource-based policies or access control lists (ACLs), you can use those policies to grant people access to your resources. To learn more, consult the following: • To learn whether Managed Service for Apache Flink supports these features, see How Amazon Managed Service for Apache Flink works with IAM. • To learn how to provide access to your resources across AWS accounts that you own, see Providing access to an IAM user in another AWS account that you own in the IAM User Guide. Troubleshooting 600 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • To learn how to provide access to your resources to third-party AWS accounts, see Providing access to AWS accounts owned by third parties in the IAM User Guide. • To learn how to provide access through identity federation, see Providing access to externally authenticated users (identity federation) in the IAM User Guide. • To learn the difference between using roles and resource-based policies for cross-account access, see Cross account resource access in IAM in the IAM User Guide. Cross-service confused deputy prevention In AWS, cross-service impersonation can occur when one service (the calling service) calls another service (the called service). The calling service can be manipulated to act on another customer's resources even though it shouldn't have the proper permissions, resulting in the confused deputy problem. To prevent confused deputies, AWS provides tools that help you protect your data for all services using service principals that have been given access to resources in your account. This section focuses on cross-service confused deputy prevention specific to Managed Service for Apache Flink however, you can learn more about this topic at The confused deputy problem section of the IAM User Guide. In the context of Managed Service for Apache Flink, we recommend using the aws:SourceArn and aws:SourceAccount global condition context keys in your role trust policy to limit access to the role to only those requests that are generated by expected resources. Use aws:SourceArn if you want only one resource to be associated with the cross-service access. Use aws:SourceAccount if you want to allow any resource in that account to be associated with the cross-service use. The value of aws:SourceArn must be the ARN of the resource used by Managed Service for Apache Flink, which is specified with the following format: arn:aws:kinesisanalytics:region:account:resource. The recommended approach to the confused deputy problem is to use the aws:SourceArn global condition context key with the full resource ARN. If you don't know the full ARN of the resource or if you are specifying multiple resources, use the aws:SourceArn key with wildcard characters (*) for the unknown portions of the ARN. For example: arn:aws:kinesisanalytics::111122223333:*. Cross-service confused deputy prevention 601 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Policies of roles that you provide to Managed Service for Apache Flink as well as trust policies of roles generated for you can make use of these keys. In order to protect against the confused deputy problem, carry out the following steps: To protect against the confused deputy problem 1. Sign in to the AWS Management Console and open the IAM console at https:// console.aws.amazon.com/iam/. 2. Choose Roles and then choose the role you want to modify. 3. Choose Edit trust policy. 4. On the Edit trust policy page, replace the default JSON policy with a policy that uses one or both of the aws:SourceArn and aws:SourceAccount global condition context keys. See the following example policy: 5. Choose Update policy. { "Version":"2012-10-17", "Statement":[ { "Effect":"Allow", "Principal":{ "Service":"kinesisanalytics.amazonaws.com" }, "Action":"sts:AssumeRole", "Condition":{ "StringEquals":{ "aws:SourceAccount":"Account ID" }, "ArnEquals":{ "aws:SourceArn":"arn:aws:kinesisanalytics:us- east-1:123456789012:application/my-app" } } } ] } Cross-service confused deputy prevention 602 Managed Service for Apache Flink Managed Service for Apache |
analytics-java-api-185 | analytics-java-api.pdf | 185 | Console and open the IAM console at https:// console.aws.amazon.com/iam/. 2. Choose Roles and then choose the role you want to modify. 3. Choose Edit trust policy. 4. On the Edit trust policy page, replace the default JSON policy with a policy that uses one or both of the aws:SourceArn and aws:SourceAccount global condition context keys. See the following example policy: 5. Choose Update policy. { "Version":"2012-10-17", "Statement":[ { "Effect":"Allow", "Principal":{ "Service":"kinesisanalytics.amazonaws.com" }, "Action":"sts:AssumeRole", "Condition":{ "StringEquals":{ "aws:SourceAccount":"Account ID" }, "ArnEquals":{ "aws:SourceArn":"arn:aws:kinesisanalytics:us- east-1:123456789012:application/my-app" } } } ] } Cross-service confused deputy prevention 602 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Compliance validation for Amazon Managed Service for Apache Flink Third-party auditors assess the security and compliance of Amazon Managed Service for Apache Flink as part of multiple AWS compliance programs. These include SOC, PCI, HIPAA, and others. For a list of AWS services in scope of specific compliance programs, see . For general information, see AWS Compliance Programs. You can download third-party audit reports using AWS Artifact. For more information, see Downloading Reports in AWS Artifact. Your compliance responsibility when using Managed Service for Apache Flink is determined by the sensitivity of your data, your company's compliance objectives, and applicable laws and regulations. If your use of Managed Service for Apache Flink is subject to compliance with standards such as HIPAA or PCI, AWS provides resources to help: • Security and Compliance Quick Start Guides – These deployment guides discuss architectural considerations and provide steps for deploying security- and compliance-focused baseline environments on AWS. • Architecting for HIPAA Security and Compliance on Amazon Web Services. This whitepaper describes how companies can use AWS to create HIPAA-compliant applications. • AWS Compliance Resources – This collection of workbooks and guides might apply to your industry and location. • AWS Config – This AWS service assesses how well your resource configurations comply with internal practices, industry guidelines, and regulations. • AWS Security Hub – This AWS service provides a comprehensive view of your security state within AWS that helps you check your compliance with security industry standards and best practices. FedRAMP The AWS FedRAMP Compliance program includes Managed Service for Apache Flink as a FedRAMP- authorized service. If you are a federal or commercial customer, you can use the service to process and store sensitive workloads in the AWS GovCloud (US) Region’s authorization boundary with data up to the high impact level, as well as US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon) Regions with data up to a moderate level. Compliance validation for Managed Service for Apache Flink 603 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide You can request access to the AWS FedRAMP Security Packages through the FedRAMP PMO, your AWS Sales Account Manager, or you can download them through AWS Artifact at AWS Artifact. For more information, see FedRAMP. Resilience in Amazon Managed Service for Apache Flink The AWS global infrastructure is built around AWS Regions and Availability Zones. AWS Regions provide multiple physically separated and isolated Availability Zones, which are connected with low-latency, high-throughput, and highly redundant networking. With Availability Zones, you can design and operate applications and databases that automatically fail over between Availability Zones without interruption. Availability Zones are more highly available, fault tolerant, and scalable than traditional single or multiple data center infrastructures. For more information about AWS Regions and Availability Zones, see AWS Global Infrastructure. In addition to the AWS global infrastructure, a Managed Service for Apache Flink offers several features to help support your data resiliency and backup needs. Disaster recovery Managed Service for Apache Flink runs in a serverless mode, and takes care of host degradations, Availability Zone availability, and other infrastructure related issues by performing automatic migration. Managed Service for Apache Flink achieves this through multiple, redundant mechanisms. Each Managed Service for Apache Flink application runs in a single-tenant Apache Flink cluster. The Apache Flink cluster is run with the JobMananger in high availability mode using Zookeeper across multiple availability zones. Managed Service for Apache Flink deploys Apache Flink using Amazon EKS. Multiple Kubernetes pods are used in Amazon EKS for each AWS region across availability zones. In the event of a failure, Managed Service for Apache Flink first tries to recover the application within the running Apache Flink cluster using your application’s checkpoints, if available. Managed Service for Apache Flink backs up application state using Checkpoints and Snapshots: • Checkpoints are backups of application state that Managed Service for Apache Flink automatically creates periodically and uses to restore from faults. • Snapshots are backups of application state that you create and restore from manually. For more information about checkpoints and snapshots, see Implement fault tolerance. Resilience and disaster recovery in Managed Service for Apache Flink 604 Managed |
analytics-java-api-186 | analytics-java-api.pdf | 186 | the event of a failure, Managed Service for Apache Flink first tries to recover the application within the running Apache Flink cluster using your application’s checkpoints, if available. Managed Service for Apache Flink backs up application state using Checkpoints and Snapshots: • Checkpoints are backups of application state that Managed Service for Apache Flink automatically creates periodically and uses to restore from faults. • Snapshots are backups of application state that you create and restore from manually. For more information about checkpoints and snapshots, see Implement fault tolerance. Resilience and disaster recovery in Managed Service for Apache Flink 604 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Versioning Stored versions of application state are versioned as follows: • Checkpoints are versioned automatically by the service. If the service uses a checkpoint to restart the application, the latest checkpoint will be used. • Savepoints are versioned using the SnapshotName parameter of the CreateApplicationSnapshot action. Managed Service for Apache Flink encrypts data stored in checkpoints and savepoints. Infrastructure security in Managed Service for Apache Flink As a managed service, Managed Service for Apache Flink is protected by the AWS global network security procedures that are described in the Amazon Web Services: Overview of Security Processes whitepaper. You use AWS published API calls to access Managed Service for Apache Flink through the network. All API calls to Managed Service for Apache Flink are secured via Transport Layer Security (TLS) and authenticated via IAM. Clients must support TLS 1.2 or later. Clients must also support cipher suites with perfect forward secrecy (PFS) such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve Ephemeral Diffie-Hellman (ECDHE). Most modern systems such as Java 7 and later support these modes. Additionally, requests must be signed by using an access key ID and a secret access key that is associated with an IAM principal. Or you can use the AWS Security Token Service (AWS STS) to generate temporary security credentials to sign requests. Security best practices for Managed Service for Apache Flink Amazon Managed Service for Apache Flink provides a number of security features to consider as you develop and implement your own security policies. The following best practices are general guidelines and don’t represent a complete security solution. Because these best practices might not be appropriate or sufficient for your environment, treat them as helpful considerations rather than prescriptions. Versioning 605 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Implement least privilege access When granting permissions, you decide who is getting what permissions to which Managed Service for Apache Flink resources. You enable specific actions that you want to allow on those resources. Therefore you should grant only the permissions that are required to perform a task. Implementing least privilege access is fundamental in reducing security risk and the impact that could result from errors or malicious intent. Use IAM roles to access other Amazon services Your Managed Service for Apache Flink application must have valid credentials to access resources in other services, such as Kinesis data streams, Firehose streams, or Amazon S3 buckets. You should not store AWS credentials directly in the application or in an Amazon S3 bucket. These are long- term credentials that are not automatically rotated and could have a significant business impact if they are compromised. Instead, you should use an IAM role to manage temporary credentials for your application to access other resources. When you use a role, you don't have to use long-term credentials to access other resources. For more information, see the following topics in the IAM User Guide: • IAM Roles • Common Scenarios for Roles: Users, Applications, and Services Implement server-side encryption in dependent resources Data at rest and data in transit is encrypted in Managed Service for Apache Flink, and this encryption cannot be disabled. You should implement server-side encryption in your dependent resources, such as Kinesis data streams, Firehose streams, and Amazon S3 buckets. For more information on implementing server-side encryption in dependent resources, see Data protection . Use CloudTrail to monitor API calls Managed Service for Apache Flink is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an Amazon service in Managed Service for Apache Flink. Implement least privilege access 606 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Using the information collected by CloudTrail, you can determine the request that was made to Managed Service for Apache Flink, the IP address from which the request was made, who made the request, when it was made, and additional details. For more information, see the section called “Log Managed Service for Apache Flink API calls with AWS CloudTrail”. Use CloudTrail to monitor API calls 607 Managed Service for Apache Flink Managed Service for Apache Flink |
analytics-java-api-187 | analytics-java-api.pdf | 187 | Amazon service in Managed Service for Apache Flink. Implement least privilege access 606 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Using the information collected by CloudTrail, you can determine the request that was made to Managed Service for Apache Flink, the IP address from which the request was made, who made the request, when it was made, and additional details. For more information, see the section called “Log Managed Service for Apache Flink API calls with AWS CloudTrail”. Use CloudTrail to monitor API calls 607 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Logging and monitoring in Amazon Managed Service for Apache Flink Monitoring is an important part of maintaining the reliability, availability, and performance of Managed Service for Apache Flink applications. You should collect monitoring data from all of the parts of your AWS solution so that you can more easily debug a multipoint failure if one occurs. Before you start monitoring Managed Service for Apache Flink, you should create a monitoring plan that includes answers to the following questions: • What are your monitoring goals? • What resources will you monitor? • How often will you monitor these resources? • What monitoring tools will you use? • Who will perform the monitoring tasks? • Who should be notified when something goes wrong? The next step is to establish a baseline for normal Managed Service for Apache Flink performance in your environment. You do this by measuring performance at various times and under different load conditions. As you monitor Managed Service for Apache Flink, you can store historical monitoring data. You can then compare it with current performance data, identify normal performance patterns and performance anomalies, and devise methods to address issues. Topics • Logging in Managed Service for Apache Flink • Monitoring in Managed Service for Apache Flink • Set up application logging in Managed Service for Apache Flink • Analyze logs with CloudWatch Logs Insights • Metrics and dimensions in Managed Service for Apache Flink • Write custom messages to CloudWatch Logs • Log Managed Service for Apache Flink API calls with AWS CloudTrail 608 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Logging in Managed Service for Apache Flink Logging is important for production applications to understand errors and failures. However, the logging subsystem needs to collect and forward log entries to CloudWatch Logs While some logging is fine and desirable, extensive logging can overload the service and cause the Flink application to fall behind. Logging exceptions and warnings is certainly a good idea. But you cannot generate a log message for each and every message that is processed by the Flink application. Flink is optimized for high throughout and low latency, the logging subsystem is not. In case it is really required to generate log output for every processed message, use an additional DataStream inside the Flink application and a proper sink to send the data to Amazon S3 or CloudWatch. Do not use the Java logging system for this purpose. Moreover, Managed Service for Apache Flink’ Debug Monitoring Log Level setting generates a large amount of traffic, which can create backpressure. You should only use it while actively investigating issues with the application. Query logs with CloudWatch Logs Insights CloudWatch Logs Insights is a powerful service to query log at scale. Customers should leverage its capabilities to quickly search through logs to identify and mitigate errors during operational events. The following query looks for exceptions in all task manager logs and orders them according to the time they occurred. fields @timestamp, @message | filter isPresent(throwableInformation.0) or isPresent(throwableInformation) or @message like /(Error|Exception)/ | sort @timestamp desc For other useful queries, see Example Queries. Monitoring in Managed Service for Apache Flink When running streaming applications in production, you set out to execute the application continuously and indefinitely. It is crucial to implement monitoring and proper alarming of all components not only the Flink application. Otherwise you risk to miss emerging problems early on and only realize an operational event once it is fully unravelling and much harder to mitigate. General things to monitor include: Logging in Managed Service for Apache Flink 609 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Is the source ingesting data? • Is data read from the source (from the perspective of the source)? • Is the Flink application receiving data? • Is the Flink application able to keep up or is it falling behind? • Is the Flink application persisting data into the sink (from the application perspective)? • Is the sink receiving data? More specific metrics should then be considered for the Flink application. This CloudWatch dashboard provides a good starting point. For more information on what metrics to monitor for production |
analytics-java-api-188 | analytics-java-api.pdf | 188 | Service for Apache Flink Managed Service for Apache Flink Developer Guide • Is the source ingesting data? • Is data read from the source (from the perspective of the source)? • Is the Flink application receiving data? • Is the Flink application able to keep up or is it falling behind? • Is the Flink application persisting data into the sink (from the application perspective)? • Is the sink receiving data? More specific metrics should then be considered for the Flink application. This CloudWatch dashboard provides a good starting point. For more information on what metrics to monitor for production applications, see Use CloudWatch Alarms with Amazon Managed Service for Apache Flink. These metrics include: • records_lag_max and millisbehindLatest – If the application is consuming from Kinesis or Kafka, these metrics indicate if the application is falling behind and needs to be scaled in order to keep up with the current load. This is a good generic metric that is easy to track for all kinds of applications. But it can only be used for reactive scaling, i.e., when the application has already fallen behind. • cpuUtilization and heapMemoryUtilization – These metrics give a good indication of the overall resource utilization of the application and can be used for proactive scaling unless the application is I/O bound. • downtime – A downtime greater than zero indicates that the application has failed. If the value is larger than 0, the application is not processing any data. • lastCheckpointSize and lastCheckpointDuration – These metrics monitor how much data is stored in state and how long it takes to take a checkpoint. If checkpoints grow or take long, the application is continuously spending time on checkpointing and has less cycles for actual processing. At some points, checkpoints may grow too large or take so long that they fail. In addition to monitoring absolute values, customers should also considering monitoring the change rate with RATE(lastCheckpointSize) and RATE(lastCheckpointDuration). • numberOfFailedCheckpoints – This metric counts the number of failed checkpoints since the application started. Depending on the application, it can be tolerable if checkpoints fail occasionally. But if checkpoints are regularly failing, the application is likely unhealthy and needs further attention. We recommend monitoring RATE(numberOfFailedCheckpoints) to alarm on the gradient and not on absolute values. Monitoring in Managed Service for Apache Flink 610 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Set up application logging in Managed Service for Apache Flink By adding an Amazon CloudWatch logging option to your Managed Service for Apache Flink application, you can monitor for application events or configuration problems. This topic describes how to configure your application to write application events to a CloudWatch Logs stream. A CloudWatch logging option is a collection of application settings and permissions that your application uses to configure the way it writes application events to CloudWatch Logs. You can add and configure a CloudWatch logging option using either the AWS Management Console or the AWS Command Line Interface (AWS CLI). Note the following about adding a CloudWatch logging option to your application: • When you add a CloudWatch logging option using the console, Managed Service for Apache Flink creates the CloudWatch log group and log stream for you and adds the permissions your application needs to write to the log stream. • When you add a CloudWatch logging option using the API, you must also create the application's log group and log stream, and add the permissions your application needs to write to the log stream. Set up CloudWatch logging using the console When you enable CloudWatch logging for your application in the console, a CloudWatch log group and log stream is created for you. Also, your application's permissions policy is updated with permissions to write to the stream. Managed Service for Apache Flink creates a log group named using the following convention, where ApplicationName is your application's name. /aws/kinesis-analytics/ApplicationName Managed Service for Apache Flink creates a log stream in the new log group with the following name. kinesis-analytics-log-stream Set up application logging in Managed Service for Apache Flink 611 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide You set the application monitoring metrics level and monitoring log level using the Monitoring log level section of the Configure application page. For information about application log levels, see the section called “Control application monitoring levels”. Set up CloudWatch logging using the CLI To add a CloudWatch logging option using the AWS CLI, you complete the following: • Create a CloudWatch log group and log stream. • Add a logging option when you create an application by using the CreateApplication action, or add a logging option to an existing application using the AddApplicationCloudWatchLoggingOption action. • Add permissions to your application's policy to write to the logs. Create a |
analytics-java-api-189 | analytics-java-api.pdf | 189 | monitoring log level using the Monitoring log level section of the Configure application page. For information about application log levels, see the section called “Control application monitoring levels”. Set up CloudWatch logging using the CLI To add a CloudWatch logging option using the AWS CLI, you complete the following: • Create a CloudWatch log group and log stream. • Add a logging option when you create an application by using the CreateApplication action, or add a logging option to an existing application using the AddApplicationCloudWatchLoggingOption action. • Add permissions to your application's policy to write to the logs. Create a CloudWatch log group and log stream You create a CloudWatch log group and stream using either the CloudWatch Logs console or the API. For information about creating a CloudWatch log group and log stream, see Working with Log Groups and Log Streams. Work with application CloudWatch logging options Use the following API actions to add a CloudWatch log option to a new or existing application or change a log option for an existing application. For information about how to use a JSON file for input for an API action, see Managed Service for Apache Flink API example code. Add a CloudWatch log option when creating an application The following example demonstrates how to use the CreateApplication action to add a CloudWatch log option when you create an application. In the example, replace Amazon Resource Name (ARN) of the CloudWatch Log stream to add to the new application with your own information. For more information about the action, see CreateApplication. { "ApplicationName": "test", "ApplicationDescription": "test-application-description", "RuntimeEnvironment": "FLINK-1_15", "ServiceExecutionRole": "arn:aws:iam::123456789123:role/myrole", Set up CloudWatch logging using the CLI 612 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "ApplicationConfiguration": { "ApplicationCodeConfiguration": { "CodeContent": { "S3ContentLocation":{ "BucketARN": "arn:aws:s3:::amzn-s3-demo-bucket", "FileKey": "myflink.jar" } }, "CodeContentType": "ZIPFILE" } }, "CloudWatchLoggingOptions": [{ "LogStreamARN": "<Amazon Resource Name (ARN) of the CloudWatch log stream to add to the new application>" }] } Add a CloudWatch log option to an existing application The following example demonstrates how to use the AddApplicationCloudWatchLoggingOption action to add a CloudWatch log option to an existing application. In the example, replace each user input placeholder with your own information. For more information about the action, see AddApplicationCloudWatchLoggingOption. { "ApplicationName": "<Name of the application to add the log option to>", "CloudWatchLoggingOption": { "LogStreamARN": "<ARN of the log stream to add to the application>" }, "CurrentApplicationVersionId": <Version of the application to add the log to> } Update an existing CloudWatch log option The following example demonstrates how to use the UpdateApplication action to modify an existing CloudWatch log option. In the example, replace each user input placeholder with your own information. For more information about the action, see UpdateApplication. { Set up CloudWatch logging using the CLI 613 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "ApplicationName": "<Name of the application to update the log option for>", "CloudWatchLoggingOptionUpdates": [ { "CloudWatchLoggingOptionId": "<ID of the logging option to modify>", "LogStreamARNUpdate": "<ARN of the new log stream to use>" } ], "CurrentApplicationVersionId": <ID of the application version to modify> } Delete a CloudWatch log option from an application The following example demonstrates how to use the DeleteApplicationCloudWatchLoggingOption action to delete an existing CloudWatch log option. In the example, replace each user input placeholder with your own information. For more information about the action, see DeleteApplicationCloudWatchLoggingOption. { "ApplicationName": "<Name of application to delete log option from>", "CloudWatchLoggingOptionId": "<ID of the application log option to delete>", "CurrentApplicationVersionId": <Version of the application to delete the log option from> } Set the application logging level To set the level of application logging, use the MonitoringConfiguration parameter of the CreateApplication action or the MonitoringConfigurationUpdate parameter of the UpdateApplication action. For information about application log levels, see the section called “Control application monitoring levels”. Set the application logging level when creating an application The following example request for the CreateApplication action sets the application log level to INFO. Set up CloudWatch logging using the CLI 614 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide { "ApplicationName": "MyApplication", "ApplicationDescription": "My Application Description", "ApplicationConfiguration": { "ApplicationCodeConfiguration":{ "CodeContent":{ "S3ContentLocation":{ "BucketARN":"arn:aws:s3:::amzn-s3-demo-bucket", "FileKey":"myflink.jar", "ObjectVersion":"AbCdEfGhIjKlMnOpQrStUvWxYz12345" } }, "CodeContentType":"ZIPFILE" }, "FlinkApplicationConfiguration": "MonitoringConfiguration": { "ConfigurationType": "CUSTOM", "LogLevel": "INFO" } }, "RuntimeEnvironment": "FLINK-1_15", "ServiceExecutionRole": "arn:aws:iam::123456789123:role/myrole" } Update the application logging level The following example request for the UpdateApplication action sets the application log level to INFO. { "ApplicationConfigurationUpdate": { "FlinkApplicationConfigurationUpdate": { "MonitoringConfigurationUpdate": { "ConfigurationTypeUpdate": "CUSTOM", "LogLevelUpdate": "INFO" } } } } Set up CloudWatch logging using the CLI 615 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Add permissions to write to the CloudWatch log stream Managed Service for Apache Flink needs permissions to write misconfiguration errors to CloudWatch. You can add these permissions to the AWS Identity and Access |
analytics-java-api-190 | analytics-java-api.pdf | 190 | { "ConfigurationType": "CUSTOM", "LogLevel": "INFO" } }, "RuntimeEnvironment": "FLINK-1_15", "ServiceExecutionRole": "arn:aws:iam::123456789123:role/myrole" } Update the application logging level The following example request for the UpdateApplication action sets the application log level to INFO. { "ApplicationConfigurationUpdate": { "FlinkApplicationConfigurationUpdate": { "MonitoringConfigurationUpdate": { "ConfigurationTypeUpdate": "CUSTOM", "LogLevelUpdate": "INFO" } } } } Set up CloudWatch logging using the CLI 615 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Add permissions to write to the CloudWatch log stream Managed Service for Apache Flink needs permissions to write misconfiguration errors to CloudWatch. You can add these permissions to the AWS Identity and Access Management (IAM) role that Managed Service for Apache Flink assumes. For more information about using an IAM role for Managed Service for Apache Flink, see Identity and Access Management for Amazon Managed Service for Apache Flink. Trust policy To grant Managed Service for Apache Flink permissions to assume an IAM role, you can attach the following trust policy to the service execution role. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "kinesisanlaytics.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } Permissions policy To grant permissions to an application to write log events to CloudWatch from a Managed Service for Apache Flink resource, you can use the following IAM permissions policy. Provide the correct Amazon Resource Names (ARNs) for your log group and stream. { "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt0123456789000", "Effect": "Allow", "Action": [ "logs:PutLogEvents", "logs:DescribeLogGroups", "logs:DescribeLogStreams" Set up CloudWatch logging using the CLI 616 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide ], "Resource": [ "arn:aws:logs:us-east-1:123456789012:log-group:my-log-group:log- stream:my-log-stream*", "arn:aws:logs:us-east-1:123456789012:log-group:my-log-group:*", "arn:aws:logs:us-east-1:123456789012:log-group:*", ] } ] } Control application monitoring levels You control the generation of application log messages using the application's Monitoring Metrics Level and Monitoring Log Level. The application's monitoring metrics level controls the granularity of log messages. Monitoring metrics levels are defined as follows: • Application: Metrics are scoped to the entire application. • Task: Metrics are scoped to each task. For information about tasks, see the section called “Implement application scaling”. • Operator: Metrics are scoped to each operator. For information about operators, see the section called “Operators”. • Parallelism: Metrics are scoped to application parallelism. You can only set this metrics level using the MonitoringConfigurationUpdate parameter of the UpdateApplication API. You cannot set this metrics level using the console. For information about parallelism, see the section called “Implement application scaling”. The application's monitoring log level controls the verbosity of the application's log. Monitoring log levels are defined as follows: • Error: Potential catastrophic events of the application. • Warn: Potentially harmful situations of the application. • Info: Informational and transient failure events of the application. We recommend that you use this logging level. • Debug: Fine-grained informational events that are most useful to debug an application. Note: Only use this level for temporary debugging purposes. Control application monitoring levels 617 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Apply logging best practices We recommend that your application use the Info logging level. We recommend this level to ensure that you see Apache Flink errors, which are logged at the Info level rather than the Error level. We recommend that you use the Debug level only temporarily while investigating application issues. Switch back to the Info level when the issue is resolved. Using the Debug logging level will significantly affect your application's performance. Excessive logging can also significantly impact application performance. We recommend that you do not write a log entry for every record processed, for example. Excessive logging can cause severe bottlenecks in data processing and can lead to back pressure in reading data from the sources. Perform logging troubleshooting If application logs are not being written to the log stream, verify the following: • Verify that your application's IAM role and policies are correct. Your application's policy needs the following permissions to access your log stream: • logs:PutLogEvents • logs:DescribeLogGroups • logs:DescribeLogStreams For more information, see the section called “Add permissions to write to the CloudWatch log stream”. • Verify that your application is running. To check your application's status, view your application's page in the console, or use the DescribeApplication or ListApplications actions. • Monitor CloudWatch metrics such as downtime to diagnose other application issues. For information about reading CloudWatch metrics, see ???. Use CloudWatch Logs Insights After you have enabled CloudWatch logging in your application, you can use CloudWatch Logs Insights to analyze your application logs. For more information, see the section called “Analyze logs with CloudWatch Logs Insights”. Apply logging best practices 618 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Analyze logs with CloudWatch Logs Insights After you've added a CloudWatch logging option to your application as described in the previous section, you can use CloudWatch Logs Insights to |
analytics-java-api-191 | analytics-java-api.pdf | 191 | metrics such as downtime to diagnose other application issues. For information about reading CloudWatch metrics, see ???. Use CloudWatch Logs Insights After you have enabled CloudWatch logging in your application, you can use CloudWatch Logs Insights to analyze your application logs. For more information, see the section called “Analyze logs with CloudWatch Logs Insights”. Apply logging best practices 618 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Analyze logs with CloudWatch Logs Insights After you've added a CloudWatch logging option to your application as described in the previous section, you can use CloudWatch Logs Insights to query your log streams for specific events or errors. CloudWatch Logs Insights enables you to interactively search and analyze your log data in CloudWatch Logs. For information on getting started with CloudWatch Logs Insights, see Analyze Log Data with CloudWatch Logs Insights. Run a sample query This section describes how to run a sample CloudWatch Logs Insights query. Prerequisites • Existing log groups and log streams set up in CloudWatch Logs. • Existing logs stored in CloudWatch Logs. If you use services such as AWS CloudTrail, Amazon Route 53, or Amazon VPC, you've probably already set up logs from those services to go to CloudWatch Logs. For more information about sending logs to CloudWatch Logs, see Getting Started with CloudWatch Logs. Queries in CloudWatch Logs Insights return either a set of fields from log events, or the result of a mathematical aggregation or other operation performed on log events. This section demonstrates a query that returns a list of log events. To run a CloudWatch Logs Insights sample query 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation pane, choose Insights. 3. The query editor near the top of the screen contains a default query that returns the 20 most recent log events. Above the query editor, select a log group to query. When you select a log group, CloudWatch Logs Insights automatically detects fields in the data in the log group and displays them in Discovered fields in the right pane. It also displays a bar Analyze logs with CloudWatch Logs Insights 619 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide graph of log events in this log group over time. This bar graph shows the distribution of events in the log group that matches your query and time range, not just the events displayed in the table. 4. Choose Run query. The results of the query appear. In this example, the results are the most recent 20 log events of any type. 5. To see all of the fields for one of the returned log events, choose the arrow to the left of that log event. For more information about how to run and modify CloudWatch Logs Insights queries, see Run and Modify a Sample Query. Review example queries This section contains CloudWatch Logs Insights example queries for analyzing Managed Service for Apache Flink application logs. These queries search for several example error conditions, and serve as templates for writing queries that find other error conditions. Note Replace the Region (us-west-2), Account ID (012345678901) and application name (YourApplication) in the following query examples with your application's Region and your Account ID. This topic contains the following sections: • Analyze operations: Distribution of tasks • Analyze operations: Change in parallelism • Analyze errors: Access denied • Analyze errors: Source or sink not found • Analyze errors: Application task-related failures Review example queries 620 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Analyze operations: Distribution of tasks The following CloudWatch Logs Insights query returns the number of tasks the Apache Flink Job Manager distributes between Task Managers. You need to set the query's time frame to match one job run so that the query doesn't return tasks from previous jobs. For more information about Parallelism, see Implement application scaling. fields @timestamp, message | filter message like /Deploying/ | parse message " to flink-taskmanager-*" as @tmid | stats count(*) by @tmid | sort @timestamp desc | limit 2000 The following CloudWatch Logs Insights query returns the subtasks assigned to each Task Manager. The total number of subtasks is the sum of every task's parallelism. Task parallelism is derived from operator parallelism, and is the same as the application's parallelism by default, unless you change it in code by specifying setParallelism. For more information about setting operator parallelism, see Setting the Parallelism: Operator Level in the Apache Flink documentation. fields @timestamp, @tmid, @subtask | filter message like /Deploying/ | parse message "Deploying * to flink-taskmanager-*" as @subtask, @tmid | sort @timestamp desc | limit 2000 For more information about task scheduling, see Jobs and Scheduling in the Apache Flink documentation. Analyze operations: Change in parallelism The following CloudWatch Logs Insights query returns |
analytics-java-api-192 | analytics-java-api.pdf | 192 | every task's parallelism. Task parallelism is derived from operator parallelism, and is the same as the application's parallelism by default, unless you change it in code by specifying setParallelism. For more information about setting operator parallelism, see Setting the Parallelism: Operator Level in the Apache Flink documentation. fields @timestamp, @tmid, @subtask | filter message like /Deploying/ | parse message "Deploying * to flink-taskmanager-*" as @subtask, @tmid | sort @timestamp desc | limit 2000 For more information about task scheduling, see Jobs and Scheduling in the Apache Flink documentation. Analyze operations: Change in parallelism The following CloudWatch Logs Insights query returns changes to an application's parallelism (for example, due to automatic scaling). This query also returns manual changes to the application's parallelism. For more information about automatic scaling, see the section called “Use automatic scaling”. fields @timestamp, @parallelism | filter message like /property: parallelism.default, / | parse message "default, *" as @parallelism | sort @timestamp asc Review example queries 621 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Analyze errors: Access denied The following CloudWatch Logs Insights query returns Access Denied logs. fields @timestamp, @message, @messageType | filter applicationARN like /arn:aws:kinesisanalyticsus- west-2:012345678901:application\/YourApplication/ | filter @message like /AccessDenied/ | sort @timestamp desc Analyze errors: Source or sink not found The following CloudWatch Logs Insights query returns ResourceNotFound logs. ResourceNotFound logs result if a Kinesis source or sink is not found. fields @timestamp,@message | filter applicationARN like /arn:aws:kinesisanalyticsus- west-2:012345678901:application\/YourApplication/ | filter @message like /ResourceNotFoundException/ | sort @timestamp desc Analyze errors: Application task-related failures The following CloudWatch Logs Insights query returns an application's task-related failure logs. These logs result if an application's status switches from RUNNING to RESTARTING. fields @timestamp,@message | filter applicationARN like /arn:aws:kinesisanalyticsus- west-2:012345678901:application\/YourApplication/ | filter @message like /switched from RUNNING to RESTARTING/ | sort @timestamp desc For applications using Apache Flink version 1.8.2 and prior, task-related failures will result in the application status switching from RUNNING to FAILED instead. When using Apache Flink 1.8.2 and prior, use the following query to search for application task-related failures: fields @timestamp,@message | filter applicationARN like /arn:aws:kinesisanalyticsus- west-2:012345678901:application\/YourApplication/ | filter @message like /switched from RUNNING to FAILED/ | sort @timestamp desc Review example queries 622 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metrics and dimensions in Managed Service for Apache Flink When your Managed Service for Apache Flink processes a data source, Managed Service for Apache Flink reports the following metrics and dimensions to Amazon CloudWatch. Application metrics Metric Unit Description Level Usage Notes backPress Milliseconds uredTimeM sPerSecon d* The time (in milliseconds) this task or operator is back pressured per second. Task, Operator, Parallelism *Available for Managed Service for Apache Flink applications running Flink version 1.13 only. These metrics can be useful in identifying bottlenecks in an application. busyTimeM Milliseconds sPerSecon d* The time (in milliseconds) this task or operator is busy (neither idle nor back pressured) per second. Can be NaN, if the value could not be calculated. Task, Operator, Parallelism *Available for Managed Service for Apache Flink applications running Flink version 1.13 only. These metrics can be useful in identifying Metrics and dimensions in Managed Service for Apache Flink 623 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes cpuUtiliz Percentage ation Application Overall percentage of CPU utilizati on across task managers. For example, if there are five task managers, Managed Service for Apache Flink publishes five samples of this metric per reporting interval. bottlenecks in an application. You can use this metric to monitor minimum, average, and maximum CPU utilization in your applicati on. The CPUUtiliz ation metric only accounts for CPU usage of the TaskManag er JVM process running inside the container. Application metrics 624 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes container Percentage CPUUtiliz ation Application Overall percentage of CPU utilizati on across task manager containers in Flink applicati on cluster. For example, if there are five task managers, correspon dingly there are five TaskManag er containers and Managed Service for Apache Flink publishes 2 * five samples of this metric per 1 minute reporting interval. It is calculated per container as: Total CPU time (in seconds) consumed by container * 100 / Container CPU limit (in CPUs/ seconds) The CPUUtiliz ation metric only accounts for CPU usage of the TaskManag er JVM process running inside the container . There are other component s running outside the JVM within the same container. The container CPUUtiliz ation metric gives you a Application metrics 625 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes more complete picture, including all processes in terms of CPU exhaustion at the container and failures resulting from that. |
analytics-java-api-193 | analytics-java-api.pdf | 193 | container as: Total CPU time (in seconds) consumed by container * 100 / Container CPU limit (in CPUs/ seconds) The CPUUtiliz ation metric only accounts for CPU usage of the TaskManag er JVM process running inside the container . There are other component s running outside the JVM within the same container. The container CPUUtiliz ation metric gives you a Application metrics 625 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes more complete picture, including all processes in terms of CPU exhaustion at the container and failures resulting from that. Application metrics 626 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes container Percentage MemoryUti lization Application Overall percentage of memory utilization across task manager containers in Flink applicati on cluster. For example, if there are five task managers, correspon dingly there are five TaskManag er containers and Managed Service for Apache Flink publishes 2 * five samples of this metric per 1 minute reporting interval. It is calculated per container as: Container memory usage (bytes) * 100 / Container memory limit as per pod deployment spec (in bytes) The HeapMemor yUtilizat ion and ManagedMe moryUtilz ations metrics only account for specific memory metrics like Heap Memory Usage of TaskManag er JVM or Managed Memory (memory usage outside JVM for native processes like Application metrics 627 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes RocksDB State Backend). The container MemoryUti lization metric gives you a more complete picture by including the working set memory, which is a better tracker of total memory exhaustio n. Upon its exhaustion, it will result in Out of Memory Error for the TaskManager pod. Application metrics 628 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes container Percentage DiskUtili zation Application It is calculated per container as: Disk usage in bytes * 100 / Disk Limit for container in bytes For container s, it represent s utilization of the filesystem on which root volume of the container is set up. Overall percentage of disk utilizati on across task manager containers in Flink applicati on cluster. For example, if there are five task managers, correspon dingly there are five TaskManag er containers and Managed Service for Apache Flink publishes 2 * five samples of this metric per 1 minute reporting interval. Application metrics 629 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes currentIn Milliseconds putWaterm ark The last watermark Application, Operator, Task, This record is only emitted this applicati Parallelism for dimension s with two inputs. This is the minimum value of the last received watermarks. Application, Operator, Task, Parallelism on/operator/ task/thread has received The last watermark this applicati on/operator/ task/thread has emitted currentOu Milliseconds tputWater mark Application metrics 630 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes downtime Milliseconds Application For jobs currently in a failing/r ecovering situation, the time elapsed during this outage. This metric measures the time elapsed while a job is failing or recovering. This metric returns 0 for running jobs and -1 for completed jobs. If this metric is not 0 or -1, this indicates that the Apache Flink job for the application failed to run. Application metrics 631 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes fullResta Count rts Application The total number of times this job has fully restarted since it was submitted. This metric does not measure fine-grained restarts. You can use this metric to evaluate general applicati on health. Restarts can occur during internal maintenance by Managed Service for Apache Flink. Restarts higher than normal can indicate a problem with the applicati on. Application metrics 632 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes heapMemor Percentage yUtilizat ion Application Overall heap memory utilization across task managers. For example, if there are five task managers, Managed Service for Apache Flink publishes five samples of this metric per reporting interval. You can use this metric to monitor minimum, average, and maximum heap memory utilization in your applicati on. The HeapMemor yUtilizat ion only accounts for specific memory metrics like Heap Memory Usage of TaskManager JVM. Application metrics 633 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes idleTimeM Milliseconds sPerSecon d* lastCheck Bytes pointSize The time (in milliseconds) this task or operator is idle (has no data to process) per second. Idle time excludes back pressured time, so if the task is |
analytics-java-api-194 | analytics-java-api.pdf | 194 | publishes five samples of this metric per reporting interval. You can use this metric to monitor minimum, average, and maximum heap memory utilization in your applicati on. The HeapMemor yUtilizat ion only accounts for specific memory metrics like Heap Memory Usage of TaskManager JVM. Application metrics 633 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes idleTimeM Milliseconds sPerSecon d* lastCheck Bytes pointSize The time (in milliseconds) this task or operator is idle (has no data to process) per second. Idle time excludes back pressured time, so if the task is back pressured it is not idle. The total size of the last checkpoint Task, Operator, Parallelism *Available for Managed Application Service for Apache Flink applications running Flink version 1.13 only. These metrics can be useful in identifying bottlenecks in an application. You can use this metric to determine running applicati on storage utilization. If this metric is increasing in value, this may indicate that there is an issue with your applicati on, such as a memory leak or bottleneck. Application metrics 634 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes lastCheck Milliseconds pointDura tion Application The time it took to complete the last checkpoint This metric measures the time it took to complete the most recent checkpoint. If this metric is increasing in value, this may indicate that there is an issue with your applicati on, such as a memory leak or bottlenec k. In some cases, you can troublesh oot this issue by disabling checkpointing. Application metrics 635 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes managedMe Bytes moryUsed* The amount of managed Application, Operator, Task, *Available for Managed memory Parallelism Service for currently used. Apache Flink applications running Flink version 1.13 only. This relates to memory managed by Flink outside the Java heap. It is used for the RocksDB state backend, and is also available to applications. Application metrics 636 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes managedMe Bytes moryTotal * The total amount of managed memory. Application, Operator, Task, *Available for Managed Parallelism Service for Apache Flink applications running Flink version 1.13 only. This relates to memory managed by Flink outside the Java heap. It is used for the RocksDB state backend, and is also available to applicati ons. The ManagedMe moryUtilz ations metric only accounts for specific memory metrics like Managed Memory (memory usage outside JVM for native processes like Application metrics 637 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes RocksDB State Backend) managedMe Percentage moryUtili zation* Derived by managedMe Application, Operator, Task, *Available for Managed moryUsed/ Parallelism Service for managedMe moryTotal Apache Flink applications running Flink version 1.13 only. This relates to memory managed by Flink outside the Java heap. It is used for the RocksDB state backend, and is also available to applications. Application metrics 638 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes numberOfF Count ailedChec kpoints Application The number of times checkpointing has failed. You can use this metric to monitor application health and progress. Checkpoints may fail due to application problems, such as throughput or permissions issues. Application metrics 639 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes numRecord Count sIn* The total number of Application, Operator, Task, *To apply the SUM records this Parallelism statistic over application, operator, or task has received. a period of time (second/ minute): • Select the metric at the correct Level. If you’re tracking the metric for an Operator, you need to select the correspon ding operator metrics. • As Managed Service for Apache Flink takes 4 metric snapshots per minute, the following metric math should be used: m1/4 where m1 is the SUM Application metrics 640 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes statistic over a period (second/m inute) The metric's Level specifies whether this metric measures the total number of records the entire applicati on, a specific operator, or a specific task has received. Application metrics 641 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes numRecord Count/Second sInPerSec ond* The total number of Application, Operator, Task, *To apply the SUM records this Parallelism statistic over application, operator or task has received per second. a period of time (second/ minute): • Select the metric at the correct Level. If you’re tracking the metric for an Operator, you need to select the |
analytics-java-api-195 | analytics-java-api.pdf | 195 | Level specifies whether this metric measures the total number of records the entire applicati on, a specific operator, or a specific task has received. Application metrics 641 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes numRecord Count/Second sInPerSec ond* The total number of Application, Operator, Task, *To apply the SUM records this Parallelism statistic over application, operator or task has received per second. a period of time (second/ minute): • Select the metric at the correct Level. If you’re tracking the metric for an Operator, you need to select the correspon ding operator metrics. • As Managed Service for Apache Flink takes 4 metric snapshots per minute, the following metric math should be used: m1/4 where m1 is the SUM Application metrics 642 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes statistic over a period (second/m inute) The metric's Level specifies whether this metric measures the total number of records the entire applicati on, a specific operator, or a specific task has received per second. Application metrics 643 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes numRecord Count sOut* The total number of Application, Operator, Task, *To apply the SUM records this Parallelism statistic over application, operator or task has emitted. a period of time (second/ minute): • Select the metric at the correct Level. If you’re tracking the metric for an Operator, you need to select the correspon ding operator metrics. • As Managed Service for Apache Flink takes 4 metric snapshots per minute, the following metric math should be used: m1/4 where m1 is the SUM Application metrics 644 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes statistic over a period (second/m inute) The metric's Level specifies whether this metric measures the total number of records the entire applicati on, a specific operator, or a specific task has emitted. Application metrics 645 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes numLateRe Count cordsDrop ped* Application, Operator, Task, Parallelism *To apply the SUM statistic over a period of time (second/ minute): • Select the metric at the correct Level. If you’re tracking the metric for an Operator, you need to select the correspon ding operator metrics. • As Managed Service for Apache Flink takes 4 metric snapshots per minute, the following metric math should be used: m1/4 where m1 is the SUM Application metrics 646 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes statistic over a period (second/m inute) The number of records this operator or task has dropped due to arriving late. Application metrics 647 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes numRecord Count/Second sOutPerSe cond* The total number of Application, Operator, Task, *To apply the SUM records this Parallelism statistic over application, operator or task has emitted per second. a period of time (second/ minute): • Select the metric at the correct Level. If you’re tracking the metric for an Operator, you need to select the correspon ding operator metrics. • As Managed Service for Apache Flink takes 4 metric snapshots per minute, the following metric math should be used: m1/4 where m1 is the SUM Application metrics 648 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes statistic over a period (second/m inute) The metric's Level specifies whether this metric measures the total number of records the entire applicati on, a specific operator, or a specific task has emitted per second. oldGenera Count tionGCCou nt Application The total number of old garbage collection operation s that have occurred across all task managers. Application metrics 649 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes oldGenera Milliseconds tionGCTim e threadCou Count nt Application Application The total time spent performing old garbage collection operations. The total number of live threads used by the application. uptime Milliseconds Application The time that the job has been running without interruption. You can use this metric to monitor sum, average, and maximum garbage collection time. This metric measures the number of threads used by the application code. This is not the same as application parallelism. You can use this metric to determine if a job is running successfully. This metric returns -1 for completed jobs. Application metrics 650 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric KPUs* Unit Count Description Level Usage Notes Application The total number of |
analytics-java-api-196 | analytics-java-api.pdf | 196 | threads used by the application. uptime Milliseconds Application The time that the job has been running without interruption. You can use this metric to monitor sum, average, and maximum garbage collection time. This metric measures the number of threads used by the application code. This is not the same as application parallelism. You can use this metric to determine if a job is running successfully. This metric returns -1 for completed jobs. Application metrics 650 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric KPUs* Unit Count Description Level Usage Notes Application The total number of KPUs used by the applicati on. *This metric receives one sample per billing period (one hour). To visualize the number of KPUs over time, use MAX or AVG over a period of at least one (1) hour. The KPU count includes the orchestra tion KPU. For more information, see Managed Service for Apache Flink Pricing. Kinesis Data Streams connector metrics AWS emits all records for Kinesis Data Streams in addition to the following: Metric Unit Description Level Usage Notes millisbeh Milliseconds indLatest The number of milliseconds the consumer Application (for Stream), • A value of 0 indicates that record Kinesis Data Streams connector metrics 651 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes is behind the Parallelism (for processing ShardId) head of the stream, indicatin g how far behind current time the consumer is. is caught up, and there are no new records to process at this moment. A particula r shard's metric can be specified by stream name and shard id. • A value of -1 indicates that the service has not yet reported a value for the metric. Bytes bytesRequ estedPerF etch The bytes requested in a Application (for Stream), single call to Parallelism (for getRecords . ShardId) Amazon MSK connector metrics AWS emits all records for Amazon MSK in addition to the following: Metric currentof fsets Unit N/A Description Level Usage Notes The consumer' s current read Application (for Topic), Paralleli Amazon MSK connector metrics 652 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Level Usage Notes offset, for sm (for Partition Id) each partition . A particula r partition's metric can be specified by topic name and partition id. commitsFa N/A iled The total number of offset Application, Operator, Task, Committing offsets back to Parallelism commit failures to Kafka, if offset committin g and checkpoin ting are enabled. Kafka is only a means to expose consumer progress, so a commit failure does not affect the integrity of Flink's checkpoin ted partition offsets. commitsSu N/A cceeded The total number of Application, Operator, Task, successful offset Parallelism commits to Kafka, if offset committing and checkpointing are enabled. Amazon MSK connector metrics 653 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric committed offsets Unit N/A Description Level Usage Notes Application (for Topic), Paralleli sm (for Partition Id) The last successfully committed offsets to Kafka, for each partition . A particula r partition's metric can be specified by topic name and partition id. records_l Count ag_max The maximum lag in terms Application, Operator, Task, of number of Parallelism bytes_con sumed_rate Bytes records for any partition in this window The average number of bytes Application, Operator, Task, consumed per Parallelism second for a topic Apache Zeppelin metrics For Studio notebooks, AWS emits the following metrics at the application level: KPUs, cpuUtilization, heapMemoryUtilization, oldGenerationGCTime, oldGenerationGCCount, and threadCount. In addition, it emits the metrics shown in the following table, also at the application level. Apache Zeppelin metrics 654 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Unit Description Prometheus name zeppelinC puUtilization Percentage zeppelinH eapMemory Utilization Percentage zeppelinT hreadCount zeppelinW aitingJobs Count Count process_c pu_usage jvm_memor y_used_bytes Overall percentage of CPU utilization in the Apache Zeppelin server. Overall percentag e of heap memory utilization for the Apache Zeppelin server. The total number of live threads used by jvm_threa ds_live_t the Apache Zeppelin hreads server. The number of queued Apache jetty_thr eads_jobs Zeppelin jobs waiting for a thread. zeppelinS erverUptime Seconds The total time that the server has been process_u ptime_seconds up and running. View CloudWatch metrics You can view CloudWatch metrics for your application using the Amazon CloudWatch console or the AWS CLI. To view metrics using the CloudWatch console 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation pane, choose Metrics. View CloudWatch metrics 655 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 3. In the CloudWatch Metrics by Category pane for Managed Service for Apache Flink, choose a metrics category. 4. In the upper pane, scroll to view the full list of metrics. To view metrics using the |
analytics-java-api-197 | analytics-java-api.pdf | 197 | has been process_u ptime_seconds up and running. View CloudWatch metrics You can view CloudWatch metrics for your application using the Amazon CloudWatch console or the AWS CLI. To view metrics using the CloudWatch console 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation pane, choose Metrics. View CloudWatch metrics 655 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 3. In the CloudWatch Metrics by Category pane for Managed Service for Apache Flink, choose a metrics category. 4. In the upper pane, scroll to view the full list of metrics. To view metrics using the AWS CLI • At a command prompt, use the following command. aws cloudwatch list-metrics --namespace "AWS/KinesisAnalytics" --region region Set CloudWatch metrics reporting levels You can control the level of application metrics that your application creates. Managed Service for Apache Flink supports the following metrics levels: • Application: The application only reports the highest level of metrics for each application. Managed Service for Apache Flink metrics are published at the Application level by default. • Task: The application reports task-specific metric dimensions for metrics defined with the Task metric reporting level, such as number of records in and out of the application per second. • Operator: The application reports operator-specific metric dimensions for metrics defined with the Operator metric reporting level, such as metrics for each filter or map operation. • Parallelism: The application reports Task and Operator level metrics for each execution thread. This reporting level is not recommended for applications with a Parallelism setting above 64 due to excessive costs. Note You should only use this metric level for troubleshooting because of the amount of metric data that the service generates. You can only set this metric level using the CLI. This metric level is not available in the console. The default level is Application. The application reports metrics at the current level and all higher levels. For example, if the reporting level is set to Operator, the application reports Application, Task, and Operator metrics. Set CloudWatch metrics reporting levels 656 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide You set the CloudWatch metrics reporting level using the MonitoringConfiguration parameter of the CreateApplication action, or the MonitoringConfigurationUpdate parameter of the UpdateApplication action. The following example request for the UpdateApplication action sets the CloudWatch metrics reporting level to Task: { "ApplicationName": "MyApplication", "CurrentApplicationVersionId": 4, "ApplicationConfigurationUpdate": { "FlinkApplicationConfigurationUpdate": { "MonitoringConfigurationUpdate": { "ConfigurationTypeUpdate": "CUSTOM", "MetricsLevelUpdate": "TASK" } } } } You can also configure the logging level using the LogLevel parameter of the CreateApplication action or the LogLevelUpdate parameter of the UpdateApplication action. You can use the following log levels: • ERROR: Logs potentially recoverable error events. • WARN: Logs warning events that might lead to an error. • INFO: Logs informational events. • DEBUG: Logs general debugging events. For more information about Log4j logging levels, see Custom Log Levels in the Apache Log4j documentation. Use custom metrics with Amazon Managed Service for Apache Flink Managed Service for Apache Flink exposes 19 metrics to CloudWatch, including metrics for resource usage and throughput. In addition, you can create your own metrics to track application- specific data, such as processing events or accessing external resources. This topic contains the following sections: • How it works Use custom metrics with Amazon Managed Service for Apache Flink 657 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • View examples for creating a mapping class • View custom metrics How it works Custom metrics in Managed Service for Apache Flink use the Apache Flink metric system. Apache Flink metrics have the following attributes: • Type: A metric's type describes how it measures and reports data. Available Apache Flink metric types include Count, Gauge, Histogram, and Meter. For more information about Apache Flink metric types, see Metric Types. Note AWS CloudWatch Metrics does not support the Histogram Apache Flink metric type. CloudWatch can only display Apache Flink metrics of the Count, Gauge, and Meter types. • Scope: A metric's scope consists of its identifier and a set of key-value pairs that indicate how the metric will be reported to CloudWatch. A metric's identifier consists of the following: • A system scope, which indicates the level at which the metric is reported (e.g. Operator). • A user scope, that defines attributes such as user variables or the metric group names. These attributes are defined using MetricGroup.addGroup(key, value) or MetricGroup.addGroup(name). For more information about metric scope, see Scope. For more information about Apache Flink metrics, see Metrics in the Apache Flink documentation. To create a custom metric in your Managed Service for Apache Flink, you can access the Apache Flink metric system from any user function that extends RichFunction by calling GetMetricGroup. This method returns a MetricGroup object you can use to |
analytics-java-api-198 | analytics-java-api.pdf | 198 | scope, which indicates the level at which the metric is reported (e.g. Operator). • A user scope, that defines attributes such as user variables or the metric group names. These attributes are defined using MetricGroup.addGroup(key, value) or MetricGroup.addGroup(name). For more information about metric scope, see Scope. For more information about Apache Flink metrics, see Metrics in the Apache Flink documentation. To create a custom metric in your Managed Service for Apache Flink, you can access the Apache Flink metric system from any user function that extends RichFunction by calling GetMetricGroup. This method returns a MetricGroup object you can use to create and register custom metrics. Managed Service for Apache Flink reports all metrics created with the group key KinesisAnalytics to CloudWatch. Custom metrics that you define have the following characteristics: • Your custom metric has a metric name and a group name. These names must consist of alphanumeric characters according to Prometheus naming rules. Use custom metrics with Amazon Managed Service for Apache Flink 658 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Attributes that you define in user scope (except for the KinesisAnalytics metric group) are published as CloudWatch dimensions. • Custom metrics are published at the Application level by default. • Dimensions (Task/ Operator/ Parallelism) are added to the metric based on the application's monitoring level. You set the application's monitoring level using the MonitoringConfiguration parameter of the CreateApplication action, or the or MonitoringConfigurationUpdate parameter of the UpdateApplication action. View examples for creating a mapping class The following code examples demonstrate how to create a mapping class that creates and increments a custom metric, and how to implement the mapping class in your application by adding it to a DataStream object. Record count custom metric The following code example demonstrates how to create a mapping class that creates a metric that counts records in a data stream (the same functionality as the numRecordsIn metric): private static class NoOpMapperFunction extends RichMapFunction<String, String> { private transient int valueToExpose = 0; private final String customMetricName; public NoOpMapperFunction(final String customMetricName) { this.customMetricName = customMetricName; } @Override public void open(Configuration config) { getRuntimeContext().getMetricGroup() .addGroup("KinesisAnalytics") .addGroup("Program", "RecordCountApplication") .addGroup("NoOpMapperFunction") .gauge(customMetricName, (Gauge<Integer>) () -> valueToExpose); } @Override public String map(String value) throws Exception { valueToExpose++; return value; } Use custom metrics with Amazon Managed Service for Apache Flink 659 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide } In the preceding example, the valueToExpose variable is incremented for each record that the application processes. After defining your mapping class, you then create an in-application stream that implements the map: DataStream<String> noopMapperFunctionAfterFilter = kinesisProcessed.map(new NoOpMapperFunction("FilteredRecords")); For the complete code for this application, see Record Count Custom Metric Application. Word count custom metric The following code example demonstrates how to create a mapping class that creates a metric that counts words in a data stream: private static final class Tokenizer extends RichFlatMapFunction<String, Tuple2<String, Integer>> { private transient Counter counter; @Override public void open(Configuration config) { this.counter = getRuntimeContext().getMetricGroup() .addGroup("KinesisAnalytics") .addGroup("Service", "WordCountApplication") .addGroup("Tokenizer") .counter("TotalWords"); } @Override public void flatMap(String value, Collector<Tuple2<String, Integer>>out) { // normalize and split the line String[] tokens = value.toLowerCase().split("\\W+"); // emit the pairs for (String token : tokens) { if (token.length() > 0) { counter.inc(); out.collect(new Tuple2<>(token, 1)); } Use custom metrics with Amazon Managed Service for Apache Flink 660 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide } } } In the preceding example, the counter variable is incremented for each word that the application processes. After defining your mapping class, you then create an in-application stream that implements the map: // Split up the lines in pairs (2-tuples) containing: (word,1), and // group by the tuple field "0" and sum up tuple field "1" DataStream<Tuple2<String, Integer>> wordCountStream = input.flatMap(new Tokenizer()).keyBy(0).sum(1); // Serialize the tuple to string format, and publish the output to kinesis sink wordCountStream.map(tuple -> tuple.toString()).addSink(createSinkFromStaticConfig()); For the complete code for this application, see Word Count Custom Metric Application. View custom metrics Custom metrics for your application appear in the CloudWatch Metrics console in the AWS/ KinesisAnalytics dashboard, under the Application metric group. Use CloudWatch Alarms with Amazon Managed Service for Apache Flink Using Amazon CloudWatch metric alarms, you watch a CloudWatch metric over a time period that you specify. The alarm performs one or more actions based on the value of the metric or expression relative to a threshold over a number of time periods. An example of an action is sending a notification to an Amazon Simple Notification Service (Amazon SNS) topic. For more information about CloudWatch alarms, see Using Amazon CloudWatch Alarms. Review recommended alarms This section contains the recommended alarms for monitoring Managed Service for Apache Flink applications. The table describes the recommended alarms and has the following columns: Use CloudWatch Alarms with Amazon Managed |
analytics-java-api-199 | analytics-java-api.pdf | 199 | watch a CloudWatch metric over a time period that you specify. The alarm performs one or more actions based on the value of the metric or expression relative to a threshold over a number of time periods. An example of an action is sending a notification to an Amazon Simple Notification Service (Amazon SNS) topic. For more information about CloudWatch alarms, see Using Amazon CloudWatch Alarms. Review recommended alarms This section contains the recommended alarms for monitoring Managed Service for Apache Flink applications. The table describes the recommended alarms and has the following columns: Use CloudWatch Alarms with Amazon Managed Service for Apache Flink 661 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Metric Expression: The metric or metric expression to test against the threshold. • Statistic: The statistic used to check the metric—for example, Average. • Threshold: Using this alarm requires you to determine a threshold that defines the limit of expected application performance. You need to determine this threshold by monitoring your application under normal conditions. • Description: Causes that might trigger this alarm, and possible solutions for the condition. Metric Expression Statistic Threshold Description downtime > 0 Average 0 RATE (numberOf Average 0 FailedChe ckpoints) > 0 A downtime greater than zero indicates that the applicati on has failed. If the value is larger than 0, the application is not processing any data. Recommend ed for all applicati ons. The Downtime metric measures the duration of an outage. A downtime greater than zero indicates that the application has failed. For troubleshooting, see Application is restarting. This metric counts the number of failed checkpoints since the application started. Depending on the application, it can be tolerable if checkpoin Use CloudWatch Alarms with Amazon Managed Service for Apache Flink 662 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Expression Statistic Threshold Description ts fail occasionally. But if checkpoints are regularly failing, the application is likely unhealthy and needs further attention . We recommend monitoring RATE(numberOfFaile dCheckpoints) to alarm on the gradient and not on absolute values. Recommended for all applications. Use this metric to monitor application health and checkpoin ting progress. The application saves state data to checkpoints when it's healthy. Checkpoin ting can fail due to timeouts if the application isn't making progress in processing the input data. For troublesh ooting, see Checkpoin ting is timing out. Use CloudWatch Alarms with Amazon Managed Service for Apache Flink 663 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Expression Statistic Threshold Description Average Operator. numRecord sOutPerSecond threshold < The minimum number of records Recommended for all applications. Falling emitted from the below this threshold application during can indicate that normal conditions. the application isn't making expected progress on the input data. For troubleshooting, see Throughput is too slow. Use CloudWatch Alarms with Amazon Managed Service for Apache Flink 664 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Expression Statistic Threshold Description records_l ag_max|mi llisbehin dLatest > threshold Maximum The maximum expected latency If the application is consuming from during normal Kinesis or Kafka, conditions. these metrics indicate if the application is falling behind and needs to be scaled in order to keep up with the current load. This is a good generic metric that is easy to track for all kinds of applicati ons. But it can only be used for reactive scaling, i.e., when the application has already fallen behind. Recommended for all applications. Use the records_l ag_max metric for a Kafka source, or the millisbeh indLatest for a Kinesis stream source. Rising above this threshold can indicate that the application isn't making expected progress on the input data. For troubleshooting, see Use CloudWatch Alarms with Amazon Managed Service for Apache Flink 665 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Expression Statistic Threshold Description Throughput is too slow. Use CloudWatch Alarms with Amazon Managed Service for Apache Flink 666 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Expression Statistic Threshold Description lastCheck Maximum pointDuration threshold > The maximum expected checkpoin Monitors how much data is stored in state t duration during and how long it takes normal conditions. to take a checkpoin t. If checkpoints grow or take long, the application is continuously spending time on checkpointing and has less cycles for actual processin g. At some points, checkpoints may grow too large or take so long that they fail. In addition to monitoring absolute values, customers should also consideri ng monitoring the change rate with RATE(last Checkpoin tSize) and RATE(last Checkpoin tDuration) If the lastCheck . pointDura tion continuou sly increases, rising above this threshold can indicate that Use CloudWatch Alarms with Amazon Managed Service for Apache |
analytics-java-api-200 | analytics-java-api.pdf | 200 | t duration during and how long it takes normal conditions. to take a checkpoin t. If checkpoints grow or take long, the application is continuously spending time on checkpointing and has less cycles for actual processin g. At some points, checkpoints may grow too large or take so long that they fail. In addition to monitoring absolute values, customers should also consideri ng monitoring the change rate with RATE(last Checkpoin tSize) and RATE(last Checkpoin tDuration) If the lastCheck . pointDura tion continuou sly increases, rising above this threshold can indicate that Use CloudWatch Alarms with Amazon Managed Service for Apache Flink 667 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Expression Statistic Threshold Description the application isn't making expected progress on the input data, or that there are problems with application health such as backpressure. For troubleshooting, see Unbounded state growth. Use CloudWatch Alarms with Amazon Managed Service for Apache Flink 668 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Expression Statistic Threshold Description lastCheck pointSize > threshold Maximum The maximum expected checkpoin Monitors how much data is stored in state t size during normal and how long it takes conditions. to take a checkpoin t. If checkpoints grow or take long, the application is continuously spending time on checkpointing and has less cycles for actual processin g. At some points, checkpoints may grow too large or take so long that they fail. In addition to monitoring absolute values, customers should also consideri ng monitoring the change rate with RATE(last Checkpoin tSize) and RATE(last Checkpoin tDuration) If the lastCheck . pointSize continuously increases, rising above this threshold can indicate that Use CloudWatch Alarms with Amazon Managed Service for Apache Flink 669 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Expression Statistic Threshold Description heapMemor Maximum yUtilization threshold > the application is accumulating state data. If the state data becomes too large, the application can run out of memory when recovering from a checkpoint, or recovering from a checkpoint might take too long. For troubleshooting, see Unbounded state growth. This gives a good indication of the You can use this metric to monitor the overall resource maximum memory utilization of the utilization of task application and can managers across the be used for proactive application. If the scaling unless the application reaches application is I/O this threshold, you bound. The maximum need to provision expected heapMemor more resources yUtilization size during normal conditions, with a recommended value of 90 percent. . You do this by enabling automatic scaling or increasin g the application parallelism. For more information about increasing resources , see Implement application scaling. Use CloudWatch Alarms with Amazon Managed Service for Apache Flink 670 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Expression Statistic Threshold Description cpuUtilization > threshold Maximum This gives a good indication of the You can use this metric to monitor overall resource the maximum CPU utilization of the utilization of task application and can managers across the be used for proactive application. If the scaling unless the application reaches application is I/O this threshold, you bound. The maximum need to provision expected cpuUtiliz ation size during normal conditions, more resources You do this by enabling automatic with a recommended scaling or increasin value of 80 percent. threadsCount > threshold Maximum The maximum expected threadsCo unt size during normal conditions. g the application parallelism. For more information about increasing resources , see Implement application scaling. You can use this metric to watch for thread leaks in task managers across the application. If this metric reaches this threshold, check your application code for threads being created without being closed. Use CloudWatch Alarms with Amazon Managed Service for Apache Flink 671 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Expression Statistic Threshold Description (oldGarba geCollect ionTime * 100)/60_000 over 1 min period') > threshold Maximum The maximum expected oldGarbag eCollecti onTime duration. We recommend If this metric is continually increasin g, this can indicate that there is a memory leak in task setting a threshold managers across the such that typical application. garbage collection time is 60 percent of the specified threshold, but the correct threshold for your application will vary. RATE(oldG arbageCol lectionCount) > threshold Maximum The maximum expected oldGarbag eCollecti onCount under normal conditions. If this metric is continually increasin g, this can indicate that there is a memory leak in task The correct threshold managers across the for your application application. will vary. Use CloudWatch Alarms with Amazon Managed Service for Apache Flink 672 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Expression Statistic Threshold Description Minimum Operator. currentOu tputWatermark - Operator. currentIn putWatermark threshold |
analytics-java-api-201 | analytics-java-api.pdf | 201 | collection time is 60 percent of the specified threshold, but the correct threshold for your application will vary. RATE(oldG arbageCol lectionCount) > threshold Maximum The maximum expected oldGarbag eCollecti onCount under normal conditions. If this metric is continually increasin g, this can indicate that there is a memory leak in task The correct threshold managers across the for your application application. will vary. Use CloudWatch Alarms with Amazon Managed Service for Apache Flink 672 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Metric Expression Statistic Threshold Description Minimum Operator. currentOu tputWatermark - Operator. currentIn putWatermark threshold > The minimum expected watermark If this metric is continually increasin increment under g, this can indicate normal conditions. that either the The correct threshold application is for your application processing increasin will vary. gly older events, or that an upstream subtask has not sent a watermark in an increasingly long time. Write custom messages to CloudWatch Logs You can write custom messages to your Managed Service for Apache Flink application's CloudWatch log. You do this by using the Apache log4j library or the Simple Logging Facade for Java (SLF4J) library. Topics • Write to CloudWatch logs using Log4J • Write to CloudWatch logs using SLF4J Write to CloudWatch logs using Log4J 1. Add the following dependencies to your application's pom.xml file: <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-api</artifactId> <version>2.6.1</version> </dependency> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-core</artifactId> Write custom messages to CloudWatch Logs 673 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide <version>2.6.1</version> </dependency> 2. Include the object from the library: import org.apache.logging.log4j.Logger; 3. Instantiate the Logger object, passing in your application class: private static final Logger log = LogManager.getLogger.getLogger(YourApplicationClass.class); 4. Write to the log using log.info. A large number of messages are written to the application log. To make your custom messages easier to filter, use the INFO application log level. log.info("This message will be written to the application's CloudWatch log"); The application writes a record to the log with a message similar to the following: { "locationInformation": "com.amazonaws.services.managed- flink.StreamingJob.main(StreamingJob.java:95)", "logger": "com.amazonaws.services.managed-flink.StreamingJob", "message": "This message will be written to the application's CloudWatch log", "threadName": "Flink-DispatcherRestEndpoint-thread-2", "applicationARN": "arn:aws:kinesisanalyticsus-east-1:123456789012:application/test", "applicationVersionId": "1", "messageSchemaVersion": "1", "messageType": "INFO" } Write to CloudWatch logs using SLF4J 1. Add the following dependency to your application's pom.xml file: <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> <version>1.7.7</version> <scope>runtime</scope> </dependency> Write to CloudWatch logs using SLF4J 674 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 2. Include the objects from the library: import org.slf4j.Logger; import org.slf4j.LoggerFactory; 3. Instantiate the Logger object, passing in your application class: private static final Logger log = LoggerFactory.getLogger(YourApplicationClass.class); 4. Write to the log using log.info. A large number of messages are written to the application log. To make your custom messages easier to filter, use the INFO application log level. log.info("This message will be written to the application's CloudWatch log"); The application writes a record to the log with a message similar to the following: { "locationInformation": "com.amazonaws.services.managed- flink.StreamingJob.main(StreamingJob.java:95)", "logger": "com.amazonaws.services.managed-flink.StreamingJob", "message": "This message will be written to the application's CloudWatch log", "threadName": "Flink-DispatcherRestEndpoint-thread-2", "applicationARN": "arn:aws:kinesisanalyticsus-east-1:123456789012:application/test", "applicationVersionId": "1", "messageSchemaVersion": "1", "messageType": "INFO" } Log Managed Service for Apache Flink API calls with AWS CloudTrail Managed Service for Apache Flink is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in Managed Service for Apache Flink. CloudTrail captures all API calls for Managed Service for Apache Flink as events. The calls captured include calls from the Managed Service for Apache Flink console and code calls to the Managed Service for Apache Flink API operations. If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for Managed Service for Apache Flink. If you don't configure a trail, you can still view the most recent events in the CloudTrail console in Log Managed Service for Apache Flink API calls with AWS CloudTrail 675 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Event history. Using the information collected by CloudTrail, you can determine the request that was made to Managed Service for Apache Flink, the IP address from which the request was made, who made the request, when it was made, and additional details. To learn more about CloudTrail, see the AWS CloudTrail User Guide. Managed Service for Apache Flink information in CloudTrail CloudTrail is enabled on your AWS account when you create the account. When activity occurs in Managed Service for Apache Flink, that activity is recorded in a CloudTrail event along with other AWS service events in Event history. You can view, search, and download recent events in your AWS account. For more information, see Viewing Events with CloudTrail Event History. For an ongoing record of events in your AWS account, including events |
analytics-java-api-202 | analytics-java-api.pdf | 202 | the request, when it was made, and additional details. To learn more about CloudTrail, see the AWS CloudTrail User Guide. Managed Service for Apache Flink information in CloudTrail CloudTrail is enabled on your AWS account when you create the account. When activity occurs in Managed Service for Apache Flink, that activity is recorded in a CloudTrail event along with other AWS service events in Event history. You can view, search, and download recent events in your AWS account. For more information, see Viewing Events with CloudTrail Event History. For an ongoing record of events in your AWS account, including events for Managed Service for Apache Flink, create a trail. A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. By default, when you create a trail in the console, the trail applies to all AWS Regions. The trail logs events from all Regions in the AWS partition and delivers the log files to the Amazon S3 bucket that you specify. Additionally, you can configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs. For more information, see the following: • Overview for Creating a Trail • CloudTrail Supported Services and Integrations • Configuring Amazon SNS Notifications for CloudTrail • Receiving CloudTrail Log Files from Multiple Regions and Receiving CloudTrail Log Files from Multiple Accounts All Managed Service for Apache Flink actions are logged by CloudTrail and are documented in the Managed Service for Apache Flink API reference. For example, calls to the CreateApplication and UpdateApplication actions generate entries in the CloudTrail log files. Every event or log entry contains information about who generated the request. The identity information helps you determine the following: • Whether the request was made with root or AWS Identity and Access Management (IAM) user credentials. • Whether the request was made with temporary security credentials for a role or federated user. • Whether the request was made by another AWS service. Managed Service for Apache Flink information in CloudTrail 676 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide For more information, see the CloudTrail userIdentity Element. Understand Managed Service for Apache Flink log file entries A trail is a configuration that enables delivery of events as log files to an Amazon S3 bucket that you specify. CloudTrail log files contain one or more log entries. An event represents a single request from any source and includes information about the requested action, the date and time of the action, request parameters, and so on. CloudTrail log files aren't an ordered stack trace of the public API calls, so they don't appear in any specific order. The following example shows a CloudTrail log entry that demonstrates the AddApplicationCloudWatchLoggingOption and DescribeApplication actions. { "Records": [ { "eventVersion": "1.05", "userIdentity": { "type": "IAMUser", "principalId": "EX_PRINCIPAL_ID", "arn": "arn:aws:iam::012345678910:user/Alice", "accountId": "012345678910", "accessKeyId": "EXAMPLE_KEY_ID", "userName": "Alice" }, "eventTime": "2019-03-07T01:19:47Z", "eventSource": "kinesisanlaytics.amazonaws.com", "eventName": "AddApplicationCloudWatchLoggingOption", "awsRegion": "us-east-1", "sourceIPAddress": "127.0.0.1", "userAgent": "aws-sdk-java/unknown-version Linux/x.xx", "requestParameters": { "applicationName": "cloudtrail-test", "currentApplicationVersionId": 1, "cloudWatchLoggingOption": { "logStreamARN": "arn:aws:logs:us-east-1:012345678910:log- group:cloudtrail-test:log-stream:flink-cloudwatch" } }, "responseElements": { "cloudWatchLoggingOptionDescriptions": [ { "cloudWatchLoggingOptionId": "2.1", Understand Managed Service for Apache Flink log file entries 677 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "logStreamARN": "arn:aws:logs:us-east-1:012345678910:log- group:cloudtrail-test:log-stream:flink-cloudwatch" } ], "applicationVersionId": 2, "applicationARN": "arn:aws:kinesisanalyticsus- east-1:012345678910:application/cloudtrail-test" }, "requestID": "18dfb315-4077-11e9-afd3-67f7af21e34f", "eventID": "d3c9e467-db1d-4cab-a628-c21258385124", "eventType": "AwsApiCall", "apiVersion": "2018-05-23", "recipientAccountId": "012345678910" }, { "eventVersion": "1.05", "userIdentity": { "type": "IAMUser", "principalId": "EX_PRINCIPAL_ID", "arn": "arn:aws:iam::012345678910:user/Alice", "accountId": "012345678910", "accessKeyId": "EXAMPLE_KEY_ID", "userName": "Alice" }, "eventTime": "2019-03-12T02:40:48Z", "eventSource": "kinesisanlaytics.amazonaws.com", "eventName": "DescribeApplication", "awsRegion": "us-east-1", "sourceIPAddress": "127.0.0.1", "userAgent": "aws-sdk-java/unknown-version Linux/x.xx", "requestParameters": { "applicationName": "sample-app" }, "responseElements": null, "requestID": "3e82dc3e-4470-11e9-9d01-e789c4e9a3ca", "eventID": "90ffe8e4-9e47-48c9-84e1-4f2d427d98a5", "eventType": "AwsApiCall", "apiVersion": "2018-05-23", "recipientAccountId": "012345678910" } ] } Understand Managed Service for Apache Flink log file entries 678 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Tune performance in Amazon Managed Service for Apache Flink This topic describes techniques to monitor and improve the performance of your Managed Service for Apache Flink application. Topics • Troubleshoot performance issues • Use performance best practices • Monitor performance Troubleshoot performance issues This section contains a list of symptoms that you can check to diagnose and fix performance issues. If your data source is a Kinesis stream, performance issues typically present as a high or increasing millisbehindLatest metric. For other sources, you can check a similar metric that represents lag in reading from the source. Understand the data path When investigating a performance issue with your application, consider the entire path that your data takes. The following application components may become performance bottlenecks and create backpressure if they are not properly designed or provisioned: • Data sources and destinations: Ensure that the external resources your application interacts with are properly provisioned for the throughput your application will experience. • State data: Ensure that your application doesn't interact with the state |
analytics-java-api-203 | analytics-java-api.pdf | 203 | present as a high or increasing millisbehindLatest metric. For other sources, you can check a similar metric that represents lag in reading from the source. Understand the data path When investigating a performance issue with your application, consider the entire path that your data takes. The following application components may become performance bottlenecks and create backpressure if they are not properly designed or provisioned: • Data sources and destinations: Ensure that the external resources your application interacts with are properly provisioned for the throughput your application will experience. • State data: Ensure that your application doesn't interact with the state store too frequently. You can optimize the serializer your application is using. The default Kryo serializer can handle any serializable type, but you can use a more performant serializer if your application only stores data in POJO types. For information about Apache Flink serializers, see Data Types & Serialization in the Apache Flink documentation. • Operators: Ensure that the business logic implemented by your operators isn't too complicated, or that you aren't creating or using resources with every record processed. Also ensure that your application isn't creating sliding or tumbling windows too frequently. Troubleshoot performance issues 679 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Performance troubleshooting solutions This section contains potential solutions to performance issues. Topics • CloudWatch monitoring levels • Application CPU metric • Application parallelism • Application logging • Operator parallelism • Application logic • Application memory CloudWatch monitoring levels Verify that the CloudWatch Monitoring Levels are not set to too verbose a setting. The Debug Monitoring Log Level setting generates a large amount of traffic, which can create backpressure. You should only use it while actively investigating issues with the application. If your application has a high Parallelism setting, using the Parallelism Monitoring Metrics Level will similarly generate a large amount of traffic that can lead to backpressure. Only use this metrics level when Parallelism for your application is low, or while investigating issues with the application. For more information, see Control application monitoring levels. Application CPU metric Check the application's CPU metric. If this metric is above 75 percent, you can allow the application to allocate more resources for itself by enabling auto scaling. If auto scaling is enabled, the application allocates more resources if CPU usage is over 75 percent for 15 minutes. For more information about scaling, see the Manage scaling properly section following, and the Implement application scaling. Performance troubleshooting solutions 680 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note An application will only scale automatically in response to CPU usage. The application will not auto scale in response to other system metrics, such as heapMemoryUtilization. If your application has a high level of usage for other metrics, increase your application's parallelism manually. Application parallelism Increase the application's parallelism. You update the application's parallelism using the ParallelismConfigurationUpdate parameter of the UpdateApplication action. The maximum KPUs for an application is 64 by default, and can be increased by requesting a limit increase. It is important to also assign parallelism to each operator based on its workload, rather than just increasing application parallelism alone. See Operator parallelism following. Application logging Check if the application is logging an entry for every record being processed. Writing a log entry for each record during times when the application has high throughput will cause severe bottlenecks in data processing. To check for this condition, query your logs for log entries that your application writes with every record it processes. For more information about reading application logs, see the section called “Analyze logs with CloudWatch Logs Insights”. Operator parallelism Verify that your application's workload is distributed evenly among worker processes. For information about tuning the workload of your application's operators, see Operator scaling. Application logic Examine your application logic for inefficient or non-performant operations, such as accessing an external dependency (such as a database or a web service), accessing application state, etc. An external dependency can also hinder performance if it is not performant or not reliably accessible, which may lead to the external dependency returing HTTP 500 errors. Performance troubleshooting solutions 681 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide If your application uses an external dependency to enrich or otherwise process incoming data, consider using asynchronous IO instead. For more information, see Async I/O in the Apache Flink documentation. Application memory Check your application for resource leaks. If your application is not properly disposing of threads or memory, you might see the millisbehindLatest, CheckpointSize, and CheckpointDurationmetric spiking or gradually increasing. This condition may also lead to task manager or job manager failures. Use performance best practices This section describes special considerations for designing an application for performance. Manage scaling properly This section contains information |
analytics-java-api-204 | analytics-java-api.pdf | 204 | for Apache Flink Developer Guide If your application uses an external dependency to enrich or otherwise process incoming data, consider using asynchronous IO instead. For more information, see Async I/O in the Apache Flink documentation. Application memory Check your application for resource leaks. If your application is not properly disposing of threads or memory, you might see the millisbehindLatest, CheckpointSize, and CheckpointDurationmetric spiking or gradually increasing. This condition may also lead to task manager or job manager failures. Use performance best practices This section describes special considerations for designing an application for performance. Manage scaling properly This section contains information about managing application-level and operator-level scaling. This section contains the following topics: • Manage application scaling properly • Manage operator scaling properly Manage application scaling properly You can use autoscaling to handle unexpected spikes in application activity. Your application's KPUs will increase automatically if the following criteria are met: • Autoscaling is enabled for the application. • CPU usage remains above 75 percent for 15 minutes. If autoscaling is enabled, but CPU usage does not remain at this threshold, the application will not scale up KPUs. If you experience a spike in CPU usage that does not meet this threshold, or a spike in a different usage metric such as heapMemoryUtilization, increase scaling manually to allow your application to handle activity spikes. Use performance best practices 682 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note If the application has automatically added more resources through auto scaling, the application will release the new resources after a period of inactivity. Downscaling resources will temporarily affect performance. For more information about scaling, see Implement application scaling. Manage operator scaling properly You can improve your application's performance by verifying that your application's workload is distributed evenly among worker processes, and that the operators in your application have the system resources they need to be stable and performant. You can set the parallelism for each operator in your application's code using the parallelism setting. If you don't set the parallelism for an operator, it will use the application-level parallelism setting. Operators that use the application-level parallelism setting can potentially use all of the system resources available for the application, making the application unstable. To best determine the parallelism for each operator, consider the operator's relative resource requirements compared to the other operators in the application. Set operators that are more resource-intensive to a higher operator parallelism setting than less resource-intensive operators. The total operator parallelism for the application is the sum of the parallelism for all the operators in the application. You tune the total operator parallelism for your application by determining the best ratio between it and the total task slots available for your application. A typical stable ratio of total operator parallelism to task slots is 4:1, that is, the application has one task slot available for every four operator subtasks available. An application with more resource intensive operators may need a ratio of 3:1 or 2:1, while an application with less resource-intensive operators may be stable with a ratio of 10:1. You can set the ratio for the operator using Use runtime properties, so you can tune the operator's parallelism without compiling and uploading your application code. The following code example demonstrates how to set operator parallelism as a tunable ratio of the current application parallelism: Map<String, Properties> applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties(); Manage scaling properly 683 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide operatorParallelism = StreamExecutionEnvironment.getParallelism() / Integer.getInteger( applicationProperties.get("OperatorProperties").getProperty("MyOperatorParallelismRatio") ); For information about subtasks, task slots, and other application resources, see Review Managed Service for Apache Flink application resources. To control the distribution of workload across your application's worker processes, use the Parallelism setting and the KeyBy partition method. For more information, see the following topics in the Apache Flink documentation: • Parallel Execution • DataStream Transformations Monitor external dependency resource usage If there is a performance bottleneck in a destination (such as Kinesis Streams, Firehose, DynamoDB or OpenSearch Service), your application will experience backpressure. Verify that your external dependencies are properly provisioned for your application throughput. Note Failures in other services can cause failures in your application. If you are seeing failures in your application, check the CloudWatch logs for your destination services for failures. Run your Apache Flink application locally To troubleshoot memory issues, you can run your application in a local Flink installation. This will give you access to debugging tools such as the stack trace and heap dumps that are not available when running your application in Managed Service for Apache Flink. For information about creating a local Flink installation, see Standalone in the Apache Flink Documentation. Monitor external dependency resource usage 684 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Monitor performance This section |
analytics-java-api-205 | analytics-java-api.pdf | 205 | in your application, check the CloudWatch logs for your destination services for failures. Run your Apache Flink application locally To troubleshoot memory issues, you can run your application in a local Flink installation. This will give you access to debugging tools such as the stack trace and heap dumps that are not available when running your application in Managed Service for Apache Flink. For information about creating a local Flink installation, see Standalone in the Apache Flink Documentation. Monitor external dependency resource usage 684 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Monitor performance This section describes tools for monitoring an application's performance. Monitor performance using CloudWatch metrics You monitor your application's resource usage, throughput, checkpointing, and downtime using CloudWatch metrics. For information about using CloudWatch metrics with your Managed Service for Apache Flink application, see ???. Monitor performance using CloudWatch logs and alarms You monitor error conditions that could potentially cause performance issues using CloudWatch Logs. Error conditions appear in log entries as Apache Flink job status changes from the RUNNING status to the FAILED status. You use CloudWatch alarms to create notifications for performance issues, such as resource use or checkpoint metrics above a safe threshold, or unexpected application status changes. For information about creating CloudWatch alarms for a Managed Service for Apache Flink application, see ???. Monitor performance 685 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Managed Service for Apache Flink and Studio notebook quota Note Apache Flink versions 1.6, 1.8, and 1.11 have not been supported by the Apache Flink community for over three years. We now plan to end support for these versions in Amazon Managed Service for Apache Flink. From November 5, 2024, you will not be able to create new applications for these Flink versions. You can continue running existing applications at this time. For all Regions with exception of the China Regions and the AWS GovCloud (US) Regions, from February 5, 2025, you will no longer be able to create, start, or run applications using these versions of Apache Flink in Amazon Managed Service for Apache Flink. For the China Regions and the AWS GovCloud (US) Regions, from March 19, 2025, you will no longer be able to create, start, or run applications using these versions of Apache Flink in Amazon Managed Service for Apache Flink. You can upgrade your applications statefully using the in-place version upgrades feature in Managed Service for Apache Flink. For more information, see Use in-place version upgrades for Apache Flink. When working with Amazon Managed Service for Apache Flink, note the following quota: • You can create up to 100 Managed Service for Apache Flink applications per Region in your account. You can create a case to request additional applications via the service quota increase form. For more information, see the AWS Support Center. For a list of Regions that support Managed Service for Apache Flink, see Managed Service for Apache Flink Regions and Endpoints. • The number of Kinesis processing units (KPU) is limited to 64 by default. For instructions on how to request an increase to this quota, see To request a quota increase in Service Quotas. Make sure you specify the application prefix to which the new KPU limit needs to be applied. 686 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide With Managed Service for Apache Flink, your AWS account is charged for allocated resources, rather than resources that your application uses. You are charged an hourly rate based on the maximum number of KPUs that are used to run your stream-processing application. A single KPU provides you with 1 vCPU and 4 GiB of memory. For each KPU, the service also provisions 50 GiB of running application storage. • You can create up to 1,000 Managed Service for Apache Flink snapshots per application. For more information, see Manage application backups using snapshots. • You can assign up to 50 tags per application. • The maximum size for an application JAR file is 512 MiB. If you exceed this quota, your application will fail to start. For Studio notebooks, the following quotas apply. To request higher quotas, create a support case. • websocketMessageSize = 5 MiB • noteSize = 5 MiB • noteCount = 1000 • Max cumulative UDF size = 100 MiB • Max cumulative dependency jar size = 300 MiB 687 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Manage maintenance tasks for Managed Service for Apache Flink Managed Service for Apache Flink patches your applications periodically with operating-system and container-image security updates to maintain compliance and meet AWS security goals. A maintenance window for a Managed Service for Apache Flink application is a time window of 8 hours during which Managed Service for |
analytics-java-api-206 | analytics-java-api.pdf | 206 | case. • websocketMessageSize = 5 MiB • noteSize = 5 MiB • noteCount = 1000 • Max cumulative UDF size = 100 MiB • Max cumulative dependency jar size = 300 MiB 687 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Manage maintenance tasks for Managed Service for Apache Flink Managed Service for Apache Flink patches your applications periodically with operating-system and container-image security updates to maintain compliance and meet AWS security goals. A maintenance window for a Managed Service for Apache Flink application is a time window of 8 hours during which Managed Service for Apache Flink performs application maintenance activities on an application. The maintenance might begin on different days for different AWS Regions as scheduled by the service team. Consult the table in the following section for maintenance time windows. As part of the maintenance procedure, your Managed Service for Apache Flink application will be restarted. This causes a downtime of 10 to 30 seconds during the application's maintenance window. The actual downtime duration depends on the application state, size, and snapshot/ checkpoint recency. For information on how to minimize the impact of this downtime, see the section called “Fault tolerance: checkpoints and savepoints”. You can find out if Managed Service for Apache Flink has performed a maintenance action on your application using the ListApplicationOperations API. For more information, see Identify when maintenance has ocurred on your application. Maintenance time windows in AWS Regions AWS Region Maintenance time window AWS GovCloud (US-West) 06:00–14:00 UTC AWS GovCloud (US-East) 03:00–11:00 UTC US East (N. Virginia) US East (Ohio) US West (N. California) US West (Oregon) 03:00–11:00 UTC 03:00–11:00 UTC 06:00–14:00 UTC 06:00–14:00 UTC Asia Pacific (Hong Kong) 13:00–21:00 UTC Asia Pacific (Mumbai) 16:30–00:30 UTC 688 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide AWS Region Maintenance time window Asia Pacific (Hyderabad) 16:30–00:30 UTC Asia Pacific (Seoul) 13:00–21:00 UTC Asia Pacific (Singapore) 14:00–22:00 UTC Asia Pacific (Sydney) Asia Pacific (Jakarta) Asia Pacific (Tokyo) Canada (Central) China (Beijing) China (Ningxia) Europe (Frankfurt) Europe (Zurich) Europe (Ireland) Europe (London) Europe (Stockholm) Europe (Milan) Europe (Spain) Africa (Cape Town) Europe (Ireland) Europe (London) Europe (Paris) 12:00–20:00 UTC 15:00–23:00 UTC 13:00–21:00 UTC 03:00–11:00 UTC 13:00–21:00 UTC 13:00–21:00 UTC 06:00–14:00 UTC 20:00–04:00 UTC 22:00–06:00 UTC 22:00–06:00 UTC 23:00–07:00 UTC 21:00–05:00 UTC 21:00–05:00 UTC 20:00–04:00 UTC 22:00–06:00 UTC 23:00–07:00 UTC 23:00–07:00 UTC 689 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide AWS Region Maintenance time window Europe (Stockholm) Middle East (Bahrain) Middle East (UAE) 23:00–07:00 UTC 13:00–21:00 UTC 18:00–02:00 UTC South America (São Paulo) 19:00–03:00 UTC Israel (Tel Aviv) 20:00–04:00 UTC Choose a maintenance window Managed Service for Apache Flink notifies you about upcoming planned maintenance events through email and AWS Health notifications. In Managed Service for Apache Flink, you can change the time of the day during which maintenance begins by using the UpdateApplicationMaintenanceConfiguration API and updating your maintenance window configuration. For more information, see UpdateApplicationMaintenanceConfiguration. Managed Service for Apache Flink uses the updated maintenance configuration the next time it schedules maintenance for the application. If you invoke this operation after the service has already scheduled maintenance, the service applies the configuration update the next time it schedules maintenance for the application. Note To provide the highest possible security posture, Managed Service for Apache Flink does not support any exception to opt out of maintenance, pause maintenance, or perform maintenance on specific days. Identify when maintenance has occurred on your application You can find if Managed Service for Apache Flink has performed a maintenance action on your application by using the ListApplicationOperations API. The following is an example request for ListApplicationOperations that can help you filter the list for maintenance on the application: Choose a maintenance window 690 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide { "ApplicationName": "MyApplication", "operation": "ApplicationMaintenance" } Identify maintenance instances 691 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Achieve production readiness for your Managed Service for Apache Flink applications This is a collection of important aspects of running production applications on Managed Service for Apache Flink. It's not an exhaustive list, but rather the bare minimum of what you should pay attention to before putting an application into production. Load-test your applications Some problems with applications only manifest under heavy load. We have seen cases where applications seemed healthy, yet an operational event substantially amplified the load on the application. This can happen completely independent of the application itself. If the data source or the data sink is unavailable for a couple of hours, the Flink application cannot make progress. When that issue is fixed, there is a backlog of unprocessed data that has accumulated, which can completely exhaust the available resources. The load can then amplify bugs or |
analytics-java-api-207 | analytics-java-api.pdf | 207 | should pay attention to before putting an application into production. Load-test your applications Some problems with applications only manifest under heavy load. We have seen cases where applications seemed healthy, yet an operational event substantially amplified the load on the application. This can happen completely independent of the application itself. If the data source or the data sink is unavailable for a couple of hours, the Flink application cannot make progress. When that issue is fixed, there is a backlog of unprocessed data that has accumulated, which can completely exhaust the available resources. The load can then amplify bugs or performance issues that had not emerged before. It is therefore essential that you run proper load tests for production applications. Questions that should be answered during those load tests include: • Is the application stable under sustained high load? • Can the application still take a savepoint under peak load? • How long does it take to process a backlog of 1 hour? And how long for 24 hours (depending on the max retention of the data in the stream)? • Does the throughput of the application increase when the application is scaled? When consuming from a data stream, these scenarios can be simulated by producing into the stream for some time. Then start the application and have it consume data from the beginning of time. For example, use a start position of TRIM_HORIZON in the case of a Kinesis data stream. Define Max parallelism The max parallelism defines the maximum parallelism a stateful application can scale to. This is defined when the state is first created and there is no way of scaling the operator beyond this maximum without discarding the state. Load-test your applications 692 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Max parallelism is set when the state is first created. By default, Max parallelism is set to: • 128, if parallelism <= 128 • MIN(nextPowerOfTwo(parallelism + (parallelism / 2)), 2^15): if parallelism > 128 If you are planning to scale your application > 128 parallelism, you should explicitly define the Max parallelism. You can define Max parallelism at level of application, with env.setMaxParallelism(x) or single operator. Unless differently specified, all operators inherit the Max parallelism of the application. For more information, see Setting the Maximum Parallelism in the Apache Flink Documentation. Set a UUID for all operators A UUID is used in the operation in which Flink maps a savepoint back to an individual operator. Setting a specific UUID for each operator gives a stable mapping for the savepoint process to restore. .map(...).uid("my-map-function") For more information, see Production Readiness Checklist. Set a UUID for all operators 693 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Maintain best practices for Managed Service for Apache Flink applications This section contains information and recommendations for developing a stable, performant Managed Service for Apache Flink applications. Topics • Minimize the size of the uber JAR • Fault tolerance: checkpoints and savepoints • Unsupported connector versions • Performance and parallelism • Setting per-operator parallelism • Logging • Coding • Managing credentials • Reading from sources with few shards/partitions • Studio notebook refresh interval • Studio notebook optimum performance • How watermark strategies and idle shards affect time windows • Set a UUID for all operators • Add ServiceResourceTransformer to the Maven shade plugin Minimize the size of the uber JAR Java/Scala application must be packaged in an uber (super/fat) JAR and include all the additional required dependencies that are not already provided by the runtime. However, the size of the uber JAR affects the application start and restart times and may cause the JAR to exceed the limit of 512 MB. To optimize the deployment time, your uber JAR should not include the following: • Any dependencies provided by the runtime as illustrated in the following example. They should have provided scope in the POM file or compileOnly in your Gradle configuration. Minimize the size of the uber JAR 694 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Any dependencies used for testing only, for example JUnit or Mockito. They should have test scope in the POM file or testImplementation in your Gradle configuration. • Any dependencies not actually used by your application. • Any static data or metadata required by your application. Static data should be loaded by the application at runtime, for example from a datastore or from Amazon S3. • See this POM example file for details on the preceding configuration settings. Provided dependencies The Managed Service for Apache Flink runtime provides a number of dependencies. These dependencies should not be included in the fat JAR and must have provided scope in the POM file or be explicitly excluded in the maven-shade-plugin configuration. Any |
analytics-java-api-208 | analytics-java-api.pdf | 208 | file or testImplementation in your Gradle configuration. • Any dependencies not actually used by your application. • Any static data or metadata required by your application. Static data should be loaded by the application at runtime, for example from a datastore or from Amazon S3. • See this POM example file for details on the preceding configuration settings. Provided dependencies The Managed Service for Apache Flink runtime provides a number of dependencies. These dependencies should not be included in the fat JAR and must have provided scope in the POM file or be explicitly excluded in the maven-shade-plugin configuration. Any of these dependencies included in the fat JAR is ignored at runtime, but increases the size of the JAR adding overhead during the deployment. Dependencies provided by the runtime, in runtime versions 1.18, 1.19, and 1.20: • org.apache.flink:flink-core • org.apache.flink:flink-java • org.apache.flink:flink-streaming-java • org.apache.flink:flink-scala_2.12 • org.apache.flink:flink-table-runtime • org.apache.flink:flink-table-planner-loader • org.apache.flink:flink-json • org.apache.flink:flink-connector-base • org.apache.flink:flink-connector-files • org.apache.flink:flink-clients • org.apache.flink:flink-runtime-web • org.apache.flink:flink-metrics-code • org.apache.flink:flink-table-api-java • org.apache.flink:flink-table-api-bridge-base • org.apache.flink:flink-table-api-java-bridge • org.apache.logging.log4j:log4j-slf4j-impl Minimize the size of the uber JAR 695 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • org.apache.logging.log4j:log4j-api • org.apache.logging.log4j:log4j-core • org.apache.logging.log4j:log4j-1.2-api Additionally, the runtime provides the library that is used to fetch application runtime properties in Managed Service for Apache Flink, com.amazonaws:aws-kinesisanalytics-runtime:1.2.0. All dependencies provided by the runtime must use the following recommendations to not include them in the uber JAR: • In Maven (pom.xml) and SBT (build.sbt), use provided scope. • In Gradle (build.gradle), use compileOnly configuration. Any provided dependency accidentally included in the uber JAR will be ignored at runtime because of Apache Flink's parent-first class loading. For more information, see parent-first-patterns in the Apache Flink documentation. Connectors Most of the connectors, except the FileSystem connector, that are not included in the runtime must be included in the POM file with the default scope (compile). Other recommendations As a rule, your Apache Flink uber JAR provided to Managed Service for Apache Flink should contain the minimum code required to run the application. Including dependencies that include the source classes, test datasets, or bootstrapping state should not be included in this jar. If static resources need to be pulled in at runtime, separate this concern into a resource such as Amazon S3. Examples of this include state bootstraps or an inference model. Take some time to consider your deep dependency tree and remove non-runtime dependencies. Although Managed Service for Apache Flink supports 512MB jar sizes, this should be seen as the exception to the rule. Apache Flink currently supports ~104MB jar sizes through its default configuration, and that should be the maximum target size of a jar needed. Minimize the size of the uber JAR 696 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Fault tolerance: checkpoints and savepoints Use checkpoints and savepoints to implement fault tolerance in your Managed Service for Apache Flink application. Keep the following in mind when developing and maintaining your application: • We recommend that you keep checkpointing enabled for your application. Checkpointing provides fault tolerance for your application during scheduled maintenance, and also for unexpected failures due to service issues, application dependency failures, and other issues. For information about scheduled maintenance, see Manage maintenance tasks for Managed Service for Apache Flink. • Set ApplicationSnapshotConfiguration::SnapshotsEnabled to false during application development or troubleshooting. A snapshot is created during every application stop, which may cause issues if the application is in an unhealthy state or isn't performant. Set SnapshotsEnabled to true after the application is in production and is stable. Note We recommend that you set your application to create a snapshot several times a day to restart properly with correct state data. The correct frequency for your snapshots depends on your application's business logic. Taking frequent snapshots lets you recover more recent data, but increases cost and requires more system resources. For information about monitoring application downtime, see ???. For more information about implementing fault tolerance, see Implement fault tolerance. Unsupported connector versions From Apache Flink version 1.15 or later, Managed Service for Apache Flink automatically prevents applications from starting or updating if they are using unsupported Kinesis connector versions bundled into application JARs. When upgrading to Managed Service for Apache Flink version 1.15 or later, make sure that you are using the most recent Kinesis connector. This is any version equal to or newer than version 1.15.2. All other versions are not supported by Managed Service for Apache Flink because they might cause consistency issues or failures with the Stop with Savepoint feature, preventing clean stop/update operations. To learn more about connector compatibility in Amazon Managed Service for Apache Flink versions, see Apache Flink connectors. Fault tolerance: checkpoints and savepoints 697 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Performance and parallelism Your |
analytics-java-api-209 | analytics-java-api.pdf | 209 | Service for Apache Flink version 1.15 or later, make sure that you are using the most recent Kinesis connector. This is any version equal to or newer than version 1.15.2. All other versions are not supported by Managed Service for Apache Flink because they might cause consistency issues or failures with the Stop with Savepoint feature, preventing clean stop/update operations. To learn more about connector compatibility in Amazon Managed Service for Apache Flink versions, see Apache Flink connectors. Fault tolerance: checkpoints and savepoints 697 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Performance and parallelism Your application can scale to meet any throughput level by tuning your application parallelism, and avoiding performance pitfalls. Keep the following in mind when developing and maintaining your application: • Verify that all of your application sources and sinks are sufficiently provisioned and are not being throttled. If the sources and sinks are other AWS services, monitor those services using CloudWatch. • For applications with very high parallelism, check if the high levels of parallelism are applied to all operators in the application. By default, Apache Flink applies the same application parallelism for all operators in the application graph. This can lead to either provisioning issues on sources or sinks, or bottlenecks in operator data processing. You can change the parallelism of each operator in code with setParallelism. • Understand the meaning of the parallelism settings for the operators in your application. If you change the parallelism for an operator, you may not be able to restore the application from a snapshot created when the operator had a parallelism that is incompatible with the current settings. For more information about setting operator parallelism, see Set maximum parallelism for operators explicitly. For more information about implementing scaling, see Implement application scaling. Setting per-operator parallelism By default, all operators have the parallelism set at application level. You can override the parallelism of a single operator using the DataStream API using .setParallelism(x). You can set an operator parallelism to any parallelism equal or lower than the application parallelism. If possible, define the operator parallelism as a function of the application parallelism. This way, the operator parallelism will vary with the application parallelism. If you are using autoscaling, for example, all operators will vary their parallelism in the same proportion: int appParallelism = env.getParallelism(); ... ...ops.setParalleism(appParallelism/2); Performance and parallelism 698 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide In some cases, you may want to set the operator parallelism to a constant. For example, setting the parallelism of a Kinesis Stream source to the number of shards. In these cases, consider passing the operator parallelism as application configuration parameter to change it without changing the code, for example to reshard the source stream. Logging You can monitor your application's performance and error conditions using CloudWatch Logs. Keep the following in mind when configuring logging for your application: • Enable CloudWatch logging for the application so that any runtime issues can be debugged. • Do not create a log entry for every record being processed in the application. This causes severe bottlenecks during processing and might lead to backpressure in the processing of data. • Create CloudWatch alarms to notify you when your application is not running properly. For more information, see ??? For more information about implementing logging, see ???. Coding You can make your application performant and stable by using recommended programming practices. Keep the following in mind when writing application code: • Do not use system.exit() in your application code, in either your application's main method or in user-defined functions. If you want to shut down your application from within code, throw an exception derived from Exception or RuntimeException, containing a message about what went wrong with the application. Note the following about how the service handles this exception: • If the exception is thrown from your application's main method, the service will wrap it in a ProgramInvocationException when the application transitions to the RUNNING status, and the job manager will fail to submit the job. • If the exception is thrown from a user-defined function, the job manager will fail the job and restart it, and details of the exception will be written to the exception log. • Consider shading your application JAR file and its included dependencies. Shading is recommended when there are potential conflicts in package names between your application Logging 699 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide and the Apache Flink runtime. If a conflict occurs, your application logs may contain an exception of type java.util.concurrent.ExecutionException. For more information about shading your application JAR file, see Apache Maven Shade Plugin. Managing credentials You should not bake any long-term credentials into production (or any other) applications. Long-term credentials are likely |
analytics-java-api-210 | analytics-java-api.pdf | 210 | the exception will be written to the exception log. • Consider shading your application JAR file and its included dependencies. Shading is recommended when there are potential conflicts in package names between your application Logging 699 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide and the Apache Flink runtime. If a conflict occurs, your application logs may contain an exception of type java.util.concurrent.ExecutionException. For more information about shading your application JAR file, see Apache Maven Shade Plugin. Managing credentials You should not bake any long-term credentials into production (or any other) applications. Long-term credentials are likely checked into a version control system and can easily get lost. Instead, you can associate a role to the Managed Service for Apache Flink application and grant permissions to that role. The running Flink application can then select temporary credentials with the respective permissions from the environment. In case authentication is needed for a service that is not natively integrated with IAM, for example, a database that requires a username and password for authentication, you should consider storing secrets in AWS Secrets Manager. Many AWS native services support authentication: • Kinesis Data Streams – ProcessTaxiStream.java • Amazon MSK – https://github.com/aws/aws-msk-iam-auth/#using-the-amazon-msk-library-for- iam-authentication • Amazon Elasticsearch Service – AmazonElasticsearchSink.java • Amazon S3 – works out of the box on Managed Service for Apache Flink Reading from sources with few shards/partitions When reading from Apache Kafka or a Kinesis Data Stream, there may be a mismatch between the parallelism of the stream (the number of partitions for Kafka and the number of shards for Kinesis) and the parallelism of the application. With a naive design, the parallelism of an application cannot scale beyond the parallelism of a stream: Each subtask of a source operator can only read from 1 or more shards/partitions. That means for a stream with only 2 shards and an application with a parallelism of 8, that only two subtasks are actually consuming from the stream and 6 subtasks remain idle. This can substantially limit the throughput of the application, in particular if the deserialization is expensive and carried out by the source (which is the default). To mitigate this effect, you can either scale the stream. But that may not always be desirable or possible. Alternatively, you can restructure the source so that it does not do any serialization and just passes on the byte[]. You can then rebalance the data to distribute it evenly across all tasks Managing credentials 700 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide and then deserialize the data there. In this way, you can leverage all subtasks for the deserialization and this potentially expensive operation is no longer bound by the number of shards/partitions of the stream. Studio notebook refresh interval If you change the paragraph result refresh interval, set it to a value that is at least 1000 milliseconds. Studio notebook optimum performance We tested with the following statement and got the optimal performance when events-per- second multiplied by number-of-keys was under 25,000,000. This was for events-per- second under 150,000. SELECT key, sum(value) FROM key-values GROUP BY key How watermark strategies and idle shards affect time windows When reading events from Apache Kafka and Kinesis Data Streams, the source can set the event time based on attributes of the stream. In case of Kinesis, the event time equals the approximate arrival time of events. But setting event time at the source for events is not sufficient for a Flink application to use event time. The source must also generate watermarks that propagate information about event time from the source to all other operators. The Flink documentation has a good overview of how that process works. By default, the timestamp of an event read from Kinesis is set to the approximate arrival time determined by Kinesis. An additional prerequisite for event time to work in the application is a watermark strategy. WatermarkStrategy<String> s = WatermarkStrategy .<String>forMonotonousTimestamps() .withIdleness(Duration.ofSeconds(...)); The watermark strategy is then applied to a DataStream with the assignTimestampsAndWatermarks method. There are some useful built-in strategies: Studio notebook refresh interval 701 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • forMonotonousTimestamps() will just use the event time (approximate arrival time) and periodically emit the maximum value as a watermark (for each specific subtask) • forBoundedOutOfOrderness(Duration.ofSeconds(...)) similar to the previous strategy, but will use the event time – duration for watermark generation. From the Flink documentation: Each parallel subtask of a source function usually generates its watermarks independently. These watermarks define the event time at that particular parallel source. As the watermarks flow through the streaming program, they advance the event time at the operators where they arrive. Whenever an operator advances its event time, it generates a new watermark downstream for its successor operators. Some |
analytics-java-api-211 | analytics-java-api.pdf | 211 | event time (approximate arrival time) and periodically emit the maximum value as a watermark (for each specific subtask) • forBoundedOutOfOrderness(Duration.ofSeconds(...)) similar to the previous strategy, but will use the event time – duration for watermark generation. From the Flink documentation: Each parallel subtask of a source function usually generates its watermarks independently. These watermarks define the event time at that particular parallel source. As the watermarks flow through the streaming program, they advance the event time at the operators where they arrive. Whenever an operator advances its event time, it generates a new watermark downstream for its successor operators. Some operators consume multiple input streams; a union, for example, or operators following a keyBy(…) or partition(…) function. Such an operator’s current event time is the minimum of its input streams' event times. As its input streams update their event times, so does the operator. That means, if a source subtask is consuming from an idle shard, downstream operators do not receive new watermarks from that subtask and hence processing stalls for all downstream operators that use time windows. To avoid this, customers can add the withIdleness option to the watermark strategy. With that option, an operator excludes the watermarks from idle upstream subtasks when computing the event time of the operator. The idle subtask therefore no longer blocks the advancement of event time in downstream operators. Depending on the shard assigner you use, some workers might not be assigned any Kinesis shards. In that case, these workers will manifest the idle source behavior even if all Kinesis shards continuously deliver event data. You can mitigate this risk by using uniformShardAssigner with the source operator. This makes sure that all source subtasks have shards to process as long as the number of workers is less or equal to the number of active shards. However, the idleness option with the build-in watermark strategies will not advance the event time if no subtask is reading any event, that is there are no events in the stream. This becomes particularly visible for test cases where a finite set of events is read from the stream. As event time does not advance after the last event has been read, the last window (containing the last event) will not close. How watermark strategies and idle shards affect time windows 702 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Summary • The withIdleness setting will not generate new watermarks in case a shard is idle. It will exclude the last watermark sent by idle subtasks from the min watermark calculation in downstream operators. • With the build-in watermark strategies, the last open window will not close (unless new events that advance the watermark will be sent, but that creates a new window that then remains open). • Even when the time is set by the Kinesis stream, late arriving events can still happen if one shard is consumed faster than others (for example during app initialization or when using TRIM_HORIZON where all existing shards are consumed in parallel ignoring their parent/child relationship). • The withIdleness settings of the watermark strategy seem interrupt the Kinesis source- specific settings for idle shards (ConsumerConfigConstants.SHARD_IDLE_INTERVAL_MILLIS. Example The following application is reading from a stream and creating session windows based on event time. Properties consumerConfig = new Properties(); consumerConfig.put(AWSConfigConstants.AWS_REGION, "eu-west-1"); consumerConfig.put(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "TRIM_HORIZON"); FlinkKinesisConsumer<String> consumer = new FlinkKinesisConsumer<>("...", new SimpleStringSchema(), consumerConfig); WatermarkStrategy<String> s = WatermarkStrategy .<String>forMonotonousTimestamps() .withIdleness(Duration.ofSeconds(15)); env.addSource(consumer) .assignTimestampsAndWatermarks(s) .map(new MapFunction<String, Long>() { @Override public Long map(String s) throws Exception { return Long.parseLong(s); } Summary 703 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide }) .keyBy(l -> 0l) .window(EventTimeSessionWindows.withGap(Time.seconds(10))) .process(new ProcessWindowFunction<Long, Object, Long, TimeWindow>() { @Override public void process(Long aLong, ProcessWindowFunction<Long, Object, Long, TimeWindow>.Context context, Iterable<Long>iterable, Collector<Object> collector) throws Exception { long count = StreamSupport.stream(iterable.spliterator(), false).count(); long timestamp = context.currentWatermark(); System.out.print("XXXXXXXXXXXXXX Window with " + count + " events"); System.out.println("; Watermark: " + timestamp + ", " + Instant.ofEpochMilli(timestamp)); for (Long l : iterable) { System.out.println(l); } } }); In the following example, 8 events are written to a 16 shard stream (the first 2 and the last event happen to land in the same shard). $ aws kinesis put-record --stream-name hp-16 --partition-key 1 --data MQ== $ aws kinesis put-record --stream-name hp-16 --partition-key 2 --data Mg== $ aws kinesis put-record --stream-name hp-16 --partition-key 3 --data Mw== $ date { "ShardId": "shardId-000000000012", "SequenceNumber": "49627894338614655560500811028721934184977530127978070210" } { "ShardId": "shardId-000000000012", "SequenceNumber": "49627894338614655560500811028795678659974022576354623682" } { "ShardId": "shardId-000000000014", "SequenceNumber": "49627894338659257050897872275134360684221592378842022114" } Wed Mar 23 11:19:57 CET 2022 Example 704 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide $ sleep 10 $ aws kinesis put-record --stream-name hp-16 --partition-key 4 --data NA== $ aws kinesis put-record --stream-name hp-16 --partition-key 5 --data NQ== $ date { "ShardId": "shardId-000000000010", "SequenceNumber": "49627894338570054070103749783042116732419934393936642210" } { "ShardId": "shardId-000000000014", "SequenceNumber": "49627894338659257050897872275659034489934342334017700066" } Wed Mar |
analytics-java-api-212 | analytics-java-api.pdf | 212 | MQ== $ aws kinesis put-record --stream-name hp-16 --partition-key 2 --data Mg== $ aws kinesis put-record --stream-name hp-16 --partition-key 3 --data Mw== $ date { "ShardId": "shardId-000000000012", "SequenceNumber": "49627894338614655560500811028721934184977530127978070210" } { "ShardId": "shardId-000000000012", "SequenceNumber": "49627894338614655560500811028795678659974022576354623682" } { "ShardId": "shardId-000000000014", "SequenceNumber": "49627894338659257050897872275134360684221592378842022114" } Wed Mar 23 11:19:57 CET 2022 Example 704 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide $ sleep 10 $ aws kinesis put-record --stream-name hp-16 --partition-key 4 --data NA== $ aws kinesis put-record --stream-name hp-16 --partition-key 5 --data NQ== $ date { "ShardId": "shardId-000000000010", "SequenceNumber": "49627894338570054070103749783042116732419934393936642210" } { "ShardId": "shardId-000000000014", "SequenceNumber": "49627894338659257050897872275659034489934342334017700066" } Wed Mar 23 11:20:10 CET 2022 $ sleep 10 $ aws kinesis put-record --stream-name hp-16 --partition-key 6 --data Ng== $ date { "ShardId": "shardId-000000000001", "SequenceNumber": "49627894338369347363316974173886988345467035365375213586" } Wed Mar 23 11:20:22 CET 2022 $ sleep 10 $ aws kinesis put-record --stream-name hp-16 --partition-key 7 --data Nw== $ date { "ShardId": "shardId-000000000008", "SequenceNumber": "49627894338525452579706688535878947299195189349725503618" } Wed Mar 23 11:20:34 CET 2022 $ sleep 60 $ aws kinesis put-record --stream-name hp-16 --partition-key 8 --data OA== $ date { "ShardId": "shardId-000000000012", "SequenceNumber": "49627894338614655560500811029600823255837371928900796610" } Example 705 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Wed Mar 23 11:21:27 CET 2022 This input should result in 5 session windows: event 1,2,3; event 4,5; event 6; event 7; event 8. However, the program only yields the first 4 windows. 11:59:21,529 INFO org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Subtask 5 will be seeded with initial shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000006,HashKeyRange: {StartingHashKey: 127605887595351923798765477786913079296,EndingHashKey: 148873535527910577765226390751398592511},SequenceNumberRange: {StartingSequenceNumber: 49627894338480851089309627289524549239292625588395704418,}}'}, starting state set as sequence number EARLIEST_SEQUENCE_NUM 11:59:21,530 INFO org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - Subtask 5 will start consuming seeded shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000006,HashKeyRange: {StartingHashKey: 127605887595351923798765477786913079296,EndingHashKey: 148873535527910577765226390751398592511},SequenceNumberRange: {StartingSequenceNumber: 49627894338480851089309627289524549239292625588395704418,}}'} from sequence number EARLIEST_SEQUENCE_NUM with ShardConsumer 0 11:59:21,530 INFO org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Subtask 6 will be seeded with initial shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000007,HashKeyRange: {StartingHashKey: 148873535527910577765226390751398592512,EndingHashKey: 170141183460469231731687303715884105727},SequenceNumberRange: {StartingSequenceNumber: 49627894338503151834508157912666084957565273949901684850,}}'}, starting state set as sequence number EARLIEST_SEQUENCE_NUM 11:59:21,530 INFO org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Subtask 6 will be seeded with initial shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000010,HashKeyRange: {StartingHashKey: 212676479325586539664609129644855132160,EndingHashKey: 233944127258145193631070042609340645375},SequenceNumberRange: {StartingSequenceNumber: 49627894338570054070103749782090692112383219034419626146,}}'}, starting state set as sequence number EARLIEST_SEQUENCE_NUM 11:59:21,530 INFO org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - Subtask 6 will start consuming seeded shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000007,HashKeyRange: {StartingHashKey: 148873535527910577765226390751398592512,EndingHashKey: 170141183460469231731687303715884105727},SequenceNumberRange: {StartingSequenceNumber: 49627894338503151834508157912666084957565273949901684850,}}'} from sequence number EARLIEST_SEQUENCE_NUM with ShardConsumer 0 Example 706 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 11:59:21,531 INFO org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Subtask 4 will be seeded with initial shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000005,HashKeyRange: {StartingHashKey: 106338239662793269832304564822427566080,EndingHashKey: 127605887595351923798765477786913079295},SequenceNumberRange: {StartingSequenceNumber: 49627894338458550344111096666383013521019977226889723986,}}'}, starting state set as sequence number EARLIEST_SEQUENCE_NUM 11:59:21,532 INFO org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - Subtask 4 will start consuming seeded shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000005,HashKeyRange: {StartingHashKey: 106338239662793269832304564822427566080,EndingHashKey: 127605887595351923798765477786913079295},SequenceNumberRange: {StartingSequenceNumber: 49627894338458550344111096666383013521019977226889723986,}}'} from sequence number EARLIEST_SEQUENCE_NUM with ShardConsumer 0 11:59:21,532 INFO org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Subtask 3 will be seeded with initial shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000004,HashKeyRange: {StartingHashKey: 85070591730234615865843651857942052864,EndingHashKey: 106338239662793269832304564822427566079},SequenceNumberRange: {StartingSequenceNumber: 49627894338436249598912566043241477802747328865383743554,}}'}, starting state set as sequence number EARLIEST_SEQUENCE_NUM 11:59:21,532 INFO org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Subtask 2 will be seeded with initial shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000003,HashKeyRange: {StartingHashKey: 63802943797675961899382738893456539648,EndingHashKey: 85070591730234615865843651857942052863},SequenceNumberRange: {StartingSequenceNumber: 49627894338413948853714035420099942084474680503877763122,}}'}, starting state set as sequence number EARLIEST_SEQUENCE_NUM 11:59:21,532 INFO org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Subtask 3 will be seeded with initial shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000015,HashKeyRange: {StartingHashKey: 319014718988379809496913694467282698240,EndingHashKey: 340282366920938463463374607431768211455},SequenceNumberRange: {StartingSequenceNumber: 49627894338681557796096402897798370703746460841949528306,}}'}, starting state set as sequence number EARLIEST_SEQUENCE_NUM 11:59:21,532 INFO org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Subtask 2 will be seeded with initial shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000014,HashKeyRange: {StartingHashKey: 297747071055821155530452781502797185024,EndingHashKey: 319014718988379809496913694467282698239},SequenceNumberRange: {StartingSequenceNumber: 49627894338659257050897872274656834985473812480443547874,}}'}, starting state set as sequence number EARLIEST_SEQUENCE_NUM Example 707 Managed Service for Apache Flink 11:59:21,532 INFO Managed Service for Apache Flink Developer Guide org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - Subtask 3 will start consuming seeded shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000004,HashKeyRange: {StartingHashKey: 85070591730234615865843651857942052864,EndingHashKey: 106338239662793269832304564822427566079},SequenceNumberRange: {StartingSequenceNumber: 49627894338436249598912566043241477802747328865383743554,}}'} from sequence number EARLIEST_SEQUENCE_NUM with ShardConsumer 0 11:59:21,532 INFO org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - Subtask 2 will start consuming seeded shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000003,HashKeyRange: {StartingHashKey: 63802943797675961899382738893456539648,EndingHashKey: 85070591730234615865843651857942052863},SequenceNumberRange: {StartingSequenceNumber: 49627894338413948853714035420099942084474680503877763122,}}'} from sequence number EARLIEST_SEQUENCE_NUM with ShardConsumer 0 11:59:21,532 INFO org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Subtask 0 will be seeded with initial shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000001,HashKeyRange: {StartingHashKey: 21267647932558653966460912964485513216,EndingHashKey: 42535295865117307932921825928971026431},SequenceNumberRange: {StartingSequenceNumber: 49627894338369347363316974173816870647929383780865802258,}}'}, starting state set as sequence number EARLIEST_SEQUENCE_NUM 11:59:21,532 INFO org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Subtask 0 will be seeded with initial shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000009,HashKeyRange: {StartingHashKey: 191408831393027885698148216680369618944,EndingHashKey: 212676479325586539664609129644855132159},SequenceNumberRange: {StartingSequenceNumber: 49627894338547753324905219158949156394110570672913645714,}}'}, starting state set as sequence number EARLIEST_SEQUENCE_NUM 11:59:21,532 INFO org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Subtask 7 will be seeded with initial shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000000,HashKeyRange: {StartingHashKey: 0,EndingHashKey: 21267647932558653966460912964485513215},SequenceNumberRange: {StartingSequenceNumber: 49627894338347046618118443550675334929656735419359821826,}}'}, starting state set as sequence number EARLIEST_SEQUENCE_NUM 11:59:21,533 INFO org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Subtask 0 will be seeded with initial shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000012,HashKeyRange: {StartingHashKey: 255211775190703847597530955573826158592,EndingHashKey: 276479423123262501563991868538311671807},SequenceNumberRange: {StartingSequenceNumber: 49627894338614655560500811028373763548928515757431587010,}}'}, starting state set as sequence number EARLIEST_SEQUENCE_NUM Example 708 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 11:59:21,533 INFO org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Subtask 7 will be seeded with initial shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000008,HashKeyRange: {StartingHashKey: 170141183460469231731687303715884105728,EndingHashKey: 191408831393027885698148216680369618943},SequenceNumberRange: {StartingSequenceNumber: 49627894338525452579706688535807620675837922311407665282,}}'}, starting state set as sequence number EARLIEST_SEQUENCE_NUM 11:59:21,533 INFO org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - Subtask 0 will start consuming seeded shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000001,HashKeyRange: {StartingHashKey: 21267647932558653966460912964485513216,EndingHashKey: 42535295865117307932921825928971026431},SequenceNumberRange: {StartingSequenceNumber: 49627894338369347363316974173816870647929383780865802258,}}'} from sequence number EARLIEST_SEQUENCE_NUM with ShardConsumer 0 11:59:21,533 INFO org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Subtask |
analytics-java-api-213 | analytics-java-api.pdf | 213 | [] - Subtask 0 will be seeded with initial shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000012,HashKeyRange: {StartingHashKey: 255211775190703847597530955573826158592,EndingHashKey: 276479423123262501563991868538311671807},SequenceNumberRange: {StartingSequenceNumber: 49627894338614655560500811028373763548928515757431587010,}}'}, starting state set as sequence number EARLIEST_SEQUENCE_NUM Example 708 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 11:59:21,533 INFO org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Subtask 7 will be seeded with initial shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000008,HashKeyRange: {StartingHashKey: 170141183460469231731687303715884105728,EndingHashKey: 191408831393027885698148216680369618943},SequenceNumberRange: {StartingSequenceNumber: 49627894338525452579706688535807620675837922311407665282,}}'}, starting state set as sequence number EARLIEST_SEQUENCE_NUM 11:59:21,533 INFO org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - Subtask 0 will start consuming seeded shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000001,HashKeyRange: {StartingHashKey: 21267647932558653966460912964485513216,EndingHashKey: 42535295865117307932921825928971026431},SequenceNumberRange: {StartingSequenceNumber: 49627894338369347363316974173816870647929383780865802258,}}'} from sequence number EARLIEST_SEQUENCE_NUM with ShardConsumer 0 11:59:21,533 INFO org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Subtask 7 will be seeded with initial shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000011,HashKeyRange: {StartingHashKey: 233944127258145193631070042609340645376,EndingHashKey: 255211775190703847597530955573826158591},SequenceNumberRange: {StartingSequenceNumber: 49627894338592354815302280405232227830655867395925606578,}}'}, starting state set as sequence number EARLIEST_SEQUENCE_NUM 11:59:21,533 INFO org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - Subtask 7 will start consuming seeded shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000000,HashKeyRange: {StartingHashKey: 0,EndingHashKey: 21267647932558653966460912964485513215},SequenceNumberRange: {StartingSequenceNumber: 49627894338347046618118443550675334929656735419359821826,}}'} from sequence number EARLIEST_SEQUENCE_NUM with ShardConsumer 0 11:59:21,568 INFO org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Subtask 1 will be seeded with initial shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000002,HashKeyRange: {StartingHashKey: 42535295865117307932921825928971026432,EndingHashKey: 63802943797675961899382738893456539647},SequenceNumberRange: {StartingSequenceNumber: 49627894338391648108515504796958406366202032142371782690,}}'}, starting state set as sequence number EARLIEST_SEQUENCE_NUM 11:59:21,568 INFO org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Subtask 1 will be seeded with initial shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000013,HashKeyRange: {StartingHashKey: 276479423123262501563991868538311671808,EndingHashKey: 297747071055821155530452781502797185023},SequenceNumberRange: {StartingSequenceNumber: 49627894338636956305699341651515299267201164118937567442,}}'}, starting state set as sequence number EARLIEST_SEQUENCE_NUM Example 709 Managed Service for Apache Flink 11:59:21,568 INFO Managed Service for Apache Flink Developer Guide org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - Subtask 1 will start consuming seeded shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000002,HashKeyRange: {StartingHashKey: 42535295865117307932921825928971026432,EndingHashKey: 63802943797675961899382738893456539647},SequenceNumberRange: {StartingSequenceNumber: 49627894338391648108515504796958406366202032142371782690,}}'} from sequence number EARLIEST_SEQUENCE_NUM with ShardConsumer 0 11:59:23,209 INFO org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - Subtask 0 will start consuming seeded shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000009,HashKeyRange: {StartingHashKey: 191408831393027885698148216680369618944,EndingHashKey: 212676479325586539664609129644855132159},SequenceNumberRange: {StartingSequenceNumber: 49627894338547753324905219158949156394110570672913645714,}}'} from sequence number EARLIEST_SEQUENCE_NUM with ShardConsumer 1 11:59:23,244 INFO org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - Subtask 6 will start consuming seeded shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000010,HashKeyRange: {StartingHashKey: 212676479325586539664609129644855132160,EndingHashKey: 233944127258145193631070042609340645375},SequenceNumberRange: {StartingSequenceNumber: 49627894338570054070103749782090692112383219034419626146,}}'} from sequence number EARLIEST_SEQUENCE_NUM with ShardConsumer 1 event: 6; timestamp: 1648030822428, 2022-03-23T10:20:22.428Z 11:59:23,377 INFO org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - Subtask 3 will start consuming seeded shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000015,HashKeyRange: {StartingHashKey: 319014718988379809496913694467282698240,EndingHashKey: 340282366920938463463374607431768211455},SequenceNumberRange: {StartingSequenceNumber: 49627894338681557796096402897798370703746460841949528306,}}'} from sequence number EARLIEST_SEQUENCE_NUM with ShardConsumer 1 11:59:23,405 INFO org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - Subtask 2 will start consuming seeded shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000014,HashKeyRange: {StartingHashKey: 297747071055821155530452781502797185024,EndingHashKey: 319014718988379809496913694467282698239},SequenceNumberRange: {StartingSequenceNumber: 49627894338659257050897872274656834985473812480443547874,}}'} from sequence number EARLIEST_SEQUENCE_NUM with ShardConsumer 1 11:59:23,581 INFO org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - Subtask 7 will start consuming seeded shard StreamShardHandle{streamName='hp-16', Example 710 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide shard='{ShardId: shardId-000000000008,HashKeyRange: {StartingHashKey: 170141183460469231731687303715884105728,EndingHashKey: 191408831393027885698148216680369618943},SequenceNumberRange: {StartingSequenceNumber: 49627894338525452579706688535807620675837922311407665282,}}'} from sequence number EARLIEST_SEQUENCE_NUM with ShardConsumer 1 11:59:23,586 INFO org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - Subtask 1 will start consuming seeded shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000013,HashKeyRange: {StartingHashKey: 276479423123262501563991868538311671808,EndingHashKey: 297747071055821155530452781502797185023},SequenceNumberRange: {StartingSequenceNumber: 49627894338636956305699341651515299267201164118937567442,}}'} from sequence number EARLIEST_SEQUENCE_NUM with ShardConsumer 1 11:59:24,790 INFO org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - Subtask 0 will start consuming seeded shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000012,HashKeyRange: {StartingHashKey: 255211775190703847597530955573826158592,EndingHashKey: 276479423123262501563991868538311671807},SequenceNumberRange: {StartingSequenceNumber: 49627894338614655560500811028373763548928515757431587010,}}'} from sequence number EARLIEST_SEQUENCE_NUM with ShardConsumer 2 event: 4; timestamp: 1648030809282, 2022-03-23T10:20:09.282Z event: 3; timestamp: 1648030797697, 2022-03-23T10:19:57.697Z event: 5; timestamp: 1648030810871, 2022-03-23T10:20:10.871Z 11:59:24,907 INFO org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - Subtask 7 will start consuming seeded shard StreamShardHandle{streamName='hp-16', shard='{ShardId: shardId-000000000011,HashKeyRange: {StartingHashKey: 233944127258145193631070042609340645376,EndingHashKey: 255211775190703847597530955573826158591},SequenceNumberRange: {StartingSequenceNumber: 49627894338592354815302280405232227830655867395925606578,}}'} from sequence number EARLIEST_SEQUENCE_NUM with ShardConsumer 2 event: 7; timestamp: 1648030834105, 2022-03-23T10:20:34.105Z event: 1; timestamp: 1648030794441, 2022-03-23T10:19:54.441Z event: 2; timestamp: 1648030796122, 2022-03-23T10:19:56.122Z event: 8; timestamp: 1648030887171, 2022-03-23T10:21:27.171Z XXXXXXXXXXXXXX Window with 3 events; Watermark: 1648030809281, 2022-03-23T10:20:09.281Z 3 1 2 XXXXXXXXXXXXXX Window with 2 events; Watermark: 1648030834104, 2022-03-23T10:20:34.104Z 4 5 XXXXXXXXXXXXXX Window with 1 events; Watermark: 1648030834104, 2022-03-23T10:20:34.104Z Example 711 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 6 XXXXXXXXXXXXXX Window with 1 events; Watermark: 1648030887170, 2022-03-23T10:21:27.170Z 7 The output is only showing 4 windows (missing the last window containing event 8). This is due to event time and the watermark strategy. The last window cannot close because the pre-built watermark strategies the time never advances beyond the time of the last event that has been read from the stream. But for the window to close, time needs to advance more than 10 seconds after the last event. In this case, the last watermark is 2022-03-23T10:21:27.170Z, but for the session window to close, a watermark 10s and 1ms later is required. If the withIdleness option is removed from the watermark strategy, no session window will ever close, because the “global watermark” of the window operator cannot advance. When the Flink application starts (or if there is data skew), some shards might be consumed faster than others. This can cause some watermarks to be emitted too early from a subtask (the subtask might emit the watermark based on the content of one shard without having consumed from the other shards it’s subscribed to). Ways to mitigate are different watermarking strategies that add a safety buffer (forBoundedOutOfOrderness(Duration.ofSeconds(30)) or explicitly allow late arriving events (allowedLateness(Time.minutes(5)). Set a UUID for all operators When Managed Service for Apache Flink starts a Flink job for an application with a snapshot, the Flink job can fail to start due to certain issues. One of them is |
analytics-java-api-214 | analytics-java-api.pdf | 214 | shards might be consumed faster than others. This can cause some watermarks to be emitted too early from a subtask (the subtask might emit the watermark based on the content of one shard without having consumed from the other shards it’s subscribed to). Ways to mitigate are different watermarking strategies that add a safety buffer (forBoundedOutOfOrderness(Duration.ofSeconds(30)) or explicitly allow late arriving events (allowedLateness(Time.minutes(5)). Set a UUID for all operators When Managed Service for Apache Flink starts a Flink job for an application with a snapshot, the Flink job can fail to start due to certain issues. One of them is operator ID mismatch. Flink expects explicit, consistent operator IDs for Flink job graph operators. If not set explicitly, Flink generates an ID for the operators. This is because Flink uses these operator IDs to uniquely identify the operators in a job graph and uses them to store the state of each operator in a savepoint. The operator ID mismatch issue happens when Flink does not find a 1:1 mapping between the operator IDs of a job graph and the operator IDs defined in a savepoint. This happens when explicit consistent operator IDs are not set and Flink generates operator IDs that may not be consistent with every job graph creation. The likelihood of applications running into this issue is high during maintenance runs. To avoid this, we recommend customers set UUID for all operators in the Flink code. For more information, see the topic Set a UUID for all operators under Production readiness. Set a UUID for all operators 712 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Add ServiceResourceTransformer to the Maven shade plugin Flink uses Java’s Service Provider Interfaces (SPI) to load components such as connectors and formats. Multiple Flink dependencies using SPI may cause clashes in the uber-jar and unexpected application behaviors. We recommend that you add the ServiceResourceTransformer of the Maven shade plugin, defined in the pom.xml. <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-shade-plugin</artifactId> <executions> <execution> <id>shade</id> <phase>package</phase> <goals> <goal>shade</goal> </goals> <configuration> <transformers combine.children="append"> <!-- The service transformer is needed to merge META- INF/services files --> <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/> <!-- ... --> </transformers> </configuration> </execution> </executions> </plugin> Add ServiceResourceTransformer to the Maven shade plugin 713 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Apache Flink stateful functions Stateful Functions is an API that simplifies building distributed stateful applications. It’s based on functions with persistent state that can interact dynamically with strong consistency guarantees. A Stateful Functions application is basically just an Apache Flink Application and hence can be deployed to Managed Service for Apache Flink. However, there are a couple of differences between packaging Stateful Functions for a Kubernetes cluster and for Managed Service for Apache Flink. The most important aspect of a Stateful Functions application is the module configuration contains all necessary runtime information to configure the Stateful Functions runtime. This configuration is usually packaged into a Stateful Functions specific container and deployed on Kubernetes. But that is not possible with Managed Service for Apache Flink. Following is an adaptation of the StateFun Python example for Managed Service for Apache Flink: Apache Flink application template Instead of using a customer container for the Stateful Functions runtime, customers can compile a Flink application jar that just invokes the Stateful Functions runtime and contains the required dependencies. For Flink 1.13, the required dependencies look similar to this: <dependency> <groupId>org.apache.flink</groupId> <artifactId>statefun-flink-distribution</artifactId> <version>3.1.0</version> <exclusions> <exclusion> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> </exclusion> <exclusion> <groupId>log4j</groupId> <artifactId>log4j</artifactId> </exclusion> </exclusions> </dependency> And the main method of the Flink application to invoke the Stateful Function runtime looks like this: Apache Flink application template 714 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide public static void main(String[] args) throws Exception { final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(); StatefulFunctionsConfig stateFunConfig = StatefulFunctionsConfig.fromEnvironment(env); stateFunConfig.setProvider((StatefulFunctionsUniverseProvider) (classLoader, statefulFunctionsConfig) -> { Modules modules = Modules.loadFromClassPath(); return modules.createStatefulFunctionsUniverse(stateFunConfig); }); StatefulFunctionsJob.main(env, stateFunConfig); } Note that these components are generic and independent of the logic that is implemented in the Stateful Function. Location of the module configuration The Stateful Functions module configuration needs to be included in the class path to be discoverable for the Stateful Functions runtime. It's best to include it in the resources folder of the Flink application and package it into the jar file. Similar to a common Apache Flink application, you can then use maven to create an uber jar file and deploy that on Managed Service for Apache Flink. Location of the module configuration 715 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Apache Flink settings Managed Service for Apache Flink is an implementation of the Apache Flink framework. Managed Service for Apache Flink uses the default values described in this section. Some of these values can be set by the Managed Service for Apache Flink applications in |
analytics-java-api-215 | analytics-java-api.pdf | 215 | the Flink application and package it into the jar file. Similar to a common Apache Flink application, you can then use maven to create an uber jar file and deploy that on Managed Service for Apache Flink. Location of the module configuration 715 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Apache Flink settings Managed Service for Apache Flink is an implementation of the Apache Flink framework. Managed Service for Apache Flink uses the default values described in this section. Some of these values can be set by the Managed Service for Apache Flink applications in code, and others cannot be changed. Use the links in this section to learn more about Apache flink settings and which ones are modifiable. This topic contains the following sections: • Apache Flink configuration • State backend • Checkpointing • Savepointing • Heap sizes • Buffer debloating • Modifiable Flink configuration properties • View configured Flink properties Apache Flink configuration Managed Service for Apache Flink provides a default Flink configuration consisting of Apache Flink-recommended values for most properties and a few based on common application profiles. For more information about Flink configuration, see Configuration. Service-provided default configuration works for most applications. However, to tweak Flink configuration properties to improve performance for certain applications with high parallelism, high memory and state usage, or enable new debugging features in Apache Flink, you can change certain properties by requesting a support case. For more information, see AWS Support Center. You can check the current configuration for your application using the Apache Flink Dashboard. Apache Flink configuration 716 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide State backend Managed Service for Apache Flink stores transient data in a state backend. Managed Service for Apache Flink uses the RocksDBStateBackend. Calling setStateBackend to set a different backend has no effect. We enable the following features on the state backend: • Incremental state backend snapshots • Asynchronous state backend snapshots • Local recovery of checkpoints For more information about state backends, see State Backends in the Apache Flink Documentation. Checkpointing Managed Service for Apache Flink uses a default checkpoint configuration with the following values. Some of these values can be changed using CheckpointConfiguration. You must set CheckpointConfiguration.ConfigurationType to CUSTOM for Managed Service for Apache Flink to use modified checkpointing values. Setting Can be modified? How Default Value CheckpointingEnabl ed Modifiable Create Application True Update Application AWS CloudFormation CheckpointInterval Modifiable Create Application 60000 Update Application AWS CloudFormation Modifiable Create Application 5000 MinPauseB etweenCheckpoints State backend 717 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Setting Can be modified? How Default Value Update Application AWS CloudFormation Unaligned checkpoin ts Number of Concurren t Checkpoints Modifiable Support case False Not Modifiable N/A 1 Checkpointing Mode Not Modifiable Checkpoint Retention Policy Not Modifiable Checkpoint Timeout Not Modifiable Not Modifiable N/A N/A N/A N/A Not Modifiable N/A Max Checkpoints Retained Checkpoint and Savepoint Location Savepointing Exactly Once On Failure 60 minutes 1 We store durable checkpoint and savepoint data to a service-owned S3 bucket. By default, when restoring from a savepoint, the resume operation will try to map all state of the savepoint back to the program you are restoring with. If you dropped an operator, by default, restoring from a savepoint that has data that corresponds to the missing operator will fail. You can allow the operation to succeed by setting the AllowNonRestoredState parameter of the application's FlinkRunConfiguration to true. This will allow the resume operation to skip state that cannot be mapped to the new program. For more information, see Allowing Non-Restored State in the Apache Flink documentation. Savepointing 718 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Heap sizes Managed Service for Apache Flink allocates each KPU 3 GiB of JVM heap, and reserves 1 GiB for native code allocations. For information about increasing your application capacity, see the section called “Implement application scaling”. For more information about JVM heap sizes, see Configuration in the Apache Flink documentation. Buffer debloating Buffer debloating can help applications with high backpressure. If your application experiences failed checkpoints/savepoints, enabling this feature could be useful. To do this, request a support case. For more information, see The Buffer Debloating Mechanism in the Apache Flink documentation. Modifiable Flink configuration properties Following are Flink configuration settings that you can modify using a support case. You can modify more than one property at a time, and for multiple applications at the same time by specifying the application prefix. If there are other Flink configuration properties outside this list you want to modify, specify the exact property in your case. Restart strategy From Flink 1.19 and later, we use the exponential-delay restart strategy by default. All previous versions use the fixed-delay restart strategy by default. restart-strategy: |
analytics-java-api-216 | analytics-java-api.pdf | 216 | For more information, see The Buffer Debloating Mechanism in the Apache Flink documentation. Modifiable Flink configuration properties Following are Flink configuration settings that you can modify using a support case. You can modify more than one property at a time, and for multiple applications at the same time by specifying the application prefix. If there are other Flink configuration properties outside this list you want to modify, specify the exact property in your case. Restart strategy From Flink 1.19 and later, we use the exponential-delay restart strategy by default. All previous versions use the fixed-delay restart strategy by default. restart-strategy: restart-strategy.fixed-delay.delay: restart-strategy.exponential-delay.backoff-muliplier: restart-strategy.exponential-delay.initial-backoff: restart-strategy.exponential-delay.jitter-factor: restart-strategy.exponential-delay.reset-backoff-threshold: Heap sizes 719 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Checkpoints and state backends state.backend: state.backend.fs.memory-threshold: state.backend.incremental: Checkpointing execution.checkpointing.unaligned: execution.checkpointing.interval-during-backlog: RocksDB native metrics RocksDB Native Metrics are not shipped to CloudWatch. Once enabled, these metrics can be accessed either from the Flink dashboard or the Flink REST API with custom tooling. Managed Service for Apache Flink enables customers to access the latest Flink REST API (or the supported version you are using) in read-only mode using the CreateApplicationPresignedUrl API. This API is used by Flink’s own dashboard, but it can also be used by custom monitoring tools. state.backend.rocksdb.metrics.actual-delayed-write-rate: state.backend.rocksdb.metrics.background-errors: state.backend.rocksdb.metrics.block-cache-capacity: state.backend.rocksdb.metrics.block-cache-pinned-usage: state.backend.rocksdb.metrics.block-cache-usage: state.backend.rocksdb.metrics.column-family-as-variable: state.backend.rocksdb.metrics.compaction-pending: state.backend.rocksdb.metrics.cur-size-active-mem-table: state.backend.rocksdb.metrics.cur-size-all-mem-tables: state.backend.rocksdb.metrics.estimate-live-data-size: state.backend.rocksdb.metrics.estimate-num-keys: Checkpoints and state backends 720 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide state.backend.rocksdb.metrics.estimate-pending-compaction-bytes: state.backend.rocksdb.metrics.estimate-table-readers-mem: state.backend.rocksdb.metrics.is-write-stopped: state.backend.rocksdb.metrics.mem-table-flush-pending: state.backend.rocksdb.metrics.num-deletes-active-mem-table: state.backend.rocksdb.metrics.num-deletes-imm-mem-tables: state.backend.rocksdb.metrics.num-entries-active-mem-table: state.backend.rocksdb.metrics.num-entries-imm-mem-tables: state.backend.rocksdb.metrics.num-immutable-mem-table: state.backend.rocksdb.metrics.num-live-versions: state.backend.rocksdb.metrics.num-running-compactions: state.backend.rocksdb.metrics.num-running-flushes: state.backend.rocksdb.metrics.num-snapshots: state.backend.rocksdb.metrics.size-all-mem-tables: RocksDB options state.backend.rocksdb.compaction.style: state.backend.rocksdb.memory.partitioned-index-filters: state.backend.rocksdb.thread.num: Advanced state backends options state.storage.fs.memory-threshold: Full TaskManager options task.cancellation.timeout: RocksDB options 721 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide taskmanager.jvm-exit-on-oom: taskmanager.numberOfTaskSlots: taskmanager.slot.timeout: taskmanager.network.memory.fraction: taskmanager.network.memory.max: taskmanager.network.request-backoff.initial: taskmanager.network.request-backoff.max: taskmanager.network.memory.buffer-debloat.enabled: taskmanager.network.memory.buffer-debloat.period: taskmanager.network.memory.buffer-debloat.samples: taskmanager.network.memory.buffer-debloat.threshold-percentages: Memory configuration taskmanager.memory.jvm-metaspace.size: taskmanager.memory.jvm-overhead.fraction: taskmanager.memory.jvm-overhead.max: taskmanager.memory.managed.consumer-weights: taskmanager.memory.managed.fraction: taskmanager.memory.network.fraction: taskmanager.memory.network.max: taskmanager.memory.segment-size: taskmanager.memory.task.off-heap.size: RPC / Akka akka.ask.timeout: Memory configuration 722 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide akka.client.timeout: akka.framesize: akka.lookup.timeout: akka.tcp.timeout: Client client.timeout: Advanced cluster options cluster.intercept-user-system-exit: cluster.processes.halt-on-fatal-error: Filesystem configurations fs.s3.connection.maximum: fs.s3a.connection.maximum: fs.s3a.threads.max: s3.upload.max.concurrent.uploads: Advanced fault tolerance options heartbeat.timeout: jobmanager.execution.failover-strategy: Memory configuration jobmanager.memory.heap.size: Metrics metrics.latency.interval: Client 723 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Advanced options for the REST endpoint and client rest.flamegraph.enabled: rest.server.numThreads: Advanced SSL security options security.ssl.internal.handshake-timeout: Advanced scheduling options slot.request.timeout: Advanced options for Flink web UI web.timeout: View configured Flink properties You can view Apache Flink properties you have configured yourself or requested to be modified through a support case via the Apache Flink Dashboard and following these steps: 1. Go to the Flink Dashboard 2. Choose Job Manager in the left-hand side navigation pane. 3. Choose Configuration to view the list of Flink properties. Advanced options for the REST endpoint and client 724 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Configure Managed Service for Apache Flink to access resources in an Amazon VPC You can configure a Managed Service for Apache Flink application to connect to private subnets in a virtual private cloud (VPC) in your account. Use Amazon Virtual Private Cloud (Amazon VPC) to create a private network for resources such as databases, cache instances, or internal services. Connect your application to the VPC to access private resources during execution. This topic contains the following sections: • Amazon VPC concepts • VPC application permissions • Internet and service access for a VPC-connected Managed Service for Apache Flink application • Use the Managed Service for Apache Flink VPC API • Example: Use a VPC to access data in an Amazon MSK cluster Amazon VPC concepts Amazon VPC is the networking layer for Amazon EC2. If you're new to Amazon EC2, see What is Amazon EC2? in the Amazon EC2 User Guide for Linux Instances to get a brief overview. The following are the key concepts for VPCs: • A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. • A subnet is a range of IP addresses in your VPC. • A route table contains a set of rules, called routes, that are used to determine where network traffic is directed. • An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic. • A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network. Amazon VPC concepts 725 Managed Service for Apache Flink Managed Service for Apache Flink |
analytics-java-api-217 | analytics-java-api.pdf | 217 | in your VPC and the internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic. • A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network. Amazon VPC concepts 725 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide For more information about the Amazon VPC service, see the Amazon Virtual Private Cloud User Guide. Managed Service for Apache Flink creates elastic network interfaces in one of the subnets provided in your VPC configuration for the application. The number of elastic network interfaces created in your VPC subnets may vary, depending on the parallelism and parallelism per KPU of the application. For more information about application scaling, see Implement application scaling. Note VPC configurations are not supported for SQL applications. Note The Managed Service for Apache Flink service manages the checkpoint and snapshot state for applications that have a VPC configuration. VPC application permissions This section describes the permission policies your application will need to work with your VPC. For more information about using permissions policies, see Identity and Access Management for Amazon Managed Service for Apache Flink. The following permissions policy grants your application the necessary permissions to interact with a VPC. To use this permission policy, add it to your application's execution role. Add a permissions policy for accessing an Amazon VPC { "Version": "2012-10-17", "Statement": [ { "Sid": "VPCReadOnlyPermissions", "Effect": "Allow", "Action": [ "ec2:DescribeVpcs", "ec2:DescribeSubnets", VPC application permissions 726 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "ec2:DescribeSecurityGroups", "ec2:DescribeDhcpOptions" ], "Resource": "*" }, { "Sid": "ENIReadWritePermissions", "Effect": "Allow", "Action": [ "ec2:CreateNetworkInterface", "ec2:CreateNetworkInterfacePermission", "ec2:DescribeNetworkInterfaces", "ec2:DeleteNetworkInterface" ], "Resource": "*" } ] } Note When you specify application resources using the console (such as CloudWatch Logs or an Amazon VPC), the console modifies your application execution role to grant permission to access those resources. You only need to manually modify your application's execution role if you create your application without using the console. Internet and service access for a VPC-connected Managed Service for Apache Flink application By default, when you connect a Managed Service for Apache Flink application to a VPC in your account, it does not have access to the internet unless the VPC provides access. If the application needs internet access, the following need to be true: • The Managed Service for Apache Flink application should only be configured with private subnets. • The VPC must contain a NAT gateway or instance in a public subnet. Establish internet and service access for a VPC-connected Managed Service for Apache Flink application 727 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • A route must exist for outbound traffic from the private subnets to the NAT gateway in a public subnet. Note Several services offer VPC endpoints. You can use VPC endpoints to connect to Amazon services from within a VPC without internet access. Whether a subnet is public or private depends on its route table. Every route table has a default route, which determines the next hop for packets that have a public destination. • For a Private subnet: The default route points to a NAT gateway (nat-...) or NAT instance (eni-...). • For a Public subnet: The default route points to an internet gateway (igw-...). Once you configure your VPC with a public subnet (with a NAT) and one or more private subnets, do the following to identify your private and public subnets: • In the VPC console, from the navigation pane, choose Subnets. • Select a subnet, and then choose the Route Table tab. Verify the default route: • Public subnet: Destination: 0.0.0.0/0, Target: igw-… • Private subnet: Destination: 0.0.0.0/0, Target: nat-… or eni-… To associate the Managed Service for Apache Flink application with private subnets: • Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink • On the Managed Service for Apache Flink applications page, choose your application, and choose Application details. • On the page for your application, choose Configure. • In the VPC Connectivity section, choose the VPC to associate with your application. Choose the subnets and security group associated with your VPC that you want the application to use to access VPC resources. • Choose Update. Establish internet and service access for a VPC-connected Managed Service for Apache Flink application 728 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Related information Creating a VPC with Public and Private Subnets NAT gateway basics Use the Managed Service for |
analytics-java-api-218 | analytics-java-api.pdf | 218 | page, choose your application, and choose Application details. • On the page for your application, choose Configure. • In the VPC Connectivity section, choose the VPC to associate with your application. Choose the subnets and security group associated with your VPC that you want the application to use to access VPC resources. • Choose Update. Establish internet and service access for a VPC-connected Managed Service for Apache Flink application 728 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Related information Creating a VPC with Public and Private Subnets NAT gateway basics Use the Managed Service for Apache Flink VPC API Use the following Managed Service for Apache Flink API operations to manage VPCs for your application. For information on using the Managed Service for Apache Flink API, see API example code. Create application Use the CreateApplication action to add a VPC configuration to your application during creation. The following example request code for the CreateApplication action includes a VPC configuration when the application is created: { "ApplicationName":"MyApplication", "ApplicationDescription":"My-Application-Description", "RuntimeEnvironment":"FLINK-1_15", "ServiceExecutionRole":"arn:aws:iam::123456789123:role/myrole", "ApplicationConfiguration": { "ApplicationCodeConfiguration":{ "CodeContent":{ "S3ContentLocation":{ "BucketARN":"arn:aws:s3:::amzn-s3-demo-bucket", "FileKey":"myflink.jar", "ObjectVersion":"AbCdEfGhIjKlMnOpQrStUvWxYz12345" } }, "CodeContentType":"ZIPFILE" }, "FlinkApplicationConfiguration":{ "ParallelismConfiguration":{ "ConfigurationType":"CUSTOM", "Parallelism":2, "ParallelismPerKPU":1, "AutoScalingEnabled":true } Related information 729 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide }, "VpcConfigurations": [ { "SecurityGroupIds": [ "sg-0123456789abcdef0" ], "SubnetIds": [ "subnet-0123456789abcdef0" ] } ] } } AddApplicationVpcConfiguration Use the AddApplicationVpcConfiguration action to add a VPC configuration to your application after it has been created. The following example request code for the AddApplicationVpcConfiguration action adds a VPC configuration to an existing application: { "ApplicationName": "MyApplication", "CurrentApplicationVersionId": 9, "VpcConfiguration": { "SecurityGroupIds": [ "sg-0123456789abcdef0" ], "SubnetIds": [ "subnet-0123456789abcdef0" ] } } DeleteApplicationVpcConfiguration Use the DeleteApplicationVpcConfiguration action to remove a VPC configuration from your application. The following example request code for the AddApplicationVpcConfiguration action removes an existing VPC configuration from an application: { "ApplicationName": "MyApplication", "CurrentApplicationVersionId": 9, "VpcConfigurationId": "1.1" } AddApplicationVpcConfiguration 730 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Update application Use the UpdateApplication action to update all of an application's VPC configurations at once. The following example request code for the UpdateApplication action updates all of the VPC configurations for an application: { "ApplicationConfigurationUpdate": { "VpcConfigurationUpdates": [ { "SecurityGroupIdUpdates": [ "sg-0123456789abcdef0" ], "SubnetIdUpdates": [ "subnet-0123456789abcdef0" ], "VpcConfigurationId": "2.1" } ] }, "ApplicationName": "MyApplication", "CurrentApplicationVersionId": 9 } Example: Use a VPC to access data in an Amazon MSK cluster For a complete tutorial about how to access data from an Amazon MSK Cluster in a VPC, see MSK Replication. Update application 731 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Troubleshoot Managed Service for Apache Flink The following topics can help you troubleshoot problems that you might encounter with Amazon Managed Service for Apache Flink. Choose the appropriate topic to review solutions. Topics • Development troubleshooting • Runtime troubleshooting Development troubleshooting This section contains information about diagnosing and fixing development issues with your Managed Service for Apache Flink application. Topics • System rollback best practices • Hudi configuration best practices • Apache Flink Flame Graphs • Credential provider issue with EFO connector 1.15.2 • Applications with unsupported Kinesis connectors • Compile error: "Could not resolve dependencies for project" • Invalid choice: "kinesisanalyticsv2" • UpdateApplication action isn't reloading application code • S3 StreamingFileSink FileNotFoundExceptions • FlinkKafkaConsumer issue with stop with savepoint • Flink 1.15 Async Sink Deadlock • Amazon Kinesis data streams source processing out of order during re-sharding • Real-time vector embedding blueprints FAQ and troubleshooting Development troubleshooting 732 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide System rollback best practices With automatic system rollback and operations visibility capabilities in Amazon Managed Service for Apache Flink, you can identify and resolve issues with your applications. System rollbacks If your application update or scaling operation fails due to a customer error, such as a code bug or permission issue, Amazon Managed Service for Apache Flink automatically attempts to roll back to the previous running version if you have opted in to this functionality. For more information, see Enable system rollbacks for your Managed Service for Apache Flink application. If this autorollback fails or you have not opted in or opted out, your application will be placed into the READY state. To update your application, complete the following steps: Manual rollback If the application is not progressing and is in a transient state for long, or if the application successfully transitioned to Running, but you see downstream issues like processing errors in a successfully updated Flink application, you can manually roll it back using the RollbackApplication API. 1. Call RollbackApplication - this will revert to the previous running version and restore the previous state. 2. Monitor the rollback operation using the DescribeApplicationOperation API. 3. If rollback fails, use the previous system rollback steps. Operations visibility |
analytics-java-api-219 | analytics-java-api.pdf | 219 | be placed into the READY state. To update your application, complete the following steps: Manual rollback If the application is not progressing and is in a transient state for long, or if the application successfully transitioned to Running, but you see downstream issues like processing errors in a successfully updated Flink application, you can manually roll it back using the RollbackApplication API. 1. Call RollbackApplication - this will revert to the previous running version and restore the previous state. 2. Monitor the rollback operation using the DescribeApplicationOperation API. 3. If rollback fails, use the previous system rollback steps. Operations visibility The ListApplicationOperations API shows the history of all customer and system operations on your application. 1. Get the operationId of the failed operation from the list. 2. Call DescribeApplicationOperation and check the status and statusDescription. 3. If an operation failed, the description points to a potential error to investigate. Common error code bugs: Use the rollback capabilities to revert to the last working version. Resolve bugs and retry the update. System rollback best practices 733 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Permission issues: Use the DescribeApplicationOperation to see the required permissions. Update application permissions and retry. Amazon Managed Service for Apache Flink service issues: Check the AWS Health Dashboard or open a support case. Hudi configuration best practices To run Hudi connectors on Managed Service for Apache Flink we recommend the following configuration changes. Disable hoodie.embed.timeline.server Hudi connector on Flink sets up an embedded timeline (TM) server on the Flink jobmanager (JM) to cache metadata to improve performance when job parallelism is high. We recommend that you disable this embedded server on Managed Service for Apache Flink because we disable non-Flink communication between JM and TM. If this server is enabled, Hudi writes will first attempt to connect to the embedded server on JM, and then fall back to reading metadata from Amazon S3. This means that Hudi incurs a connection timeout that delays Hudi writes and causes a performance impact on Managed Service for Apache Flink. Apache Flink Flame Graphs Flame Graphs are enabled by default on applications in Managed Service for Apache Flink versions that support it. Flame Graphs may affect application performance if you keep the graph open, as mentioned in Flink documentation. If you want to disable Flame Graphs for your application, create a case to request it to be disabled for your application ARN. For more information, see the AWS Support Center. Credential provider issue with EFO connector 1.15.2 There is a known issue with Kinesis Data Streams EFO connector versions up to 1.15.2 where the FlinkKinesisConsumer is not respecting Credential Provider configuration. Valid configurations are being disregarded due to the issue, which results in the AUTO credential provider being used. This can cause a problem using cross-account access to Kinesis using EFO connector. To resolve this error please use EFO connector version 1.15.3 or higher. Hudi configuration best practices 734 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Applications with unsupported Kinesis connectors Managed Service for Apache Flink for Apache Flink version 1.15 or later will automatically reject applications from starting or updating if they are using unsupported Kinesis Connector versions (pre-version 1.15.2) bundled into application JARs or archives (ZIP). Rejection error You will see the following error when submitting create / update application calls through: An error occurred (InvalidArgumentException) when calling the CreateApplication operation: An unsupported Kinesis connector version has been detected in the application. Please update flink-connector-kinesis to any version equal to or newer than 1.15.2. For more information refer to connector fix: https://issues.apache.org/jira/browse/ FLINK-23528 Steps to remediate • Update the application’s dependency on flink-connector-kinesis. If you are using Maven as your project’s build tool, follow Update a Maven dependency . If you are using Gradle, follow Update a Gradle dependency . • Repackage the application. • Upload to an Amazon S3 bucket. • Resubmit the create / update application request with the revised application just uploaded to the Amazon S3 bucket. • If you continue to see the same error message, re-check your application dependencies. If the problem persists please create a support ticket. Update a Maven dependency 1. Open the project’s pom.xml. 2. Find the project’s dependencies. They look like: <project> ... Applications with unsupported Kinesis connectors 735 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide <dependencies> ... <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-connector-kinesis</artifactId> </dependency> ... </dependencies> ... </project> 3. Update flink-connector-kinesis to a version that is equal to or newer than 1.15.2. For instance: <project> ... <dependencies> ... <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-connector-kinesis</artifactId> <version>1.15.2</version> </dependency> ... </dependencies> ... </project> Applications with unsupported Kinesis connectors 736 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Update a Gradle dependency 1. Open the project’s build.gradle (or build.gradle.kts for Kotlin |
analytics-java-api-220 | analytics-java-api.pdf | 220 | project’s pom.xml. 2. Find the project’s dependencies. They look like: <project> ... Applications with unsupported Kinesis connectors 735 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide <dependencies> ... <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-connector-kinesis</artifactId> </dependency> ... </dependencies> ... </project> 3. Update flink-connector-kinesis to a version that is equal to or newer than 1.15.2. For instance: <project> ... <dependencies> ... <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-connector-kinesis</artifactId> <version>1.15.2</version> </dependency> ... </dependencies> ... </project> Applications with unsupported Kinesis connectors 736 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Update a Gradle dependency 1. Open the project’s build.gradle (or build.gradle.kts for Kotlin applications). 2. Find the project’s dependencies. They look like: ... dependencies { ... implementation("org.apache.flink:flink-connector-kinesis") ... } ... 3. Update flink-connector-kinesis to a version that is equal to or newer than 1.15.2. For instance: ... dependencies { ... implementation("org.apache.flink:flink-connector-kinesis:1.15.2") ... } ... Compile error: "Could not resolve dependencies for project" In order to compile the Managed Service for Apache Flink sample applications, you must first download and compile the Apache Flink Kinesis connector and add it to your local Maven Compile error: "Could not resolve dependencies for project" 737 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide repository. If the connector hasn't been added to your repository, a compile error similar to the following appears: Could not resolve dependencies for project your project name: Failure to find org.apache.flink:flink-connector-kinesis_2.11:jar:1.8.2 in https:// repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced To resolve this error, you must download the Apache Flink source code (version 1.8.2 from https:// flink.apache.org/downloads.html) for the connector. For instructions about how to download, compile, and install the Apache Flink source code, see the section called “Using the Apache Flink Kinesis Streams connector with previous Apache Flink versions”. Invalid choice: "kinesisanalyticsv2" To use v2 of the Managed Service for Apache Flink API, you need the latest version of the AWS Command Line Interface (AWS CLI). For information about upgrading the AWS CLI, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. UpdateApplication action isn't reloading application code The UpdateApplication action will not reload application code with the same file name if no S3 object version is specified. To reload application code with the same file name, enable versioning on your S3 bucket, and specify the new object version using the ObjectVersionUpdate parameter. For more information about enabling object versioning in an S3 bucket, see Enabling or Disabling Versioning. S3 StreamingFileSink FileNotFoundExceptions Managed Service for Apache Flink applications can run into In-progress part file FileNotFoundException when starting from snapshots if an In-progress part file referred to by its savepoint is missing. When this failure mode occurs, the Managed Service for Apache Flink application’s operator state is usually non-recoverable and must be restarted without snapshot using SKIP_RESTORE_FROM_SNAPSHOT. See following example stacktrace: java.io.FileNotFoundException: No such file or directory: s3://amzn-s3-demo-bucket/ pathj/INSERT/2023/4/19/7/_part-2-1234_tmp_12345678-1234-1234-1234-123456789012 Invalid choice: "kinesisanalyticsv2" 738 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2231) at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2149) at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2088) at org.apache.hadoop.fs.s3a.S3AFileSystem.open(S3AFileSystem.java:699) at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:950) at org.apache.flink.fs.s3hadoop.HadoopS3AccessHelper.getObject(HadoopS3AccessHelper.java:98) at org.apache.flink.fs.s3.common.writer.S3RecoverableMultipartUploadFactory.recoverInProgressPart(S3RecoverableMultipartUploadFactory.java:97) ... Flink StreamingFileSink writes records to filesystems supported by the File Systems. Given that the incoming streams can be unbounded, data is organized into part files of finite size with new files added as data is written. Part lifecycle and rollover policy determine the timing, size and the naming of the part files. During checkpointing and savepointing (snapshotting), all Pending files are renamed and committed. However, In-progress part files are not committed but renamed and their reference is kept within checkpoint or savepoint metadata to be used when restoring jobs. These In-progress part files will eventually rollover to Pending, renamed and committed by a subsequent checkpoint or savepoint. Following are the root causes and mitigation for missing In-progress part file: • Stale snapshot used to start the Managed Service for Apache Flink application – only the latest system snapshot taken when an application is stopped or updated can be used to start a Managed Service for Apache Flink application with Amazon S3 StreamingFileSink. To avoid this class of failure, use the latest system snapshot. • This happens for example when you pick a snapshot created using CreateSnapshot instead of a system-triggered Snapshot during stop or update. The older snapshot’s savepoint keeps an out-of-date reference to In-progress part file that has been renamed and committed by subsequent checkpoint or savepoint. • This can also happen when a system triggered snapshot from non-latest Stop/Update event is picked. An example is an application with system snapshot disabled but has RESTORE_FROM_LATEST_SNAPSHOT configured. Generally, Managed Service for Apache Flink S3 StreamingFileSink FileNotFoundExceptions 739 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide applications with Amazon S3 StreamingFileSink should always have |
analytics-java-api-221 | analytics-java-api.pdf | 221 | example when you pick a snapshot created using CreateSnapshot instead of a system-triggered Snapshot during stop or update. The older snapshot’s savepoint keeps an out-of-date reference to In-progress part file that has been renamed and committed by subsequent checkpoint or savepoint. • This can also happen when a system triggered snapshot from non-latest Stop/Update event is picked. An example is an application with system snapshot disabled but has RESTORE_FROM_LATEST_SNAPSHOT configured. Generally, Managed Service for Apache Flink S3 StreamingFileSink FileNotFoundExceptions 739 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide applications with Amazon S3 StreamingFileSink should always have system snapshot enabled and RESTORE_FROM_LATEST_SNAPSHOT configured. • In-progress part file removed – As the In-progress part file is located in an S3 bucket, it can be removed by other components or actors which have access to the bucket. • This can happen when you have stopped your app for too long and the In-progress part file referred to by your app’s savepoint has been removed by S3 bucket MultiPartUpload lifecycle policy. To avoid this class of failure, make sure that your S3 Bucket MPU lifecycle policy covers a sufficiently large period for your use case. • This can also happen when the In-progress part file has been removed manually or by another one of your system’s components. To avoid this class of failure, please make sure that In- progress part files are not removed by other actors or components. • Race condition where an automated checkpoint is triggered after savepoint – This affects Managed Service for Apache Flink versions up to and including 1.13. This issue is fixed in Managed Service for Apache Flink version 1.15. Migrate your application to the latest version of Managed Service for Apache Flink to prevent recurrence. We also suggest migrating from StreamingFileSink to FileSink. • When applications are stopped or updated, Managed Service for Apache Flink triggers a savepoint and stops the application in two steps. If an automated checkpoint triggers between the two steps, the savepoint will be unusable as its In-progress part file would be renamed and potentially committed. FlinkKafkaConsumer issue with stop with savepoint When using the legacy FlinkKafkaConsumer there is a possibility your application may get stuck in UPDATING, STOPPING or SCALING, if you have system snapshots enabled. There is no published fix available for this issue, therefore we recommend you upgrade to the new KafkaSource to mitigate this issue. If you are using the FlinkKafkaConsumer with snapshots enabled, there is a possibility when the Flink job processes a stop with savepoint API request, the FlinkKafkaConsumer can fail with a runtime error reporting a ClosedException. Under these conditions the Flink application becomes stuck, manifesting as Failed Checkpoints. FlinkKafkaConsumer issue with stop with savepoint 740 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Flink 1.15 Async Sink Deadlock There is a known issue with AWS connectors for Apache Flink implementing AsyncSink interface. This affects applications using Flink 1.15 with the following connectors: • For Java applications: • KinesisStreamsSink – org.apache.flink:flink-connector-kinesis • KinesisStreamsSink – org.apache.flink:flink-connector-aws-kinesis-streams • KinesisFirehoseSink – org.apache.flink:flink-connector-aws-kinesis-firehose • DynamoDbSink – org.apache.flink:flink-connector-dynamodb • Flink SQL/TableAPI/Python applications: • kinesis – org.apache.flink:flink-sql-connector-kinesis • kinesis – org.apache.flink:flink-sql-connector-aws-kinesis-streams • firehose – org.apache.flink:flink-sql-connector-aws-kinesis-firehose • dynamodb – org.apache.flink:flink-sql-connector-dynamodb Affected applications will experience the following symptoms: • Flink job is in RUNNING state, but not processing data; • There are no job restarts; • Checkpoints are timing out. The issue is caused by a bug in AWS SDK resulting in it not surfacing certain errors to the caller when using the async HTTP client. This results in the sink waiting indefinitely for an “in-flight request” to complete during a checkpoint flush operation. This issue had been fixed in AWS SDK starting from version 2.20.144. Following are instructions on how to update affected connectors to use the new version of AWS SDK in your applications: Topics • Update Java applications • Update Python applications Flink 1.15 Async Sink Deadlock 741 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Update Java applications Follow the procedures below to update Java applications: flink-connector-kinesis If the application uses flink-connector-kinesis: Kinesis connector uses shading to package some dependencies, including the AWS SDK, into the connector jar. To update the AWS SDK version, use the following procedure to replace these shaded classes: Maven 1. Add Kinesis connector and required AWS SDK modules as project dependencies. 2. Configure maven-shade-plugin: a. Add filter to exclude shaded AWS SDK classes when copying content of the Kinesis connector jar. b. Add relocation rule to move updated AWS SDK classes to package, expected by Kinesis connector. pom.xml <project> ... <dependencies> ... <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-connector-kinesis</artifactId> <version>1.15.4</version> </dependency> <dependency> <groupId>software.amazon.awssdk</groupId> <artifactId>kinesis</artifactId> <version>2.20.144</version> </dependency> <dependency> <groupId>software.amazon.awssdk</groupId> Flink 1.15 Async Sink Deadlock 742 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide <artifactId>netty-nio-client</artifactId> |
analytics-java-api-222 | analytics-java-api.pdf | 222 | To update the AWS SDK version, use the following procedure to replace these shaded classes: Maven 1. Add Kinesis connector and required AWS SDK modules as project dependencies. 2. Configure maven-shade-plugin: a. Add filter to exclude shaded AWS SDK classes when copying content of the Kinesis connector jar. b. Add relocation rule to move updated AWS SDK classes to package, expected by Kinesis connector. pom.xml <project> ... <dependencies> ... <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-connector-kinesis</artifactId> <version>1.15.4</version> </dependency> <dependency> <groupId>software.amazon.awssdk</groupId> <artifactId>kinesis</artifactId> <version>2.20.144</version> </dependency> <dependency> <groupId>software.amazon.awssdk</groupId> Flink 1.15 Async Sink Deadlock 742 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide <artifactId>netty-nio-client</artifactId> <version>2.20.144</version> </dependency> <dependency> <groupId>software.amazon.awssdk</groupId> <artifactId>sts</artifactId> <version>2.20.144</version> </dependency> ... </dependencies> ... <build> ... <plugins> ... <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-shade-plugin</artifactId> <version>3.1.1</version> <executions> <execution> <phase>package</phase> <goals> <goal>shade</goal> </goals> <configuration> ... <filters> ... <filter> <artifact>org.apache.flink:flink-connector- kinesis</artifact> <excludes> <exclude>org/apache/flink/kinesis/ shaded/software/amazon/awssdk/**</exclude> <exclude>org/apache/flink/kinesis/ shaded/org/reactivestreams/**</exclude> <exclude>org/apache/flink/kinesis/ shaded/io/netty/**</exclude> <exclude>org/apache/flink/kinesis/ shaded/com/typesafe/netty/**</exclude> </excludes> </filter> ... Flink 1.15 Async Sink Deadlock 743 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide </filters> <relocations> ... <relocation> <pattern>software.amazon.awssdk</pattern> <shadedPattern>org.apache.flink.kinesis.shaded.software.amazon.awssdk</ shadedPattern> </relocation> <relocation> <pattern>org.reactivestreams</pattern> <shadedPattern>org.apache.flink.kinesis.shaded.org.reactivestreams</ shadedPattern> </relocation> <relocation> <pattern>io.netty</pattern> <shadedPattern>org.apache.flink.kinesis.shaded.io.netty</shadedPattern> </relocation> <relocation> <pattern>com.typesafe.netty</pattern> <shadedPattern>org.apache.flink.kinesis.shaded.com.typesafe.netty</ shadedPattern> </relocation> ... </relocations> ... </configuration> </execution> </executions> </plugin> ... </plugins> ... </build> </project> Gradle 1. Add Kinesis connector and required AWS SDK modules as project dependencies. Flink 1.15 Async Sink Deadlock 744 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 2. Adjust shadowJar configuration: a. Exclude shaded AWS SDK classes when copying content of the Kinesis connector jar. b. Relocate updated AWS SDK classes to a package expected by Kinesis connector. build.gradle ... dependencies { ... flinkShadowJar("org.apache.flink:flink-connector-kinesis:1.15.4") flinkShadowJar("software.amazon.awssdk:kinesis:2.20.144") flinkShadowJar("software.amazon.awssdk:sts:2.20.144") flinkShadowJar("software.amazon.awssdk:netty-nio-client:2.20.144") ... } ... shadowJar { configurations = [project.configurations.flinkShadowJar] exclude("software/amazon/kinesis/shaded/software/amazon/awssdk/**/*") exclude("org/apache/flink/kinesis/shaded/org/reactivestreams/**/*.class") exclude("org/apache/flink/kinesis/shaded/io/netty/**/*.class") exclude("org/apache/flink/kinesis/shaded/com/typesafe/netty/**/*.class") relocate("software.amazon.awssdk", "org.apache.flink.kinesis.shaded.software.amazon.awssdk") relocate("org.reactivestreams", "org.apache.flink.kinesis.shaded.org.reactivestreams") relocate("io.netty", "org.apache.flink.kinesis.shaded.io.netty") relocate("com.typesafe.netty", "org.apache.flink.kinesis.shaded.com.typesafe.netty") } ... Other affected connectors If the application uses another affected connector: Flink 1.15 Async Sink Deadlock 745 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide In order to update the AWS SDK version, the SDK version should be enforced in the project build configuration. Maven Add AWS SDK bill of materials (BOM) to the dependency management section of the pom.xml file to enforce SDK version for the project. pom.xml <project> ... <dependencyManagement> <dependencies> ... <dependency> <groupId>software.amazon.awssdk</groupId> <artifactId>bom</artifactId> <version>2.20.144</version> <scope>import</scope> <type>pom</type> </dependency> ... </dependencies> </dependencyManagement> ... </project> Gradle Add platform dependency on the AWS SDK bill of materials (BOM) to enforce SDK version for the project. This requires Gradle 5.0 or newer: build.gradle ... dependencies { ... flinkShadowJar(platform("software.amazon.awssdk:bom:2.20.144")) ... } ... Flink 1.15 Async Sink Deadlock 746 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Update Python applications Python applications can use connectors in 2 different ways: packaging connectors and other Java dependencies as part of single uber-jar, or use connector jar directly. To fix applications affected by Async Sink deadlock: • If the application uses an uber jar, follow the instructions for Update Java applications . • To rebuild connector jars from source, use the following steps: Building connectors from source: Prerequisites, similar to Flink build requirements: • Java 11 • Maven 3.2.5 flink-sql-connector-kinesis 1. Download source code for Flink 1.15.4: wget https://archive.apache.org/dist/flink/flink-1.15.4/flink-1.15.4-src.tgz 2. Uncompress source code: tar -xvf flink-1.15.4-src.tgz 3. Navigate to kinesis connector directory cd flink-1.15.4/flink-connectors/flink-connector-kinesis/ 4. Compile and install connector jar, specifying required AWS SDK version. To speed up build use -DskipTests to skip test execution and -Dfast to skip additional source code checks: mvn clean install -DskipTests -Dfast -Daws.sdkv2.version=2.20.144 5. Navigate to kinesis connector directory cd ../flink-sql-connector-kinesis Flink 1.15 Async Sink Deadlock 747 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 6. Compile and install sql connector jar: mvn clean install -DskipTests -Dfast 7. Resulting jar will be available at: target/flink-sql-connector-kinesis-1.15.4.jar flink-sql-connector-aws-kinesis-streams 1. Download source code for Flink 1.15.4: wget https://archive.apache.org/dist/flink/flink-1.15.4/flink-1.15.4-src.tgz 2. Uncompress source code: tar -xvf flink-1.15.4-src.tgz 3. Navigate to kinesis connector directory cd flink-1.15.4/flink-connectors/flink-connector-aws-kinesis-streams/ 4. Compile and install connector jar, specifying required AWS SDK version. To speed up build use -DskipTests to skip test execution and -Dfast to skip additional source code checks: mvn clean install -DskipTests -Dfast -Daws.sdk.version=2.20.144 5. Navigate to kinesis connector directory cd ../flink-sql-connector-aws-kinesis-streams 6. Compile and install sql connector jar: mvn clean install -DskipTests -Dfast 7. Resulting jar will be available at: target/flink-sql-connector-aws-kinesis-streams-1.15.4.jar Flink 1.15 Async Sink Deadlock 748 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide flink-sql-connector-aws-kinesis-firehose 1. Download source code for Flink 1.15.4: wget https://archive.apache.org/dist/flink/flink-1.15.4/flink-1.15.4-src.tgz 2. Uncompress source code: tar -xvf flink-1.15.4-src.tgz 3. Navigate to connector directory cd flink-1.15.4/flink-connectors/flink-connector-aws-kinesis-firehose/ 4. Compile and install connector jar, specifying required AWS SDK version. To speed up build use -DskipTests to skip test execution and -Dfast to skip additional source code checks: mvn clean install -DskipTests -Dfast -Daws.sdk.version=2.20.144 5. Navigate to sql connector directory cd ../flink-sql-connector-aws-kinesis-firehose 6. Compile and install sql connector jar: mvn clean install -DskipTests -Dfast 7. |
analytics-java-api-223 | analytics-java-api.pdf | 223 | at: target/flink-sql-connector-aws-kinesis-streams-1.15.4.jar Flink 1.15 Async Sink Deadlock 748 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide flink-sql-connector-aws-kinesis-firehose 1. Download source code for Flink 1.15.4: wget https://archive.apache.org/dist/flink/flink-1.15.4/flink-1.15.4-src.tgz 2. Uncompress source code: tar -xvf flink-1.15.4-src.tgz 3. Navigate to connector directory cd flink-1.15.4/flink-connectors/flink-connector-aws-kinesis-firehose/ 4. Compile and install connector jar, specifying required AWS SDK version. To speed up build use -DskipTests to skip test execution and -Dfast to skip additional source code checks: mvn clean install -DskipTests -Dfast -Daws.sdk.version=2.20.144 5. Navigate to sql connector directory cd ../flink-sql-connector-aws-kinesis-firehose 6. Compile and install sql connector jar: mvn clean install -DskipTests -Dfast 7. Resulting jar will be available at: target/flink-sql-connector-aws-kinesis-firehose-1.15.4.jar flink-sql-connector-dynamodb 1. Download source code for Flink 1.15.4: wget https://archive.apache.org/dist/flink/flink-connector-aws-3.0.0/flink- connector-aws-3.0.0-src.tgz 2. Uncompress source code: Flink 1.15 Async Sink Deadlock 749 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide tar -xvf flink-connector-aws-3.0.0-src.tgz 3. Navigate to connector directory cd flink-connector-aws-3.0.0 4. Compile and install connector jar, specifying required AWS SDK version. To speed up build use -DskipTests to skip test execution and -Dfast to skip additional source code checks: mvn clean install -DskipTests -Dfast -Dflink.version=1.15.4 - Daws.sdk.version=2.20.144 5. Resulting jar will be available at: flink-sql-connector-dynamodb/target/flink-sql-connector-dynamodb-3.0.0.jar Amazon Kinesis data streams source processing out of order during re- sharding The current FlinkKinesisConsumer implementation doesn’t provide strong ordering guarantees between Kinesis shards. This may lead to out-of-order processing during re-sharding of Kinesis Stream, in particular for Flink applications that experience processing lag. Under some circumstances, for example windows operators based on event times, events might get discarded because of the resulting lateness. Amazon Kinesis data streams source processing out of order during re-sharding 750 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide This is a known problem in Open Source Flink. Until connector fix is made available, ensure your Flink applications are not falling behind Kinesis Data Streams during re-partitioning. By ensuring that the processing delay is tolerated by your Flink apps, you can minimize the impact of out-of- order processing and risk of data loss. Real-time vector embedding blueprints FAQ and troubleshooting Review the following FAQ and troubleshooting sections to troubleshoot real-time vector embedding blueprint issues. For more information about real-time vector embedding blueprints, see Real-time vector embedding blueprints. For general Managed Service for Apache Flink application troubleshooting, see https:// docs.aws.amazon.com/managed-flink/latest/java/troubleshooting-runtime.html. Topics • Real-time vector embedding blueprints - FAQ • Real-time vector embedding blueprints - troubleshooting Real-time vector embedding blueprints FAQ and troubleshooting 751 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Real-time vector embedding blueprints - FAQ Review the following FAQ about real-time vector embedding blueprints. For more information about real-time vector embedding blueprints, see Real-time vector embedding blueprints. FAQ • What AWS resources does this blueprint create? • What are my actions after the AWS CloudFormation stack deployment is complete? • What should be the structure of the data in the source Amazon MSK topic(s)? • Can I specify parts of a message to embed? • Can I read data from multiple Amazon MSK topics? • Can I use regex to configure Amazon MSK topic names? • What is the maximum size of a message that can be read from an Amazon MSK topic? • What type of OpenSearch is supported? • Why do I need to use a vector search collection, vector index, and add a vector field in my OpenSearch Serverless colelction? • What should I set as the dimension for my vector field? • What does the output look like in the configured OpenSearch index? • Can I specify metadata fields to add to the document stored in the OpenSearch index? • Should I expect duplicate entries in the OpenSearch index? • Can I send data to multiple OpenSearch indices? • Can I deploy multiple real-time vector embedding applications in a single AWS account? • Can multiple real-time vector embedding applications use the same data source or sink? • Does the application support cross-account connectivity? • Does the application support cross-Region connectivity? • Can my Amazon MSK cluster and OpenSearch collection be in different VPCs or subnets? • What embedding models are supported by the application? • Can I fine-tune the performance of my application based on my workload? • What Amazon MSK authentication types are supported? • What is sink.os.bulkFlushIntervalMillis and how do I set it? Real-time vector embedding blueprints FAQ and troubleshooting 752 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • When I deploy my Managed Service for Apache Flink application, from what point in the Amazon MSK topic will it begin reading messages? • How do I use source.msk.starting.offset? • What chunking strategies are supported? • How do I read records in my vector datastore? • Where can I find new updates to the source code? • Can I make a |
analytics-java-api-224 | analytics-java-api.pdf | 224 | • What Amazon MSK authentication types are supported? • What is sink.os.bulkFlushIntervalMillis and how do I set it? Real-time vector embedding blueprints FAQ and troubleshooting 752 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • When I deploy my Managed Service for Apache Flink application, from what point in the Amazon MSK topic will it begin reading messages? • How do I use source.msk.starting.offset? • What chunking strategies are supported? • How do I read records in my vector datastore? • Where can I find new updates to the source code? • Can I make a change to the AWS CloudFormation template and update the Managed Service for Apache Flink application? • Will AWS monitor and maintain the application on my behalf? • Does this application move my data outside my AWS account? What AWS resources does this blueprint create? To find resources deployed in your account, navigate to AWS CloudFormation console and identify the stack name that starts with the name you provided for your Managed Service for Apache Flink application. Choose the Resources tab to check the resources that were created as part of the stack. The following are the key resources that the stack creates: • Real-time vector embedding Managed Service for Apache Flink application • Amazon S3 bucket for holding the source code for the real-time vector embedding application • CloudWatch log group and log stream for storing logs • Lambda functions for fetching and creating resources • IAM roles and policies for Lambdas, Managed Service for Apache Flink application, and accessing Amazon Bedrock and Amazon OpenSearch Service • Data access policy for Amazon OpenSearch Service • VPC endpoints for accessing Amazon Bedrock and Amazon OpenSearch Service What are my actions after the AWS CloudFormation stack deployment is complete? After the AWS CloudFormation stack deployment is complete, access the Managed Service for Apache Flink console and find your blueprint Managed Service for Apache Flink application. Choose the Configure tab and confirm that all runtime properties are setup correctly. They may overflow to the next page. When you are confident of the settings, choose Run. The application will start ingesting messages from your topic. Real-time vector embedding blueprints FAQ and troubleshooting 753 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide To check for new releases, see https://github.com/awslabs/real-time-vectorization-of-streaming- data/releases. What should be the structure of the data in the source Amazon MSK topic(s)? We currently support structured and unstructured source data. • Unstructured data is denoted by STRING in source.msk.data.type. The data is read as is from the incoming message. • We currently support structured JSON data, denoted by JSON in source.msk.data.type. The data must always be in JSON format. If the application receives a malformed JSON, the application will fail. • When using JSON as source data type, make sure that every message in all source topics is a valid JSON. If you subscribe to one or more topics that do not contain JSON objects with this setting, the application will fail. If one or more topics have a mix of structured and unstructured data, we recommended that you configure source data as unstructured in the Managed Service for Apache Flink application. Can I specify parts of a message to embed? • For unstructured input data where source.msk.data.type is STRING, the application will always embed the entire message and store the entire message in the configured OpenSearch index. • For structured input data where source.msk.data.type is JSON, you can configure embed.input.config.json.fieldsToEmbed to specify which field in the JSON object should be selected for embedding. This only works for top-level JSON fields and does not work with nested JSONs and with messages containing a JSON array. Use .* to embed the entire JSON. Can I read data from multiple Amazon MSK topics? Yes, you can read data from multiple Amazon MSK topics with this application. Data from all topics must be of the same type (either STRING or JSON) or it might cause the application to fail. Data from all topics is always stored in a single OpenSearch index. Can I use regex to configure Amazon MSK topic names? source.msk.topic.names does not support a list of regex. We support either a comma separated list of topic names or .* regex to include all topics. Real-time vector embedding blueprints FAQ and troubleshooting 754 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide What is the maximum size of a message that can be read from an Amazon MSK topic? The maximum size of a message that can be processed is limited by the Amazon Bedrock InvokeModel body limit that is currently set to 25,000,000. For more information, see InvokeModel. What type of OpenSearch is supported? We support both OpenSearch domains and collections. If you are using an OpenSearch |
analytics-java-api-225 | analytics-java-api.pdf | 225 | a comma separated list of topic names or .* regex to include all topics. Real-time vector embedding blueprints FAQ and troubleshooting 754 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide What is the maximum size of a message that can be read from an Amazon MSK topic? The maximum size of a message that can be processed is limited by the Amazon Bedrock InvokeModel body limit that is currently set to 25,000,000. For more information, see InvokeModel. What type of OpenSearch is supported? We support both OpenSearch domains and collections. If you are using an OpenSearch collection, make sure to use a vector collection and create a vector index to use for this application. This will let you use the OpenSearch vector database capabilities for querying your data. To learn more, seeAmazon OpenSearch Service’s vector database capabilities explained. Why do I need to use a vector search collection, vector index, and add a vector field in my OpenSearch Serverless colelction? The vector search collection type in OpenSearch Serverless provides a similarity search capability that is scalable and high performing. It streamlines building modern machine learning (ML) augmented search experiences and generative artificial intelligence (AI) applications. For more information, see Working with vector search collections. What should I set as the dimension for my vector field? Set the dimension of the vector field based on the embedding model that you want to use. Refer to the following table, and confirm these values from the respective documentation. Vector field dimensions Amazon Bedrock vector embedding model name Output dimension support offered by the model Amazon Titan Text Embeddings V1 1,536 Amazon Titan Text Embeddings V2 1,024 (default), 384, 256 Amazon Titan Multimodal Embeddings G1 1,024 (default), 384, 256 Cohere Embed English Cohere Embed Multilingual 1,024 1,024 Real-time vector embedding blueprints FAQ and troubleshooting 755 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide What does the output look like in the configured OpenSearch index? Every document in the OpenSearch index contains following fields: • original_data: The data that was used to generate embeddings. For STRING type, it is the entire message. For JSON object, it is the JSON object that was used for embeddings. It could be the entire JSON in the message or specified fields in the JSON. For example, if name was selected to be embedded from incoming messages, the output would look as follows: "original_data": "{\"name\":\"John Doe\"}" • embedded_data: A vector float array of embeddings generated by Amazon Bedrock • date: UTC timestamp at which the document was stored in OpenSearch Can I specify metadata fields to add to the document stored in the OpenSearch index? No, currently, we do not support adding additional fields to the final document stored in the OpenSearch index. Should I expect duplicate entries in the OpenSearch index? Depending on how you configured your application, you might see duplicate messages in the index. One common reason is application restart. The application is configured by default to start reading from the earliest message in the source topic. When you change the configuraiton, the application restarts, and processes all messages in the topic again. To avoid re-processing, see How do I use source.msk.starting.offset? and correctly set the starting offset for your application. Can I send data to multiple OpenSearch indices? No, the application supports storing data to a single OpenSearch index. To setup vectorization output to multiple indices, you must deploy separate Managed Service for Apache Flink applications. Can I deploy multiple real-time vector embedding applications in a single AWS account? Yes, you can deploy multiple real-time vector embedding Managed Service for Apache Flink applications in a single AWS account if every application has a unique name. Real-time vector embedding blueprints FAQ and troubleshooting 756 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Can multiple real-time vector embedding applications use the same data source or sink? Yes, you can create multiple real-time vector embedding Managed Service for Apache Flink applications that read data from the same topic(s) or store data in the same index. Does the application support cross-account connectivity? No, for the application to run successfully, the Amazon MSK cluster and the OpenSearch collection must be in the same AWS account where you are trying to setup your Managed Service for Apache Flink application. Does the application support cross-Region connectivity? No, the application only allows you to deploy an Managed Service for Apache Flink application with an Amazon MSK cluster and an OpenSearch collection in the same Region of the Managed Service for Apache Flink application. Can my Amazon MSK cluster and OpenSearch collection be in different VPCs or subnets? Yes, we support Amazon MSK cluster and OpenSearch collection in different VPCs and subnets as long as they are in the same AWS account. See |
analytics-java-api-226 | analytics-java-api.pdf | 226 | be in the same AWS account where you are trying to setup your Managed Service for Apache Flink application. Does the application support cross-Region connectivity? No, the application only allows you to deploy an Managed Service for Apache Flink application with an Amazon MSK cluster and an OpenSearch collection in the same Region of the Managed Service for Apache Flink application. Can my Amazon MSK cluster and OpenSearch collection be in different VPCs or subnets? Yes, we support Amazon MSK cluster and OpenSearch collection in different VPCs and subnets as long as they are in the same AWS account. See (General MSF troubleshooting) to make sure your setup is correct. What embedding models are supported by the application? Currently, the application supports all models that are supported by Bedrock. These include: • Amazon Titan Embeddings G1 - Text • Amazon Titan Text Embeddings V2 • Amazon Titan Multimodal Embeddings G1 • Cohere Embed English • Cohere Embed Multilingual Can I fine-tune the performance of my application based on my workload? Yes. The throughput of the application depends on a number of factors, all of which can be controlled by the customers: 1. AWS MSF KPUs: The application is deployed with default parallelism factor 2 and parallelism per KPU 1, with automatic scaling turned on. However, we recommend that you configure Real-time vector embedding blueprints FAQ and troubleshooting 757 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide scaling for the Managed Service for Apache Flink application according to your workloads. For more information, see Review Managed Service for Apache Flink application resources. 2. Amazon Bedrock: Based on the selected Amazon Bedrock on-demand model, different quotas might apply. Review service quotas in Bedrock to see the workload that the service will be able to handle. For more information, see Quotas for Amazon Bedrock. 3. Amazon OpenSearch Service: Additionally, in some situations, you might notice that OpenSearch is the bottleneck in your pipeline. For scaling information, see OpenSearch scaling Sizing Amazon OpenSearch Service domains. What Amazon MSK authentication types are supported? We only support the IAM MSK authentication type. What is sink.os.bulkFlushIntervalMillis and how do I set it? When sending data to Amazon OpenSearch Service, the bulk flush interval is the interval at which the bulk request is run, regardless of the number of actions or the size of the request. The default value is set to 1 millisecond. While setting a flush interval can help to make sure that data is indexed timely, it can also lead to increased overhead if set too low. Consider your use case and the importance of timely indexing when choosing a flush interval. When I deploy my Managed Service for Apache Flink application, from what point in the Amazon MSK topic will it begin reading messages? The application will start reading messages from the Amazon MSK topic at the offset specified by the source.msk.starting.offset configuration set in the application’s runtime configuration. If source.msk.starting.offset is not explicitly set, the default behavior of the application is to start reading from the earliest available message in the topic. How do I use source.msk.starting.offset? Explicitly set source.msk.starting.offset to one of the following values, based on desired behavior: • EARLIEST: The default setting, which reads from oldest offset in the partition. This is a good choice especially if: Real-time vector embedding blueprints FAQ and troubleshooting 758 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • You have newly created Amazon MSK topics and consumer applications. • You need to replay data, so you can build or reconstruct state. This is relevant when implementing the event sourcing pattern or when initializing a new service that requires a complete view of the data history. • LATEST: The Managed Service for Apache Flink application will read messages from the end of the partition. We recommend this option if you only care about new messages being produced and don't need to process historical data. In this setting, the consumer will ignore the existing messages and only read new messages published by the upstream producer. • COMMITTED: The Managed Service for Apache Flink application will start consuming messages from the committed offset of the consuming group. If the committed offset doesn't exist, the EARLIEST reset strategy will be used. What chunking strategies are supported? We are using the langchain library to chunk inputs. Chunking is only applied if the length of the input is greater than the chosen maxSegmentSizeInChars. We support the following five chunking types: • SPLIT_BY_CHARACTER: Will fit as many characters as it can into each chunk where each chunk length is no greater than maxSegmentSizeInChars. Doesn’t care about whitespace, so it can cut off words. • SPLIT_BY_WORD: Will find whitespace characters to chunk by. No words are cut off. • SPLIT_BY_SENTENCE: Sentence boundaries are detected |
analytics-java-api-227 | analytics-java-api.pdf | 227 | committed offset doesn't exist, the EARLIEST reset strategy will be used. What chunking strategies are supported? We are using the langchain library to chunk inputs. Chunking is only applied if the length of the input is greater than the chosen maxSegmentSizeInChars. We support the following five chunking types: • SPLIT_BY_CHARACTER: Will fit as many characters as it can into each chunk where each chunk length is no greater than maxSegmentSizeInChars. Doesn’t care about whitespace, so it can cut off words. • SPLIT_BY_WORD: Will find whitespace characters to chunk by. No words are cut off. • SPLIT_BY_SENTENCE: Sentence boundaries are detected using the Apache OpenNLP library with the English sentence model. • SPLIT_BY_LINE: Will find new line characters to chunk by. • SPLIT_BY_PARAGRAPH: Will find consecutive new line characters to chunk by. The splitting strategies fall back according to the preceding order, where the larger chunking strategies like SPLIT_BY_PARAGRAPH fall back to SPLIT_BY_CHARACTER. For example, when using SPLIT_BY_LINE, if a line is too long then the line will be sub-chunked by sentence, where each chunk will fit in as many sentences as it can. If there are any sentences that are too long, then it will be chunked at the word-level. If a word is too long, then it will be split by character. How do I read records in my vector datastore? 1. When source.msk.data.type is STRING Real-time vector embedding blueprints FAQ and troubleshooting 759 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • original_data: The entire original string from the Amazon MSK message. • embedded_data: Embedding vector created from chunk_data if it is not empty (chunking applied) or created from original_data if no chunking was applied. • chunk_data: Only present when the original data was chunked. Contains the chunk of the original message that was used to create the embedding in embedded_data. 2. When source.msk.data.type is JSON • original_data: The entire original JSON from the Amazon MSK message after JSON key filtering is applied. • embedded_data: Embedding vector created from chunk_data if it is not empty (chunking applied) or created from original_data if no chunking was applied. • chunk_key: Only present when the original data was chunked. Contains the JSON key that the chunk is from in original_data. For example, it can look like jsonKey1.nestedJsonKeyA for nested keys or metadata in the example of original_data. • chunk_data: Only present when the original data was chunked. Contains the chunk of the original message that was used to create the embedding in embedded_data. Yes, you can read data from multiple Amazon MSK topics with this application. Data from all topics must be of the same type (either STRING or JSON) or it might cause the application to fail. Data from all topics is always stored in a single OpenSearch index. Where can I find new updates to the source code? Go to https://github.com/awslabs/real-time-vectorization-of-streaming-data/releases to check for new releases. Can I make a change to the AWS CloudFormation template and update the Managed Service for Apache Flink application? No, making a change to the AWS CloudFormation template does not update the Managed Service for Apache Flink application. Any new change in AWS CloudFormation implies a new stack needs to be deployed. Will AWS monitor and maintain the application on my behalf? No, AWS will not monitor, scale, update or patch this application on your behalf. Real-time vector embedding blueprints FAQ and troubleshooting 760 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Does this application move my data outside my AWS account? All data read and stored by the Managed Service for Apache Flink application stays within your AWS account and never leaves your account. Real-time vector embedding blueprints - troubleshooting Review the following troubleshooting topics about real-time vector embedding blueprints. For more information about real-time vector embedding blueprints, see Real-time vector embedding blueprints. Troubleshooting topics • My CloudFormation stack deployment has failed or rolled back. What can I do to fix it? • I don't want my application to start reading messages from the beginning of the Amazon MSK topics. What do I do? • How do I know if there is an issue with my Managed Service for Apache Flink application and how can I debug it? • What are the key metrics that I should be monitoring for my Managed Service for Apache Flink application? My CloudFormation stack deployment has failed or rolled back. What can I do to fix it? • Go to your CFN stack and find the reason for the stack failure. It could be related to missing permissions, AWS resource name collisions, among other causes. Fix the root cause of the deployment failure. For more information, see the CloudWatch troubleshooting guide. • [Optional] There can only be one VPC endpoint per service per VPC. If you deployed multiple |
analytics-java-api-228 | analytics-java-api.pdf | 228 | can I debug it? • What are the key metrics that I should be monitoring for my Managed Service for Apache Flink application? My CloudFormation stack deployment has failed or rolled back. What can I do to fix it? • Go to your CFN stack and find the reason for the stack failure. It could be related to missing permissions, AWS resource name collisions, among other causes. Fix the root cause of the deployment failure. For more information, see the CloudWatch troubleshooting guide. • [Optional] There can only be one VPC endpoint per service per VPC. If you deployed multiple real-time vector embedding blueprints to write to the Amazon OpenSearch Service collections in the same VPC, they might be sharing VPC endpoints. These might either already be present in your account for the VPC, or the first real-time vector embedding blueprint stack will create VPC endpoints for Amazon Bedrock and Amazon OpenSearch Service that will be used by all other stacks deployed in your account. If a stack fails, check if that stack created VPC endpoints for Amazon Bedrock and Amazon OpenSearch Service and delete them if they are not used anywhere else in your account. For steps for deleting VPC endpoints, see How do I safely delete my application? (delete). • There might be other services or applications in your account using the VPC endpoint. Deleting it might create network disruption for other services. Be careful in deleting these endpoints. Real-time vector embedding blueprints FAQ and troubleshooting 761 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide I don't want my application to start reading messages from the beginning of the Amazon MSK topics. What do I do? You must explicitly set source.msk.starting.offset to one of the following values, depending on the desired behavior: • Earliest offset: The oldest offset in the partition. • Latest offset: Consumers will read messages from the end of the partition. • Committed offset: Read from the last message the consumer processed within a partition. How do I know if there is an issue with my Managed Service for Apache Flink application and how can I debug it? Use the Managed Service for Apache Flink troubleshooting guide to debug Managed Service for Apache Flink related issues with your application. What are the key metrics that I should be monitoring for my Managed Service for Apache Flink application? • All metrics available for a regular Managed Service for Apache Flink application can help you monitor your application. For more information, see Metrics and dimensions in Managed Service for Apache Flink. • To monitor Amazon Bedrock metrics, see Amazon CloudWatch metrics for Amazon Bedrock. • We have added two new metrics for monitoring performance of generating embeddings. Find them under the EmbeddingGeneration operation name in CloudWatch. The two metrics are: • BedrockTitanEmbeddingTokenCount: Number of tokens present in a single request to Amazon Bedrock. • BedrockEmbeddingGenerationLatencyMs: Reports the time taken to send and receive a response from Amazon Bedrock for generating embeddings, in milliseconds. • For Amazon OpenSearch Service serverless collections, you can use metrics such as IngestionDataRate, IngestionDocumentErrors and others. For more information, see Monitoring OpenSearch Serverless with Amazon CloudWatch. • For OpenSearch provisioned metrics, see Monitoring OpenSearch cluster metrics with Amazon CloudWatch. Real-time vector embedding blueprints FAQ and troubleshooting 762 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Runtime troubleshooting This section contains information about diagnosing and fixing runtime issues with your Managed Service for Apache Flink application. Topics • Troubleshooting tools • Application issues • Application is restarting • Throughput is too slow • Unbounded state growth • I/O bound operators • Upstream or source throttling from a Kinesis data stream • Checkpoints • Checkpointing is timing out • Checkpoint failure for Apache Beam application • Backpressure • Data skew • State skew • Integrate with resources in different Regions Troubleshooting tools The primary tool for detecting application issues is CloudWatch alarms. Using CloudWatch alarms, you can set thresholds for CloudWatch metrics that indicate error or bottleneck conditions in your application. For information about recommended CloudWatch alarms, see Use CloudWatch Alarms with Amazon Managed Service for Apache Flink. Application issues This section contains solutions for error conditions that you may encounter with your Managed Service for Apache Flink application. Runtime troubleshooting 763 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Topics • Application is stuck in a transient status • Snapshot creation fails • Cannot access resources in a VPC • Data is lost when writing to an Amazon S3 bucket • Application is in the RUNNING status but isn't processing data • Snapshot, application update, or application stop error: InvalidApplicationConfigurationException • java.nio.file.NoSuchFileException: /usr/local/openjdk-8/lib/security/cacerts Application is stuck in a transient status If your application stays in a transient status (STARTING, UPDATING, STOPPING, or |
analytics-java-api-229 | analytics-java-api.pdf | 229 | that you may encounter with your Managed Service for Apache Flink application. Runtime troubleshooting 763 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Topics • Application is stuck in a transient status • Snapshot creation fails • Cannot access resources in a VPC • Data is lost when writing to an Amazon S3 bucket • Application is in the RUNNING status but isn't processing data • Snapshot, application update, or application stop error: InvalidApplicationConfigurationException • java.nio.file.NoSuchFileException: /usr/local/openjdk-8/lib/security/cacerts Application is stuck in a transient status If your application stays in a transient status (STARTING, UPDATING, STOPPING, or AUTOSCALING), you can stop your application by using the StopApplication action with the Force parameter set to true. You can't force stop an application in the DELETING status. Alternatively, if the application is in the UPDATING or AUTOSCALING status, you can roll it back to the previous running version. When you roll back an application, it loads state data from the last successful snapshot. If the application has no snapshots, Managed Service for Apache Flink rejects the rollback request. For more information about rolling back an application, see RollbackApplication action. Note Force-stopping your application may lead to data loss or duplication. To prevent data loss or duplicate processing of data during application restarts, we recommend you to take frequent snapshots of your application. Causes for stuck applications include the following: • Application state is too large: Having an application state that is too large or too persistent can cause the application to become stuck during a checkpoint or snapshot operation. Check your application's lastCheckpointDuration and lastCheckpointSize metrics for steadily increasing values or abnormally high values. • Application code is too large: Verify that your application JAR file is smaller than 512 MB. JAR files larger than 512 MB are not supported. Application issues 764 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Application snapshot creation fails: Managed Service for Apache Flink takes a snapshot of the application during an UpdateApplication or StopApplication request. The service then uses this snapshot state and restores the application using the updated application configuration to provide exactly-once processing semantics.If automatic snapshot creation fails, see Snapshot creation fails following. • Restoring from a snapshot fails: If you remove or change an operator in an application update and attempt to restore from a snapshot, the restore will fail by default if the snapshot contains state data for the missing operator. In addition, the application will be stuck in either the STOPPED or UPDATING status. To change this behavior and allow the restore to succeed, change the AllowNonRestoredState parameter of the application's FlinkRunConfiguration to true. This will allow the resume operation to skip state data that cannot be mapped to the new program. • Application initialization taking longer: Managed Service for Apache Flink uses an internal timeout of 5 minutes (soft setting) while waiting for a Flink job to start. If your job is failing to start within this timeout, you will see a CloudWatch log as follows: Flink job did not start within a total timeout of 5 minutes for application: %s under account: %s If you encounter the above error, it means that your operations defined under Flink job’s main method are taking more than 5 minutes, causing the Flink job creation to time out on the Managed Service for Apache Flink end. We suggest you check the Flink JobManager logs as well as your application code to see if this delay in the main method is expected. If not, you need to take steps to address the issue so it completes in under 5 minutes. You can check your application status using either the ListApplications or the DescribeApplication actions. Snapshot creation fails The Managed Service for Apache Flink service can't take a snapshot under the following circumstances: • The application exceeded the snapshot limit. The limit for snapshots is 1,000. For more information, see Manage application backups using snapshots. • The application doesn't have permissions to access its source or sink. • The application code isn't functioning properly. Application issues 765 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • The application is experiencing other configuration issues. If you get an exception while taking a snapshot during an application update or while stopping the application, set the SnapshotsEnabled property of your application's ApplicationSnapshotConfiguration to false and retry the request. Snapshots can fail if your application's operators are not properly provisioned. For information about tuning operator performance, see Operator scaling. After the application returns to a healthy state, we recommend that you set the application's SnapshotsEnabled property to true. Cannot access resources in a VPC If your application uses a VPC running on Amazon VPC, do the following to verify that your application has access to its |
analytics-java-api-230 | analytics-java-api.pdf | 230 | issues. If you get an exception while taking a snapshot during an application update or while stopping the application, set the SnapshotsEnabled property of your application's ApplicationSnapshotConfiguration to false and retry the request. Snapshots can fail if your application's operators are not properly provisioned. For information about tuning operator performance, see Operator scaling. After the application returns to a healthy state, we recommend that you set the application's SnapshotsEnabled property to true. Cannot access resources in a VPC If your application uses a VPC running on Amazon VPC, do the following to verify that your application has access to its resources: • Check your CloudWatch logs for the following error. This error indicates that your application cannot access resources in your VPC: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms. If you see this error, verify that your route tables are set up correctly, and that your connectors have the correct connection settings. For information about setting up and analyzing CloudWatch logs, see Logging and monitoring in Amazon Managed Service for Apache Flink. Data is lost when writing to an Amazon S3 bucket Some data loss might occur when writing output to an Amazon S3 bucket using Apache Flink version 1.6.2. We recommend using the latest supported version of Apache Flink when using Amazon S3 for output directly. To write to an Amazon S3 bucket using Apache Flink 1.6.2, we recommend using Firehose. For more information about using Firehose with Managed Service for Apache Flink, see Firehose sink. Application issues 766 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Application is in the RUNNING status but isn't processing data You can check your application status by using either the ListApplications or the DescribeApplication actions. If your application enters the RUNNING status but isn't writing data to your sink, you can troubleshoot the issue by adding an Amazon CloudWatch log stream to your application. For more information, see Work with application CloudWatch logging options. The log stream contains messages that you can use to troubleshoot application issues. Snapshot, application update, or application stop error: InvalidApplicationConfigurationException An error similar to the following might occur during a snapshot operation, or during an operation that creates a snapshot, such as updating or stopping an application: An error occurred (InvalidApplicationConfigurationException) when calling the UpdateApplication operation: Failed to take snapshot for the application xxxx at this moment. The application is currently experiencing downtime. Please check the application's CloudWatch metrics or CloudWatch logs for any possible errors and retry the request. You can also retry the request after disabling the snapshots in the Managed Service for Apache Flink console or by updating the ApplicationSnapshotConfiguration through the AWS SDK This error occurs when the application is unable to create a snapshot. If you encounter this error during a snapshot operation or an operation that creates a snapshot, do the following: • Disable snapshots for your application. You can do this either in the Managed Service for Apache Flink console, or by using the SnapshotsEnabledUpdate parameter of the UpdateApplication action. • Investigate why snapshots cannot be created. For more information, see Application is stuck in a transient status. • Reenable snapshots when the application returns to a healthy state. Application issues 767 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide java.nio.file.NoSuchFileException: /usr/local/openjdk-8/lib/security/cacerts The location of the SSL truststore was updated in a previous deployment. Use the following value for the ssl.truststore.location parameter instead: /usr/lib/jvm/java-11-amazon-corretto/lib/security/cacerts Application is restarting If your application is not healthy, its Apache Flink job continually fails and restarts. This section describes symptoms and troubleshooting steps for this condition. Symptoms This condition can have the following symptoms: • The FullRestarts metric is not zero. This metric represents the number of times the application's job has restarted since you started the application. • The Downtime metric is not zero. This metric represents the number of milliseconds that the application is in the FAILING or RESTARTING status. • The application log contains status changes to RESTARTING or FAILED. You can query your application log for these status changes using the following CloudWatch Logs Insights query: Analyze errors: Application task-related failures. Causes and solutions The following conditions may cause your application to become unstable and repeatedly restart: • Operator is throwing an exception: If any exception in an operator in your application is unhandled, the application fails over (by interpreting that the failure cannot be handled by operator). The application restarts from the latest checkpoint to maintain "exactly-once" processing semantics. As a result, Downtime is not zero during these restart periods. In order to prevent this from happening, we recommend that you handle any retryable exceptions in the application code. You can investigate the causes of this condition by querying your application logs for changes from your application's state from RUNNING to |
analytics-java-api-231 | analytics-java-api.pdf | 231 | become unstable and repeatedly restart: • Operator is throwing an exception: If any exception in an operator in your application is unhandled, the application fails over (by interpreting that the failure cannot be handled by operator). The application restarts from the latest checkpoint to maintain "exactly-once" processing semantics. As a result, Downtime is not zero during these restart periods. In order to prevent this from happening, we recommend that you handle any retryable exceptions in the application code. You can investigate the causes of this condition by querying your application logs for changes from your application's state from RUNNING to FAILED. For more information, see the section called “Analyze errors: Application task-related failures”. Application is restarting 768 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Kinesis data streams are not properly provisioned: If a source or sink for your application is a Kinesis data stream, check the metrics for the stream for ReadProvisionedThroughputExceeded or WriteProvisionedThroughputExceeded errors. If you see these errors, you can increase the available throughput for the Kinesis stream by increasing the stream's number of shards. For more information, see How do I change the number of open shards in Kinesis Data Streams?. • Other sources or sinks are not properly provisioned or available: Verify that your application is correctly provisioning sources and sinks. Check that any sources or sinks used in the application (such as other AWS services, or external sources or destinations) are well provisioned, are not experiencing read or write throttling, or are periodically unavailable. If you are experiencing throughput-related issues with your dependent services, either increase resources available to those services, or investigate the cause of any errors or unavailability. • Operators are not properly provisioned: If the workload on the threads for one of the operators in your application is not correctly distributed, the operator can become overloaded and the application can crash. For information about tuning operator parallelism, see Manage operator scaling properly. • Application fails with DaemonException: This error appears in your application log if you are using a version of Apache Flink prior to 1.11. You may need to upgrade to a later version of Apache Flink so that a KPL version of 0.14 or later is used. • Application fails with TimeoutException, FlinkException, or RemoteTransportException: These errors may appear in your application log if your task managers are crashing. If your application is overloaded, your task managers can experience CPU or memory resource pressure, causing them to fail. These errors may look like the following: • java.util.concurrent.TimeoutException: The heartbeat of JobManager with id xxx timed out • org.apache.flink.util.FlinkException: The assigned slot xxx was removed • org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException: Connection unexpectedly closed by remote task manager To troubleshoot this condition, check the following: • Check your CloudWatch metrics for unusual spikes in CPU or memory usage. Application is restarting 769 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Check your application for throughput issues. For more information, see Troubleshoot performance issues. • Examine your application log for unhandled exceptions that your application code is raising. • Application fails with JaxbAnnotationModule Not Found error: This error occurs if your application uses Apache Beam, but doesn't have the correct dependencies or dependency versions. Managed Service for Apache Flink applications that use Apache Beam must use the following versions of dependencies: <jackson.version>2.10.2</jackson.version> ... <dependency> <groupId>com.fasterxml.jackson.module</groupId> <artifactId>jackson-module-jaxb-annotations</artifactId> <version>2.10.2</version> </dependency> If you do not provide the correct version of jackson-module-jaxb-annotations as an explicit dependency, your application loads it from the environment dependencies, and since the versions do not match, the application crashes at runtime. For more information about using Apache Beam with Managed Service for Apache Flink, see Use CloudFormation. • Application fails with java.io.IOException: Insufficient number of network buffers This happens when an application does not have enough memory allocated for network buffers. Network buffers facilitate communication between subtasks. They are used to store records before transmission over a network, and to store incoming data before dissecting it into records and handing them to subtasks. The number of network buffers required scales directly with the parallelism and complexity of your job graph. There are a number of approaches to mitigate this issue: • You can configure a lower parallelismPerKpu so that there is more memory allocated per- subtask and network buffers. Note that lowering parallelismPerKpu will increase KPU and therefore cost. To avoid this, you can keep the same amount of KPU by lowering parallelism by the same factor. • You can simplify your job graph by reducing the number of operators or chaining them so that fewer buffers are needed. Application is restarting 770 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Otherwise, you can reach out to https://aws.amazon.com/premiumsupport/ for custom network buffer configuration. Throughput is too slow If your |
analytics-java-api-232 | analytics-java-api.pdf | 232 | a lower parallelismPerKpu so that there is more memory allocated per- subtask and network buffers. Note that lowering parallelismPerKpu will increase KPU and therefore cost. To avoid this, you can keep the same amount of KPU by lowering parallelism by the same factor. • You can simplify your job graph by reducing the number of operators or chaining them so that fewer buffers are needed. Application is restarting 770 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Otherwise, you can reach out to https://aws.amazon.com/premiumsupport/ for custom network buffer configuration. Throughput is too slow If your application is not processing incoming streaming data quickly enough, it will perform poorly and become unstable. This section describes symptoms and troubleshooting steps for this condition. Symptoms This condition can have the following symptoms: • If the data source for your application is a Kinesis stream, the stream's millisbehindLatest metric continually increases. • If the data source for your application is an Amazon MSK cluster, the cluster's consumer lag metrics continually increase. For more information, see Consumer-Lag Monitoring in the Amazon MSK Developer Guide. • If the data source for your application is a different service or source, check any available consumer lag metrics or data available. Causes and solutions There can be many causes for slow application throughput. If your application is not keeping up with input, check the following: • If throughput lag is spiking and then tapering off, check if the application is restarting. Your application will stop processing input while it restarts, causing lag to spike. For information about application failures, see Application is restarting. • If throughput lag is consistent, check to see if your application is optimized for performance. For information on optimizing your application's performance, see Troubleshoot performance issues. • If throughput lag is not spiking but continuously increasing, and your application is optimized for performance, you must increase your application resources. For information on increasing application resources, see Implement application scaling. • If your application reads from a Kafka cluster in a different Region and FlinkKafkaConsumer or KafkaSource are mostly idle (high idleTimeMsPerSecond or low CPUUtilization) Throughput is too slow 771 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide despite high consumer lag, you can increase the value for receive.buffer.byte, such as 2097152. For more information, see the high latency environment section in Custom MSK configurations. For troubleshooting steps for slow throughput or consumer lag increasing in the application source, see Troubleshoot performance issues. Unbounded state growth If your application is not properly disposing of outdated state information, it will continually accumulate and lead to application performance or stability issues. This section describes symptoms and troubleshooting steps for this condition. Symptoms This condition can have the following symptoms: • The lastCheckpointDuration metric is gradually increasing or spiking. • The lastCheckpointSize metric is gradually increasing or spiking. Causes and solutions The following conditions may cause your application to accumulate state data: • Your application is retaining state data longer than it is needed. • Your application uses window queries with too long a duration. • You did not set TTL for your state data. For more information, see State Time-To-Live (TTL) in the Apache Flink Documentation. • You are running an application that depends on Apache Beam version 2.25.0 or newer. You can opt out of the new version of the read transform by extending your BeamApplicationProperties with the key experiments and value use_deprecated_read. For more information, see the Apache Beam Documentation. Sometimes applications are facing ever growing state size growth, which is not sustainable in the long term (a Flink application runs indefinitely, after all). Sometimes, this can be traced back to applications storing data in state and not aging out old information properly. But sometimes there are just unreasonable expectations on what Flink can deliver. Applications can use aggregations Unbounded state growth 772 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide over large time windows spanning days or even weeks. Unless AggregateFunctions are used, which allow incremental aggregations, Flink needs to keep the events of the entire window in state. Moreover, when using process functions to implement custom operators, the application needs to remove data from state that is no longer required for the business logic. In that case, state time- to-live can be used to automatically age out data based on processing time. Managed Service for Apache Flink is using incremental checkpoints and thus state ttl is based on RocksDB compaction. You can only observe an actual reduction in state size (indicated by checkpoint size) after a compaction operation occurs. In particular for checkpoint sizes below 200 MB, it's unlikely that you observe any checkpoint size reduction as a result of state expiring. However, savepoints are based on a clean copy of |
analytics-java-api-233 | analytics-java-api.pdf | 233 | remove data from state that is no longer required for the business logic. In that case, state time- to-live can be used to automatically age out data based on processing time. Managed Service for Apache Flink is using incremental checkpoints and thus state ttl is based on RocksDB compaction. You can only observe an actual reduction in state size (indicated by checkpoint size) after a compaction operation occurs. In particular for checkpoint sizes below 200 MB, it's unlikely that you observe any checkpoint size reduction as a result of state expiring. However, savepoints are based on a clean copy of the state that does not contain old data, so you can trigger a snapshot in Managed Service for Apache Flink to force the removal of outdated state. For debugging purposes, it can make sense to disable incremental checkpoints to verify more quickly that the checkpoint size actually decreases or stabilizes (and avoid the effect of compaction in RocksBS). This requires a ticket to the service team, though. I/O bound operators It's best to avoid dependencies to external systems on the data path. It's often much more performant to keep a reference data set in state rather than querying an external system to enrich individual events. However, sometimes there are dependencies that cannot be easily moved to state, e.g., if you want to enrich events with a machine learning model that is hosted on Amazon Sagemaker. Operators that are interfacing with external systems over the network can become a bottleneck and cause backpressure. It is highly recommended to use AsyncIO to implement the functionality, to reduce the wait time for individual calls and avoid the entire application slowing down. Moreover, for applications with I/O bound operators it can also make sense to increase the ParallelismPerKPU setting of the Managed Service for Apache Flink application. This configuration describes the number of parallel subtasks an application can perform per Kinesis Processing Unit (KPU). By increasing the value from the default of 1 to, say, 4, the application leverages the same resources (and has the same cost) but can scale to 4 times the parallelism. This works well for I/O bound applications, but it causes additional overhead for applications that are not I/O bound. Upstream or source throttling from a Kinesis data stream Symptom: The application is encountering LimitExceededExceptions from their upstream source Kinesis data stream. I/O bound operators 773 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Potential Cause: The default setting for the Apache Flink library Kinesis connector is set to read from the Kinesis data stream source with a very aggressive default setting for the maximum number of records fetched per GetRecords call. Apache Flink is configured by default to fetch 10,000 records per GetRecords call (this call is made by default every 200 ms), although the limit per shard is only 1,000 records. This default behavior can lead to throttling when attempting to consume from the Kinesis data stream, which will affect the applications performance and stability. You can confirm this by checking the CloudWatch ReadProvisionedThroughputExceeded metric and seeing prolonged or sustained periods where this metric is greater than zero. You can also see this in CloudWatch logs for your Amazon Managed Service for Apache Flink application by observing continued LimitExceededException errors. Resolution: You can do one of two things to resolve this scenario: • Lower the default limit for the number of records fetched per GetRecords call • Enable Adaptive Reads in your Amazon Managed Service for Apache Flink application. For more information on the Adaptive Reads feature, see SHARD_USE_ADAPTIVE_READS Checkpoints Checkpoints are Flink’s mechanism to ensure that the state of an application is fault tolerant. The mechanism allows Flink to recover the state of operators if the job fails and gives the application the same semantics as failure-free execution. With Managed Service for Apache Flink, the state of an application is stored in RocksDB, an embedded key/value store that keeps its working state on disk. When a checkpoint is taken the state is also uploaded to Amazon S3 so even if the disk is lost then the checkpoint can be used to restore the applications state. For more information, see How does State Snapshotting Work?. Checkpointing stages For a checkpointing operator subtask in Flink there are 5 main stages: • Waiting [Start Delay] – Flink uses checkpoint barriers that get inserted into the stream so time in this stage is the time the operator waits for the checkpoint barrier to reach it. • Alignment [Alignment Duration] – In this stage the subtask has reached one barrier but it’s waiting for barriers from other input streams. Checkpoints 774 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Sync checkpointing [Sync Duration] – This stage is when the subtask actually snapshots the |
analytics-java-api-234 | analytics-java-api.pdf | 234 | Work?. Checkpointing stages For a checkpointing operator subtask in Flink there are 5 main stages: • Waiting [Start Delay] – Flink uses checkpoint barriers that get inserted into the stream so time in this stage is the time the operator waits for the checkpoint barrier to reach it. • Alignment [Alignment Duration] – In this stage the subtask has reached one barrier but it’s waiting for barriers from other input streams. Checkpoints 774 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Sync checkpointing [Sync Duration] – This stage is when the subtask actually snapshots the state of the operator and blocks all other activity on the subtask. • Async checkpointing [Async Duration] – The majority of this stage is the subtask uploading the state to Amazon S3. During this stage, the subtask is no longer blocked and can process records. • Acknowledging – This is usually a short stage and is simply the subtask sending an acknowledgement to the JobManager and also performing any commit messages (e.g. with Kafka sinks). Each of these stages (apart from Acknowledging) maps to a duration metric for checkpoints that is available from the Flink WebUI, which can help isolate the cause of the long checkpoint. To see an exact definition of each of the metrics available on checkpoints, go to History Tab. Investigating When investigating long checkpoint duration, the most important thing to determine is the bottleneck for the checkpoint, i.e., what operator and subtask is taking the longest to checkpoint and which stage of that subtask is taking an extended period of time. This can be determined using the Flink WebUI under the jobs checkpoint task. Flink’s Web interface provides data and information that helps to investigate checkpointing issues. For a full breakdown, see Monitoring Checkpointing. The first thing to look at is the End to End Duration of each operator in the Job graph to determine which operator is taking long to checkpoint and warrants further investigation. Per the Flink documentation, the definition of the duration is: The duration from the trigger timestamp until the latest acknowledgement (or n/a if no acknowledgement received yet). This end to end duration for a complete checkpoint is determined by the last subtask that acknowledges the checkpoint. This time is usually larger than single subtasks need to actually checkpoint the state. The other durations for the checkpoint also gives more fine-grained information as to where the time is being spent. If the Sync Duration is high then this indicates something is happening during the snapshotting. During this stage snapshotState() is called for classes that implement the snapshotState interface; this can be user code so thread-dumps can be useful for investigating this. Checkpoints 775 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide A long Async Duration would suggest that a lot of time is being spent on uploading the state to Amazon S3. This can occur if the state is large or if there is a lot of state files that are being uploaded. If this is the case it is worth investigating how state is being used by the application and ensuring that the Flink native data structures are being used where possible (Using Keyed State). Managed Service for Apache Flink configures Flink in such a way as to minimize the number of Amazon S3 calls to ensure this doesn’t get too long. Following is an example of an operator's checkpointing statistics. It shows that the Async Duration is relatively long compared to the preceding operator checkpointing statistics. The Start Delay being high would show that the majority of the time is being spent on waiting for the checkpoint barrier to reach the operator. This indicates that the application is taking a while to process records, meaning the barrier is flowing through the job graph slowly. This is usually the case if the Job is backpressured or if an operator(s) is constantly busy. Following is an example of a JobGraph where the second KeyedProcess operator is busy. Checkpoints 776 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide You can investigate what is taking so long by either using Flink Flame Graphs or TaskManager thread dumps. Once the bottle-neck has been identified, it can be investigated further using either Flame-graphs or thread-dumps. Thread dumps Thread dumps are another debugging tool that is at a slightly lower level than flame graphs. A thread dump outputs the execution state of all threads at a point in time. Flink takes a JVM thread dump, which is an execution state of all threads within the Flink process. The state of a thread is presented by a stack trace of the thread as well as some additional information. Flame graphs are actually built using multiple stack traces taken in |
analytics-java-api-235 | analytics-java-api.pdf | 235 | thread dumps. Once the bottle-neck has been identified, it can be investigated further using either Flame-graphs or thread-dumps. Thread dumps Thread dumps are another debugging tool that is at a slightly lower level than flame graphs. A thread dump outputs the execution state of all threads at a point in time. Flink takes a JVM thread dump, which is an execution state of all threads within the Flink process. The state of a thread is presented by a stack trace of the thread as well as some additional information. Flame graphs are actually built using multiple stack traces taken in quick succession. The graph is a visualisation made from these traces that makes it easy to identify the common code paths. "KeyedProcess (1/3)#0" prio=5 Id=1423 RUNNABLE at app//scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:154) at $line33.$read$$iw$$iw$ExpensiveFunction.processElement(<console>>19) at $line33.$read$$iw$$iw$ExpensiveFunction.processElement(<console>:14) at app// org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:83) at app//org.apache.flink.streaming.runtime.tasks.OneInputStreamTask $StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:205) at app// org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134) at app// org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105) at app// org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:66) ... Above is a snippet of a thread dump taken from the Flink UI for a single thread. The first line contains some general information about this thread including: • The thread name KeyedProcess (1/3)#0 • Priority of the thread prio=5 • A unique thread Id Id=1423 • Thread state RUNNABLE The name of a thread usually gives information as to the general purpose of the thread. Operator threads can be identified by their name since operator threads have the same name as the Checkpoints 777 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide operator, as well as an indication of which subtask it is related to, e.g., the KeyedProcess (1/3)#0 thread is from the KeyedProcess operator and is from the 1st (out of 3) subtask. Threads can be in one of a few states: • NEW – The thread has been created but has not yet been processed • RUNNABLE – The thread is execution on the CPU • BLOCKED – The thread is waiting for another thread to release it’s lock • WAITING – The thread is waiting by using a wait(), join(), or park() method • TIMED_WAITING – The thread is waiting by using a sleep, wait, join or park method, but with a maximum wait time. Note In Flink 1.13, the maximum depth of a single stacktrace in the thread dump is limited to 8. Note Thread dumps should be the last resort for debugging performance issues in a Flink application as they can be challenging to read, require multiple samples to be taken and manually analysed. If at all possible it is preferable to use flame graphs. Thread dumps in Flink In Flink, a thread dump can be taken by choosing the Task Managers option on the left navigation bar of the Flink UI, selecting a specific task manager, and then navigating to the Thread Dump tab. The thread dump can be downloaded, copied to your favorite text editor (or thread dump analyzer), or analyzed directly inside the text view in the Flink Web UI (however, this last option can be a bit clunky. To determine which Task Manager to take a thread dump of the TaskManagers tab can be used when a particular operator is chosen. This shows that the operator is running on different subtasks of an operator and can run on different Task Managers. Checkpoints 778 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide The dump will be comprised of multiple stack traces. However when investigating the dump the ones related to an operator are the most important. These can easily be found since operator threads have the same name as the operator, as well as an indication of which subtask it is related to. For example the following stack trace is from the KeyedProcess operator and is the first subtask. "KeyedProcess (1/3)#0" prio=5 Id=595 RUNNABLE at app//scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:155) at $line360.$read$$iw$$iw$ExpensiveFunction.processElement(<console>:19) at $line360.$read$$iw$$iw$ExpensiveFunction.processElement(<console>:14) at app// org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:83) at app//org.apache.flink.streaming.runtime.tasks.OneInputStreamTask $StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:205) at app// org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134) at app// org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105) at app// org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:66) ... This can become confusing if there are multiple operators with the same name but we can name operators to get around this. For example: .... .process(new ExpensiveFunction).name("Expensive function") Checkpoints 779 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Flame graphs Flame graphs are a useful debugging tool that visualize the stack traces of the targeted code, which allows the most frequent code paths to be identified. They are created by sampling stack traces a number of times. The x-axis of a flame graph shows the different stack profiles, while the y-axis shows the stack depth, and calls in the stack trace. A single rectangle in a flame graph represents on stack frame, and the width of a frame shows how frequently it appears in the stacks. For more details about flame graphs and how to use them, see Flame Graphs. In Flink, the flame graph |
analytics-java-api-236 | analytics-java-api.pdf | 236 | debugging tool that visualize the stack traces of the targeted code, which allows the most frequent code paths to be identified. They are created by sampling stack traces a number of times. The x-axis of a flame graph shows the different stack profiles, while the y-axis shows the stack depth, and calls in the stack trace. A single rectangle in a flame graph represents on stack frame, and the width of a frame shows how frequently it appears in the stacks. For more details about flame graphs and how to use them, see Flame Graphs. In Flink, the flame graph for an operator can be accessed via the Web UI by selecting an operator and then choosing the FlameGraph tab. Once enough samples have been collected the flamegraph will be displayed. Following is the FlameGraph for the ProcessFunction that was taking a lot of time to checkpoint. This is a very simple flame graph and shows that all the CPU time is being spent within a foreach look within the processElement of the ExpensiveFunction operator. You also get the line number to help determine where in the code execution is taking place. Checkpointing is timing out If your application is not optimized or properly provisioned, checkpoints can fail. This section describes symptoms and troubleshooting steps for this condition. Symptoms If checkpoints fail for your application, the numberOfFailedCheckpoints will be greater than zero. Checkpointing is timing out 780 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Checkpoints can fail due to either direct failures, such as application errors, or due to transient failures, such as running out of application resources. Check your application logs and metrics for the following symptoms: • Errors in your code. • Errors accessing your application's dependent services. • Errors serializing data. If the default serializer can't serialize your application data, the application will fail. For information about using a custom serializer in your application, see Data Types and Serialization in the Apache Flink Documentation. • Out of Memory errors. • Spikes or steady increases in the following metrics: • heapMemoryUtilization • oldGenerationGCTime • oldGenerationGCCount • lastCheckpointSize • lastCheckpointDuration For more information about monitoring checkpoints, see Monitoring Checkpointing in the Apache Flink Documentation. Causes and solutions Your application log error messages show the cause for direct failures. Transient failures can have the following causes: • Your application has insufficient KPU provisioning. For information about increasing application provisioning, see Implement application scaling. • Your application state size is too large. You can monitor your application state size using the lastCheckpointSize metric. • Your application's state data is unequally distributed between keys. If your application uses the KeyBy operator, ensure that your incoming data is being divided equally between keys. If most of the data is being assigned to a single key, this creates a bottleneck that causes failures. • Your application is experiencing memory or garbage collection backpressure. Monitor your application's heapMemoryUtilization, oldGenerationGCTime, and oldGenerationGCCount for spikes or steadily increasing values. Checkpointing is timing out 781 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Checkpoint failure for Apache Beam application If your Beam application is configured with shutdownSourcesAfterIdleMs set to 0ms, checkpoints can fail to trigger because tasks are in "FINISHED" state. This section describes symptoms and resolution for this condition. Symptom Go to your Managed Service for Apache Flink application CloudWatch logs and check if the following log message has been logged. The following log message indicates that checkpoint failed to trigger as some tasks has been finished. { "locationInformation": "org.apache.flink.runtime.checkpoint.CheckpointCoordinator.onTriggerFailure(CheckpointCoordinator.java:888)", "logger": "org.apache.flink.runtime.checkpoint.CheckpointCoordinator", "message": "Failed to trigger checkpoint for job your job ID since some tasks of job your job ID has been finished, abort the checkpoint Failure reason: Not all required tasks are currently running.", "threadName": "Checkpoint Timer", "applicationARN": your application ARN, "applicationVersionId": "5", "messageSchemaVersion": "1", "messageType": "INFO" } This can also be found on Flink dashboard where some tasks have entered "FINISHED" state, and checkpointing is not possible anymore. Checkpoint failure for Apache Beam 782 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Cause shutdownSourcesAfterIdleMs is a Beam config variable that shuts down sources which have been idle for the configured time of milliseconds. Once a source has been shut down, checkpointing is not possible anymore. This could lead to checkpoint failure. One of the causes for tasks entering "FINISHED" state is when shutdownSourcesAfterIdleMs is set to 0ms, which means that tasks that are idle will be shutdown immediately. Solution To prevent tasks from entering "FINISHED" state immediately, set shutdownSourcesAfterIdleMs to Long.MAX_VALUE. This can be done in two ways: • Option 1: If your beam configuration is set in your Managed Service for Apache Flink application configuration page, then you can add a new key value pair to set shutdpwnSourcesAfteridleMs as follows: • |
analytics-java-api-237 | analytics-java-api.pdf | 237 | milliseconds. Once a source has been shut down, checkpointing is not possible anymore. This could lead to checkpoint failure. One of the causes for tasks entering "FINISHED" state is when shutdownSourcesAfterIdleMs is set to 0ms, which means that tasks that are idle will be shutdown immediately. Solution To prevent tasks from entering "FINISHED" state immediately, set shutdownSourcesAfterIdleMs to Long.MAX_VALUE. This can be done in two ways: • Option 1: If your beam configuration is set in your Managed Service for Apache Flink application configuration page, then you can add a new key value pair to set shutdpwnSourcesAfteridleMs as follows: • Option 2: If your beam configuration is set in your JAR file, then you can set shutdownSourcesAfterIdleMs as follows: FlinkPipelineOptions options = PipelineOptionsFactory.create().as(FlinkPipelineOptions.class); // Initialize Beam Options object options.setShutdownSourcesAfterIdleMs(Long.MAX_VALUE); // set shutdownSourcesAfterIdleMs to Long.MAX_VALUE options.setRunner(FlinkRunner.class); Pipeline p = Pipeline.create(options); // attach specified options to Beam pipeline Checkpoint failure for Apache Beam 783 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Backpressure Flink uses backpressure to adapt the processing speed of individual operators. The operator can struggle to keep up processing the message volume it receives for many reasons. The operation may require more CPU resources than the operator has available, The operator may wait for I/O operations to complete. If an operator cannot process events fast enough, it build backpressure in the upstream operators feeding into the slow operator. This causes the upstream operators to slow down, which can further propagate the backpressure to the source and cause the source to adapt to the overall throughput of the application by slowing down as well. You can find a deeper description of backpressure and how it works at How Apache Flink™ handles backpressure. Knowing which operators in an applications are slow gives you crucial information to understand the root cause of performance problems in the application. Backpressure information is exposed through the Flink Dashboard. To identify the slow operator, look for the operator with a high backpressure value that is closest to a sink (operator B in the following example). The operator causing the slowness is then one of the downstream operators (operator C in the example). B could process events faster, but is backpressured as it cannot forward the output to the actual slow operator C. A (backpressured 93%) -> B (backpressured 85%) -> C (backpressured 11%) -> D (backpressured 0%) Once you have identified the slow operator, try to understand why it's slow. There could be a myriad of reasons and sometimes it's not obvious what's wrong and can require days of debugging and profiling to resolve. Following are some obvious and more common reasons, some of which are further explained below: • The operator is doing slow I/O, e.g., network calls (consider using AsyncIO instead). • There is a skew in the data and one operator is receiving more events than others (verify by looking at the number of messages in/out of individual subtasks (i.e., instances of the same operator) in the Flink dashboard. • It's a resource intensive operation (if there is no data skew consider scaling out for CPU/memory bound work or increasing ParallelismPerKPU for I/O bound work) • Extensive logging in the operator (reduce the logging to a minimum for production application or consider sending debug output to a data stream instead). Backpressure 784 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Testing throughput with the Discarding Sink The Discarding Sink simply disregards all events it receives while still executing the application (an application without any sink fails to execute). This is very useful for throughput testing, profiling, and to verify if the application is scaling properly. It's also a very pragmatic sanity check to verify if the sinks are causing back pressure or the application (but just checking the backpressure metrics is often easier and more straightforward). By replacing all sinks of an application with a discarding sink and creating a mock source that generates data that r esembles production data, you can measure the maximum throughput of the application for a certain parallelism setting. You can then also increase the parallelism to verify that the application scales properly and does not have a bottleneck that only emerges at higher throughput (e.g., because of data skew). Data skew A Flink application is executed on a cluster in a distributed fashion. To scale out to multiple nodes, Flink uses the concept of keyed streams, which essentially means that the events of a stream are partitioned according to a specific key, e.g., customer id, and Flink can then process different partitions on different nodes. Many of the Flink operators are then evaluated based on these partitions, e.g., Keyed Windows, Process Functions and Async I/O. Choosing a partition key often depends on the business logic. At the same time, |
analytics-java-api-238 | analytics-java-api.pdf | 238 | only emerges at higher throughput (e.g., because of data skew). Data skew A Flink application is executed on a cluster in a distributed fashion. To scale out to multiple nodes, Flink uses the concept of keyed streams, which essentially means that the events of a stream are partitioned according to a specific key, e.g., customer id, and Flink can then process different partitions on different nodes. Many of the Flink operators are then evaluated based on these partitions, e.g., Keyed Windows, Process Functions and Async I/O. Choosing a partition key often depends on the business logic. At the same time, many of the best practices for, e.g., DynamoDB and Spark, equally apply to Flink, including: • ensuring a high cardinality of partition keys • avoiding skew in the event volume between partitions You can identify skew in the partitions by comparing the records received/sent of subtasks (i.e., instances of the same operator) in the Flink dashboard. In addition, Managed Service for Apache Flink monitoring can be configured to expose metrics for numRecordsIn/Out and numRecordsInPerSecond/OutPerSecond on a subtask level as well. State skew For stateful operators, i.e., operators that maintain state for their business logic such as windows, data skew always leads to state skew. Some subtasks receive more events than others because of the skew in the data and hence are also persisting more data in state. However, even for an Data skew 785 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide application that has evenly balanced partitions, there can be a skew in how much data is persisted in state. For instance, for session windows, some users and sessions respectively may be much longer than others. If the longer sessions happen to be part of the same partition, it can lead to an imbalance of the state size kept by different subtasks of the same operator. State skew not only increases more memory and disk resources required by individual subtasks, it can also decrease the overall performance of the application. When an application is taking a checkpoint or savepoint, the operator state is persisted to Amazon S3, to protect the state against node or cluster failure. During this process (especially with exactly once semantics that are enabled by default on Managed Service for Apache Flink), the processing stalls from an external perspective until the checkpoint/savepoint has completed. If there is data skew, the time to complete the operation can be bound by a single subtask that has accumulated a particularly high amount of state. In extreme cases, taking checkpoints/savepoints can fail because of a single subtask not being able to persist state. So similar to data skew, state skew can substantially slow down an application. To identify state skew, you can leverage the Flink dashboard. Find a recent checkpoint or savepoint and compare the amount of data that has been stored for individual subtasks in the details. Integrate with resources in different Regions You can enable using StreamingFileSink to write to an Amazon S3 bucket in a different Region from your Managed Service for Apache Flink application via a setting required for cross Region replication in the Flink configuration. To do this, file a support ticket at AWS Support Center. Integrate with resources in different Regions 786 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Document history for Amazon Managed Service for Apache Flink The following table describes the important changes to the documentation since the last release of Managed Service for Apache Flink. • API version: 2018-05-23 • Latest documentation update: August 30, 2023 Change Description Date Kinesis Data Analytics is now known as Managed Service There are no changes to the service endpoints, APIs, the for Apache Flink Command Line Interface, IAM August 30, 2023 access policies, CloudWatch Metrics, or the AWS Billing dashboards. Your existing applications will continue to work as they did previousl y. For more information, see What Is Managed Service for Apache Flink? Managed Service for Apache Flink now supports applicati ons that use Apache Flink version 1.15.2. Create Kinesis Data Analytics applications using the Apache Flink Table API. For more information, see Create an application. Managed Service for Apache Flink now supports applicati ons that use Apache Flink Support for Apache Flink version 1.15.2 Support for Apache Flink version 1.13.2 November 22, 2022 October 13, 2021 787 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Change Description Date Support for Python version 1.13.2. Create Kinesis Data Analytics applications using the Apache Flink Table API. For more information, see Getting Started: Flink 1.13.2. Managed Service for Apache Flink now supports applicati ons that use Python with the Apache Flink Table API & SQL. For more information, see Use Python. March 25, 2021 Support for Apache Flink 1.11.1 Managed Service for Apache |
analytics-java-api-239 | analytics-java-api.pdf | 239 | that use Apache Flink Support for Apache Flink version 1.15.2 Support for Apache Flink version 1.13.2 November 22, 2022 October 13, 2021 787 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Change Description Date Support for Python version 1.13.2. Create Kinesis Data Analytics applications using the Apache Flink Table API. For more information, see Getting Started: Flink 1.13.2. Managed Service for Apache Flink now supports applicati ons that use Python with the Apache Flink Table API & SQL. For more information, see Use Python. March 25, 2021 Support for Apache Flink 1.11.1 Managed Service for Apache Flink now supports applicati November 19, 2020 Apache Flink Dashboard EFO Consumer ons that use Apache Flink 1.11.1. Create Kinesis Data Analytics applications using the Apache Flink Table API. For more information, see Create an application. Use the Apache Flink Dashboard to monitor application health and performance. For more information, see Use the Apache Flink Dashboard. Create applications that use an Enhanced Fan-Out (EFO) consumer to read from a Kinesis Data Stream. For more information, see EFO Consumer. November 19, 2020 October 6, 2020 788 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Change Description Date Apache Beam Performance Custom Keystore CloudWatch Alarms New CloudWatch Metrics Custom CloudWatch Metrics Create applications that use Apache Beam to process streaming data. For more information, see Use CloudFormation. How to troubleshoot applicati on performance issues, and how to create a performan t application. For more information, see ???. How to access an Amazon MSK cluster that uses a custom keystore for encryption in transit. For more information, see Custom Truststore. Recommendations for creating CloudWatch alarms with Managed Service for Apache Flink. For more information, see ???. Managed Service for Apache Flink now emits 22 metrics to Amazon CloudWatch Metrics. For more information, see ???. Define application-specific metrics and emit them to Amazon CloudWatch Metrics. For more information, see ???. September 15, 2020 July 21, 2020 June 10, 2020 June 5, 2020 May 12, 2020 May 12, 2020 789 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Change Description Date Example: Read From a Kinesis Stream in a Different Account Learn how to access a Kinesis stream in a different AWS March 30, 2020 Support for Apache Flink 1.8.2 account in your Managed Service for Apache Flink application. For more information, see Cross-Acc ount. Managed Service for Apache Flink now supports applicati ons that use Apache Flink 1.8.2. Use the Flink Streaming FileSink connector to write output directly to S3. For more information, see Create an application. December 17, 2019 Managed Service for Apache Flink VPC Configure a Managed Service for Apache Flink application November 25, 2019 Managed Service for Apache Flink Best Practices Analyze Managed Service for Apache Flink Application Logs to connect to a virtual private cloud. For more information, see Configure MSF to access resources in an Amazon VPC. Best practices for creating and administering Managed Service for Apache Flink applications. For more information, see ???. Use CloudWatch Logs Insights to monitor your Managed Service for Apache Flink application. For more information, see ???. October 14, 2019 June 26, 2019 790 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Change Description Date Managed Service for Apache Flink Application Runtime Work with Runtime Propertie s in Managed Service for June 24, 2019 Properties Apache Flink. For more information, see Use runtime properties. Tagging Managed Service for Apache Flink Applications Use application tagging to determine per-application May 8, 2019 costs, control access, or for user-defined purposes. For more information, see Add tags to Managed Service for Apache Flink applications. Logging Managed Service for Apache Flink API Calls with Managed Service for Apache Flink is integrated with AWS March 22, 2019 AWS CloudTrail Create an Application (Firehose Sink) Public release CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in Managed Service for Apache Flink. For more information, see ???. Exercise to create a Managed Service for Apache Flink with an Amazon Kinesis data stream as a source, and an Amazon Data Firehose stream as a sink. For more informati on, see Firehose sink. This is the initial release of the Managed Service for Apache Flink Developer Guide for Java Applications. December 13, 2018 November 27, 2018 791 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Managed Service for Apache Flink API example code This topic contains example request blocks for Managed Service for Apache Flink actions. To use JSON as the input for an action with the AWS Command Line Interface (AWS CLI), save the request in a JSON file. Then pass the file name into the action using the --cli-input-json parameter. The following example demonstrates how to |
analytics-java-api-240 | analytics-java-api.pdf | 240 | sink. This is the initial release of the Managed Service for Apache Flink Developer Guide for Java Applications. December 13, 2018 November 27, 2018 791 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Managed Service for Apache Flink API example code This topic contains example request blocks for Managed Service for Apache Flink actions. To use JSON as the input for an action with the AWS Command Line Interface (AWS CLI), save the request in a JSON file. Then pass the file name into the action using the --cli-input-json parameter. The following example demonstrates how to use a JSON file with an action. $ aws kinesisanalyticsv2 start-application --cli-input-json file://start.json For more information about using JSON with the AWS CLI, see Generate CLI Skeleton and CLI Input JSON Parameters in the AWS Command Line Interface User Guide. Topics • AddApplicationCloudWatchLoggingOption • AddApplicationInput • AddApplicationInputProcessingConfiguration • AddApplicationOutput • AddApplicationReferenceDataSource • AddApplicationVpcConfiguration • CreateApplication • CreateApplicationSnapshot • DeleteApplication • DeleteApplicationCloudWatchLoggingOption • DeleteApplicationInputProcessingConfiguration • DeleteApplicationOutput • DeleteApplicationReferenceDataSource • DeleteApplicationSnapshot • DeleteApplicationVpcConfiguration • DescribeApplication • DescribeApplicationSnapshot 792 Managed Service for Apache Flink Developer Guide Managed Service for Apache Flink • DiscoverInputSchema • ListApplications • ListApplicationSnapshots • StartApplication • StopApplication • UpdateApplication AddApplicationCloudWatchLoggingOption The following example request code for the AddApplicationCloudWatchLoggingOption action adds an Amazon CloudWatch logging option to a Managed Service for Apache Flink application: { "ApplicationName": "MyApplication", "CloudWatchLoggingOption": { "LogStreamARN": "arn:aws:logs:us-east-1:123456789123:log-group:my-log- group:log-stream:My-LogStream" }, "CurrentApplicationVersionId": 2 } AddApplicationInput The following example request code for the AddApplicationInput action adds an application input to a Managed Service for Apache Flink application: { "ApplicationName": "MyApplication", "CurrentApplicationVersionId": 2, "Input": { "InputParallelism": { "Count": 2 }, "InputSchema": { "RecordColumns": [ { "Mapping": "$.TICKER", "Name": "TICKER_SYMBOL", AddApplicationCloudWatchLoggingOption 793 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "SqlType": "VARCHAR(50)" }, { "SqlType": "REAL", "Name": "PRICE", "Mapping": "$.PRICE" } ], "RecordEncoding": "UTF-8", "RecordFormat": { "MappingParameters": { "JSONMappingParameters": { "RecordRowPath": "$" } }, "RecordFormatType": "JSON" } }, "KinesisStreamsInput": { "ResourceARN": "arn:aws:kinesis:us-east-1:012345678901:stream/ ExampleInputStream" } } } AddApplicationInputProcessingConfiguration The following example request code for the AddApplicationInputProcessingConfiguration action adds an application input processing configuration to a Managed Service for Apache Flink application: { "ApplicationName": "MyApplication", "CurrentApplicationVersionId": 2, "InputId": "2.1", "InputProcessingConfiguration": { "InputLambdaProcessor": { "ResourceARN": "arn:aws:lambda:us- east-1:012345678901:function:MyLambdaFunction" } } } AddApplicationInputProcessingConfiguration 794 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide AddApplicationOutput The following example request code for the AddApplicationOutput action adds a Kinesis data stream as an application output to a Managed Service for Apache Flink application: { "ApplicationName": "MyApplication", "CurrentApplicationVersionId": 2, "Output": { "DestinationSchema": { "RecordFormatType": "JSON" }, "KinesisStreamsOutput": { "ResourceARN": "arn:aws:kinesis:us-east-1:012345678901:stream/ ExampleOutputStream" }, "Name": "DESTINATION_SQL_STREAM" } } AddApplicationReferenceDataSource The following example request code for the AddApplicationReferenceDataSource action adds a CSV application reference data source to a Managed Service for Apache Flink application: { "ApplicationName": "MyApplication", "CurrentApplicationVersionId": 5, "ReferenceDataSource": { "ReferenceSchema": { "RecordColumns": [ { "Mapping": "$.TICKER", "Name": "TICKER", "SqlType": "VARCHAR(4)" }, { "Mapping": "$.COMPANYNAME", "Name": "COMPANY_NAME", "SqlType": "VARCHAR(40)" }, ], AddApplicationOutput 795 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "RecordEncoding": "UTF-8", "RecordFormat": { "MappingParameters": { "CSVMappingParameters": { "RecordColumnDelimiter": " ", "RecordRowDelimiter": "\r\n" } }, "RecordFormatType": "CSV" } }, "S3ReferenceDataSource": { "BucketARN": "arn:aws:s3:::amzn-s3-demo-bucket", "FileKey": "TickerReference.csv" }, "TableName": "string" } } AddApplicationVpcConfiguration The following example request code for the AddApplicationVpcConfiguration action adds a VPC configuration to an existing application: { "ApplicationName": "MyApplication", "CurrentApplicationVersionId": 9, "VpcConfiguration": { "SecurityGroupIds": [ "sg-0123456789abcdef0" ], "SubnetIds": [ "subnet-0123456789abcdef0" ] } } CreateApplication The following example request code for the CreateApplication action creates a Managed Service for Apache Flink application: { "ApplicationName":"MyApplication", AddApplicationVpcConfiguration 796 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "ApplicationDescription":"My-Application-Description", "RuntimeEnvironment":"FLINK-1_15", "ServiceExecutionRole":"arn:aws:iam::123456789123:role/myrole", "CloudWatchLoggingOptions":[ { "LogStreamARN":"arn:aws:logs:us-east-1:123456789123:log-group:my-log-group:log- stream:My-LogStream" } ], "ApplicationConfiguration": { "EnvironmentProperties": {"PropertyGroups": [ {"PropertyGroupId": "ConsumerConfigProperties", "PropertyMap": {"aws.region": "us-east-1", "flink.stream.initpos": "LATEST"} }, {"PropertyGroupId": "ProducerConfigProperties", "PropertyMap": {"aws.region": "us-east-1"} }, ] }, "ApplicationCodeConfiguration":{ "CodeContent":{ "S3ContentLocation":{ "BucketARN":"arn:aws:s3:::amzn-s3-demo-bucket", "FileKey":"myflink.jar", "ObjectVersion":"AbCdEfGhIjKlMnOpQrStUvWxYz12345" } }, "CodeContentType":"ZIPFILE" }, "FlinkApplicationConfiguration":{ "ParallelismConfiguration":{ "ConfigurationType":"CUSTOM", "Parallelism":2, "ParallelismPerKPU":1, "AutoScalingEnabled":true } } } CreateApplication 797 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide } CreateApplicationSnapshot The following example request code for the CreateApplicationSnapshot action creates a snapshot of application state: { "ApplicationName": "MyApplication", "SnapshotName": "MySnapshot" } DeleteApplication The following example request code for the DeleteApplication action deletes a Managed Service for Apache Flink application: {"ApplicationName": "MyApplication", "CreateTimestamp": 12345678912} DeleteApplicationCloudWatchLoggingOption The following example request code for the DeleteApplicationCloudWatchLoggingOption action deletes an Amazon CloudWatch logging option from a Managed Service for Apache Flink application: { "ApplicationName": "MyApplication", "CloudWatchLoggingOptionId": "3.1" "CurrentApplicationVersionId": 3 } DeleteApplicationInputProcessingConfiguration The following example request code for the DeleteApplicationInputProcessingConfiguration action removes an input processing configuration from a Managed Service for Apache Flink application: CreateApplicationSnapshot 798 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide { "ApplicationName": "MyApplication", "CurrentApplicationVersionId": 4, "InputId": "2.1" } DeleteApplicationOutput The following example request code for the DeleteApplicationOutput action removes an application output from a |
analytics-java-api-241 | analytics-java-api.pdf | 241 | Apache Flink application: {"ApplicationName": "MyApplication", "CreateTimestamp": 12345678912} DeleteApplicationCloudWatchLoggingOption The following example request code for the DeleteApplicationCloudWatchLoggingOption action deletes an Amazon CloudWatch logging option from a Managed Service for Apache Flink application: { "ApplicationName": "MyApplication", "CloudWatchLoggingOptionId": "3.1" "CurrentApplicationVersionId": 3 } DeleteApplicationInputProcessingConfiguration The following example request code for the DeleteApplicationInputProcessingConfiguration action removes an input processing configuration from a Managed Service for Apache Flink application: CreateApplicationSnapshot 798 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide { "ApplicationName": "MyApplication", "CurrentApplicationVersionId": 4, "InputId": "2.1" } DeleteApplicationOutput The following example request code for the DeleteApplicationOutput action removes an application output from a Managed Service for Apache Flink application: { "ApplicationName": "MyApplication", "CurrentApplicationVersionId": 4, "OutputId": "4.1" } DeleteApplicationReferenceDataSource The following example request code for the DeleteApplicationReferenceDataSource action removes an application reference data source from a Managed Service for Apache Flink application: { "ApplicationName": "MyApplication", "CurrentApplicationVersionId": 5, "ReferenceId": "5.1" } DeleteApplicationSnapshot The following example request code for the DeleteApplicationSnapshot action deletes a snapshot of application state: { "ApplicationName": "MyApplication", "SnapshotCreationTimestamp": 12345678912, "SnapshotName": "MySnapshot" } DeleteApplicationOutput 799 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide DeleteApplicationVpcConfiguration The following example request code for the DeleteApplicationVpcConfiguration action removes an existing VPC configuration from an application: { "ApplicationName": "MyApplication", "CurrentApplicationVersionId": 9, "VpcConfigurationId": "1.1" } DescribeApplication The following example request code for the DescribeApplication action returns details about a Managed Service for Apache Flink application: {"ApplicationName": "MyApplication"} DescribeApplicationSnapshot The following example request code for the DescribeApplicationSnapshot action returns details about a snapshot of application state: { "ApplicationName": "MyApplication", "SnapshotName": "MySnapshot" } DiscoverInputSchema The following example request code for the DiscoverInputSchema action generates a schema from a streaming source: { "InputProcessingConfiguration": { "InputLambdaProcessor": { DeleteApplicationVpcConfiguration 800 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "ResourceARN": "arn:aws:lambda:us- east-1:012345678901:function:MyLambdaFunction" } }, "InputStartingPositionConfiguration": { "InputStartingPosition": "NOW" }, "ResourceARN": "arn:aws:kinesis:us-east-1:012345678901:stream/ExampleInputStream", "S3Configuration": { "BucketARN": "string", "FileKey": "string" }, "ServiceExecutionRole": "string" } The following example request code for the DiscoverInputSchema action generates a schema from a reference source: { "S3Configuration": { "BucketARN": "arn:aws:s3:::amzn-s3-demo-bucket", "FileKey": "TickerReference.csv" }, "ServiceExecutionRole": "arn:aws:iam::123456789123:role/myrole" } ListApplications The following example request code for the ListApplications action returns a list of Managed Service for Apache Flink applications in your account: { "ExclusiveStartApplicationName": "MyApplication", "Limit": 50 } ListApplicationSnapshots The following example request code for the ListApplicationSnapshots action returns a list of snapshots of application state: ListApplications 801 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide {"ApplicationName": "MyApplication", "Limit": 50, "NextToken": "aBcDeFgHiJkLmNoPqRsTuVwXyZ0123" } StartApplication The following example request code for the StartApplication action starts a Managed Service for Apache Flink application, and loads the application state from the latest snapshot (if any): { "ApplicationName": "MyApplication", "RunConfiguration": { "ApplicationRestoreConfiguration": { "ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT" } } } StopApplication The following example request code for the API_StopApplication action stops a Managed Service for Apache Flink application: {"ApplicationName": "MyApplication"} UpdateApplication The following example request code for the UpdateApplication action updates a Managed Service for Apache Flink application to change the location of the application code: {"ApplicationName": "MyApplication", "CurrentApplicationVersionId": 1, "ApplicationConfigurationUpdate": { "ApplicationCodeConfigurationUpdate": { "CodeContentTypeUpdate": "ZIPFILE", "CodeContentUpdate": { "S3ContentLocationUpdate": { StartApplication 802 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "BucketARNUpdate": "arn:aws:s3:::amzn-s3-demo-bucket", "FileKeyUpdate": "my_new_code.zip", "ObjectVersionUpdate": "2" } } } } UpdateApplication 803 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Managed Service for Apache Flink API Reference For information about the APIs that Managed Service for Apache Flink provides, see Managed Service for Apache Flink API Reference. 804 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide This content was moved to Release versions. See Release versions. 805 |
analytics-on-aws-how-to-choose-001 | analytics-on-aws-how-to-choose.pdf | 1 | AWS Decision Guide Choosing an AWS analytics service Copyright © 2025 Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Choosing an AWS analytics service AWS Decision Guide Choosing an AWS analytics service: AWS Decision Guide Copyright © 2025 Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon. Choosing an AWS analytics service Table of Contents AWS Decision Guide Decision guide .................................................................................................................................. 1 Introduction ................................................................................................................................................... 1 Understand ..................................................................................................................................................... 2 Consider .......................................................................................................................................................... 6 Choose .......................................................................................................................................................... 14 Use ................................................................................................................................................................. 17 Explore .......................................................................................................................................................... 26 Document history .......................................................................................................................... 28 iii Choosing an AWS analytics service AWS Decision Guide Choosing an AWS analytics service Taking the first step Purpose Last updated Covered services Help determine which AWS analytics services are the best fit for your organization. February 20, 2025 • Amazon Athena • AWS Clean Rooms • Amazon Data Firehose • Amazon DataZone • Amazon EMR • AWS Glue • Amazon Kinesis Data Streams • Amazon Managed Service for Apache Flink • Amazon Managed Streaming for Apache Kafka • Amazon OpenSearch Service • QuickSight • Amazon Redshift • Amazon S3 • Amazon SageMaker Lakehouse • Amazon SageMaker Unified Studio Introduction Data is foundational to modern business. People and applications need to securely access and analyze data, which comes from new and diverse sources. The volume of data is also constantly increasing, which can cause organizations to struggle with capturing, storing, and analyzing all the necessary data. Introduction 1 Choosing an AWS analytics service AWS Decision Guide Meeting these challenges means building a modern data architecture that breaks down all of your data silos for analytics and insights--including third-party data--and makes it accessible to everyone in the organization, in one place, with end-to-end governance. It's also increasingly important to connect your analytics and machine learning (ML) systems to enable predictive analytics. This decision guide helps you ask the right questions to build your modern data architecture on AWS services. It explains how to break down your data silos (by connecting your data lake and data warehouses), your system silos (by connecting ML and analytics), and your people silos (by putting data in the hands of everyone in your organization). This eight-minute exerpt is from a one-hour presentation by Sirish Chandrasekaran and Rick Sears at re:Invent 2024. It provides an overview of how a fictional company, Maxdome, uses SageMaker Unified Studio AI and analytics to unlock data insights. Understand AWS analytics services A modern data strategy is built with a set of technology building blocks that help you manage, access, analyze, and act on data. It also gives you multiple options to connect to data sources. A modern data strategy should empower your teams to: • Use your preferred tools or techniques • Use artificial intelligence (AI) to assist with finding answers to specific questions about your data • Manage who has access to data with the proper security and data governance controls • Break down data silos to give you the best of both data lakes and purpose-built data stores • Store any amount of data, at low-cost, and in open, standards-based data formats • Connect your data lakes, data warehouses, operational databases, applications, and federated data sources into a coherent whole AWS offers a variety of services to help you achieve a modern data strategy. The following diagram depicts the AWS services for analytics that this guide covers. The tabs that follow provide additional details. Understand 2 Choosing an AWS analytics service AWS Decision Guide Unified analytics and AI The next generation of Amazon SageMaker combines widely adopted AWS machine learning (ML) and analytics capabilities to deliver an integrated experience for analytics and AI, providing unified access to all your data. Using Amazon SageMaker Unified Studio (preview), you can collaborate and build faster with familiar AWS tools for model development, generative AI application development, data processing, and SQL analytics, all accelerated by Amazon Q Developer, our generative AI assistant for software development. Access your data from data lakes, data warehouses, or third-party and federated sources, with built-in governance to meet enterprise security requirements. Data processing • Amazon Athena helps you analyze unstructured, semi-structured, and structured data stored in Amazon S3. Examples include CSV, JSON, or columnar data formats such as Apache Parquet and Apache ORC. You can use Athena to run ad-hoc queries using ANSI SQL, without the need to aggregate or load the data into Athena. Athena integrates with QuickSight, AWS Glue Data Catalog, |
analytics-on-aws-how-to-choose-002 | analytics-on-aws-how-to-choose.pdf | 2 | processing, and SQL analytics, all accelerated by Amazon Q Developer, our generative AI assistant for software development. Access your data from data lakes, data warehouses, or third-party and federated sources, with built-in governance to meet enterprise security requirements. Data processing • Amazon Athena helps you analyze unstructured, semi-structured, and structured data stored in Amazon S3. Examples include CSV, JSON, or columnar data formats such as Apache Parquet and Apache ORC. You can use Athena to run ad-hoc queries using ANSI SQL, without the need to aggregate or load the data into Athena. Athena integrates with QuickSight, AWS Glue Data Catalog, and other AWS services. You can also analyze data at scale with Trino, without needing to manage infrastructure, and build real-time analytics using Apache Flink and Apache Spark. • Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. Using these frameworks and related open-source projects, you can process data for analytics Understand 3 Choosing an AWS analytics service AWS Decision Guide purposes and business intelligence workloads. Amazon EMR also lets you transform and move large amounts of data into and out of other AWS data stores and databases, such as Amazon S3. • With AWS Glue, you can discover and connect to more than 70 diverse data sources and manage your data in a centralized data catalog. You can visually create, run, and monitor ETL pipelines to load data into your data lakes. Also, you can immediately search and query cataloged data using Athena, Amazon EMR, and Amazon Redshift Spectrum. Data streaming • With Amazon Managed Streaming for Apache Kafka (Amazon MSK), you can build and run applications that use Apache Kafka to process streaming data. Amazon MSK provides the control-plane operations, such as those for creating, updating, and deleting clusters. It lets you use Apache Kafka data-plane operations, such as those for producing and consuming data. • With Amazon Kinesis Data Streams, you can collect and process large streams of data records in real time. The type of data used can include IT infrastructure log data, application logs, social media, market data feeds, and web clickstream data. • Amazon Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon S3, Amazon Redshift, Amazon OpenSearch Service, Splunk, and Apache Iceberg Tables. You can also send data to any custom HTTP endpoint or HTTP endpoints owned by supported third-party service providers, including Datadog, Dynatrace, LogicMonitor, MongoDB, New Relic, Coralogix, and Elastic. • With Amazon Managed Service for Apache Flink, you can use Java, Scala, Python, or SQL to process and analyze streaming data. You can author and run code against streaming sources and static sources to perform time-series analytics, feed real-time dashboards, and metrics. Business intelligence QuickSight gives decision-makers the opportunity to explore and interpret information in an interactive visual environment. In a single data dashboard, QuickSight can include AWS data, third-party data, big data, spreadsheet data, SaaS data, B2B data, and more. With QuickSight Q, you can use natural language to ask questions about your data and receive a response. For example, "What are the top-selling categories in California?" Understand 4 Choosing an AWS analytics service Search analytics AWS Decision Guide Amazon OpenSearch Service provisions all the resources for your OpenSearch cluster and launches it. It also automatically detects and replaces failed OpenSearch Service nodes, reducing the overhead associated with self-managed infrastructures. You can use OpenSearch Service direct query to analyze data in Amazon S3 and other AWS services. Data governance With Amazon DataZone, you can manage and govern access to data by using fine-grained controls. These controls help ensure access with the right level of privileges and context. Amazon DataZone simplifies your architecture by integrating data management services, including Amazon Redshift, Athena, QuickSight, AWS Glue, on-premises sources, and third- party sources. Data collaboration AWS Clean Rooms is a secure collaboration workspace where you can analyze collective datasets without providing access to the raw data. You can collaborate with other companies by choosing the partners with whom you want to collaborate, selecting their datasets, and configuring privacy-enhancing controls for those partners. When you run queries, AWS Clean Rooms reads data from that data's original location and applies built-in analysis rules to help you maintain control over that data. Data lake and data warehouse • Amazon SageMaker Lakehouse unifies your data across Amazon S3 data lakes and Amazon Redshift data warehouses, helping you build powerful analytics, ML, and generative AI applications on a single copy of data. SageMaker Lakehouse gives you the flexibility to access and query your data in-place using Apache Iceberg–compatible tools and engines. You can also integrate data from operational databases and applications into your lakehouse in near real time |
analytics-on-aws-how-to-choose-003 | analytics-on-aws-how-to-choose.pdf | 3 | you run queries, AWS Clean Rooms reads data from that data's original location and applies built-in analysis rules to help you maintain control over that data. Data lake and data warehouse • Amazon SageMaker Lakehouse unifies your data across Amazon S3 data lakes and Amazon Redshift data warehouses, helping you build powerful analytics, ML, and generative AI applications on a single copy of data. SageMaker Lakehouse gives you the flexibility to access and query your data in-place using Apache Iceberg–compatible tools and engines. You can also integrate data from operational databases and applications into your lakehouse in near real time through zero-ETL integrations. With fine-grained permissions, your data is secured across all analytics and ML tools and engines, ensuring consistent access control. • Amazon Simple Storage Service (Amazon S3) can store and protect virtually any amount and kind of data, which you can use for your data lake foundation. Amazon S3 provides management features so that you can optimize, organize, and configure access to your data to meet your specific business, organizational, and compliance requirements. Amazon S3 Tables provide S3 storage that’s optimized for analytics workloads. Using standard SQL statements, you can query your tables with query engines that support Iceberg, such as Athena, Amazon Redshift, and Apache Spark. Understand 5 Choosing an AWS analytics service AWS Decision Guide • Amazon Redshift is a fully managed, petabyte-scale data warehouse service. Amazon Redshift integrates with SageMaker Lakehouse, allowing you to use its powerful SQL analytic capabilities on your unified data across Amazon Redshift data warehouses and Amazon S3 data lakes. You can also use Amazon Q in Amazon Redshift, which simplifies SQL authoring through natural language. Consider criteria for AWS analytics services There are many reasons for building data analytics on AWS. You might need to support a greenfield or pilot project as a first step in your cloud migration journey. Alternatively, you might be migrating an existing workload with as little disruption as possible. Whatever your goal, the following considerations can be useful in making your choice. Assess data sources and data types Analyze available data sources and data types to gain a comprehensive understanding of data diversity, frequency, and quality. Understand any potential challenges in processing and analyzing the data. This analysis is crucial because: • Data sources are diverse and come from various systems, applications, devices, and external platforms. • Data sources have unique structure, format, and frequency of data updates. Analyzing these sources helps in identifying suitable data collection methods and technologies. • Analyzing data types, such as structured, semi-structured, and unstructured data determines the appropriate data processing and storage approaches. • Analyzing data sources and types facilitates data quality assessment, helps you anticipate potential data quality issues—missing values, inconsistencies, or inaccuracies. Data processing requirements Determine data processing requirements for how data is ingested, transformed, cleansed, and prepared for analysis. Key considerations include: • Data transformation: Determine the specific transformations needed to make the raw data suitable for analysis. This involves tasks like data aggregation, normalization, filtering, and enrichment. Consider 6 Choosing an AWS analytics service AWS Decision Guide • Data cleansing: Assess data quality and define processes to handle missing, inaccurate, or inconsistent data. Implement data cleansing techniques to ensure high-quality data for reliable insights. • Processing frequency: Determine whether real-time, near real-time, or batch processing is required based on the analytical needs. Real-time processing enables immediate insights, while batch processing may be sufficient for periodic analyses. • Scalability and throughput: Evaluate the scalability requirements for handling data volumes, processing speed, and the number of concurrent data requests. Ensure that the chosen processing approach can accommodate future growth. • Latency: Consider the acceptable latency for data processing and the time it takes from data ingestion to analysis results. This is particularly important for real-time or time-sensitive analytics. Storage requirements Determine storage needs by determining how and where data is stored throughout the analytics pipeline. Important considerations include: • Data volume: Assess the amount of data being generated and collected, and estimate future data growth to plan for sufficient storage capacity. • Data retention: Define the duration for which data should be retained for historical analysis or compliance purposes. Determine the appropriate data retention policies. • Data access patterns: Understand how data will be accessed and queried to choose the most suitable storage solution. Consider read and write operations, data access frequency, and data locality. • Data security: Prioritize data security by evaluating encryption options, access controls, and data protection mechanisms to safeguard sensitive information. • Cost optimization: Optimize storage costs by selecting the most cost-effective storage solutions based on data access patterns and usage. • Integration with analytics services: Ensure seamless integration between the chosen storage solution and the data processing and analytics tools in the pipeline. Consider 7 Choosing an AWS analytics service Types of |
analytics-on-aws-how-to-choose-004 | analytics-on-aws-how-to-choose.pdf | 4 | access patterns: Understand how data will be accessed and queried to choose the most suitable storage solution. Consider read and write operations, data access frequency, and data locality. • Data security: Prioritize data security by evaluating encryption options, access controls, and data protection mechanisms to safeguard sensitive information. • Cost optimization: Optimize storage costs by selecting the most cost-effective storage solutions based on data access patterns and usage. • Integration with analytics services: Ensure seamless integration between the chosen storage solution and the data processing and analytics tools in the pipeline. Consider 7 Choosing an AWS analytics service Types of data AWS Decision Guide When deciding on analytics services for the collection and ingestion of data, consider various types of data that are relevant to your organization's needs and objectives. Common types of data that you might need to consider includes: • Transactional data: Includes information about individual interactions or transactions, such as customer purchases, financial transactions, online orders, and user activity logs. • File-based data: Refers to structured or unstructured data that is stored in files, such as log files, spreadsheets, documents, images, audio files, and video files. Analytics services should support the ingestion of different file formats. • Event data: Captures significant occurrences or incidents, such as user actions, system events, machine events, or business events. Events can include any data that is arriving in high velocity that is captured for onstream or downstream processing. Operational considerations Operational responsibility is shared between you, and AWS, with the division of responsibility varying across different levels of modernization. You have the option of self-managing your analytics infrastructure on AWS or leveraging the numerous serverless analytics services to lesson your infrastructure management burden. Self-managed options grant users greater control over the infrastructure and configurations, but they require more operational effort. Serverless options abstract away much of the operational burden, providing automatic scalability, high availability, and robust security features, allowing users to focus more on building analytical solutions and driving insights rather than managing infrastructure and operational tasks. Consider these benefits of serverless analytics solutions: • Infrastructure abstraction: Serverless services abstract infrastructure management, relieving users from provisioning, scaling, and maintenance tasks. AWS handles these operational aspects, reducing management overhead. • Auto-scaling and performance: Serverless services automatically scale resources based on workload demands, ensuring optimal performance without manual intervention. • High availability and disaster recovery: AWS provides high availability for serverless services. AWS manages data redundancy, replication, and disaster recovery to enhance data availability and reliability. Consider 8 Choosing an AWS analytics service AWS Decision Guide • Security and compliance: AWS manages security measures, data encryption, and compliance for serverless services, adhering to industry standards and best practices. • Monitoring and logging: AWS offers built-in monitoring, logging, and alerting capabilities for serverless services. Users can access detailed metrics and logs through Amazon CloudWatch. Type of workload When building a modern analytics pipeline, deciding on the types of workload to support is crucial to meet different analytical needs effectively. Key decision points to consider for each type of workload includes: Batch workload • Data volume and frequency: Batch processing is suitable for large volumes of data with periodic updates. • Data latency: Batch processing might introduce some delay in delivering insights compared to real-time processing. Interactive analysis • Data query complexity: Interactive analysis requires low-latency responses for quick feedback. • Data visualization: Evaluate the need for interactive data visualization tools to enable business users to explore data visually. Streaming workloads • Data velocity and volume: Streaming workloads require real-time processing to handle high- velocity data. • Data windowing: Define data windowing and time-based aggregations for streaming data to extract relevant insights. Type of analysis needed Clearly define the business objectives and the insights you aim to derive from the analytics. Different types of analytics serve different purposes. For example: Consider 9 Choosing an AWS analytics service AWS Decision Guide • Descriptive analytics is ideal for gaining a historical overview • Diagnostic analytics helps understand the reasons behind past events • Predictive analytics forecasts future outcomes • Prescriptive analytics provides recommendations for optimal actions Match your business goals with the relevant types of analytics. Here are some key decision criteria to help you choose the right types of analytics: • Data availability and quality: Descriptive and diagnostic analytics rely on historical data, while predictive and prescriptive analytics require sufficient historical data and high-quality data to build accurate models. • Data volume and complexity: Predictive and prescriptive analytics require substantial data processing and computational resources. Ensure that your infrastructure and tools can handle the data volume and complexity. • Decision complexity: If decisions involve multiple variables, constraints, and objectives, prescriptive analytics may be more suitable to guide optimal actions. • Risk tolerance: Prescriptive analytics may provide recommendations, but come with associated uncertainties. Ensure that decision-makers understand |
analytics-on-aws-how-to-choose-005 | analytics-on-aws-how-to-choose.pdf | 5 | right types of analytics: • Data availability and quality: Descriptive and diagnostic analytics rely on historical data, while predictive and prescriptive analytics require sufficient historical data and high-quality data to build accurate models. • Data volume and complexity: Predictive and prescriptive analytics require substantial data processing and computational resources. Ensure that your infrastructure and tools can handle the data volume and complexity. • Decision complexity: If decisions involve multiple variables, constraints, and objectives, prescriptive analytics may be more suitable to guide optimal actions. • Risk tolerance: Prescriptive analytics may provide recommendations, but come with associated uncertainties. Ensure that decision-makers understand the risks associated with the analytics outputs. Evaluate scalability and performance Assess the scalability and performance needs of the architecture. The design must handle increasing data volumes, user demands, and analytical workloads. Key decision factors to consider includes: • Data volume and growth: Assess the current data volume and anticipate future growth. • Data velocity and real-time requirements: Determine if the data needs to be processed and analyzed in real-time or near real-time. • Data processing complexity: Analyze the complexity of your data processing and analysis tasks. For computationally intensive tasks, services such as Amazon EMR provide a scalable and managed environment for big data processing. • Concurrency and user load: Consider the number of concurrent users and the level of user load on the system. Consider 10 Choosing an AWS analytics service AWS Decision Guide • Auto-scaling capabilities: Consider services that offer auto-scaling capabilities, allowing resources to automatically scale up or down based on demand. This ensures efficient resource utilization and cost optimization. • Geographic distribution: Consider services with global replication and low-latency data access if your data architecture needs to be distributed across multiple regions or locations. • Cost-performance trade-off: Balance the performance needs with cost considerations. Services with high performance may come at a higher cost. • Service-level agreements (SLAs): Check the SLAs provided by AWS services to ensure they meet your scalability and performance expectations. Data governance Data governance is the set of processes, policies, and controls you need to implement to ensure effective management, quality, security, and compliance of your data assets. Key decision points to consider includes: • Data retention policies: Define data retention policies based on regulatory requirements and business needs and establish processes for secure data disposal when it is no longer needed. • Audit trail and logging: Decide on the logging and auditing mechanisms to monitor data access and usage. Implement comprehensive audit trails to track data changes, access attempts, and user activities for compliance and security monitoring. • Compliance requirements: Understand the industry-specific and geographic data compliance regulations that apply to your organization. Ensure that the data architecture aligns with these regulations and guidelines. • Data classification: Classify data based on its sensitivity and define appropriate security controls for each data class. • Disaster recovery and business continuity: Plan for disaster recovery and business continuity to ensure data availability and resilience in case of unexpected events or system failures. • Third-party data sharing: If sharing data with third-party entities, implement secure data sharing protocols and agreements to protect data confidentiality and prevent data misuse. Consider 11 Choosing an AWS analytics service Security AWS Decision Guide The security of data in the analytics pipeline involves protecting data at every stage of the pipeline to ensure its confidentiality, integrity, and availability. Key decision points to consider includes: • Access control and authorization: Implement robust authentication and authorization protocols to ensure that only authorized users can access specific data resources. • Data encryption: Choose appropriate encryption methods for data stored in databases, data lakes, and during data movement between different components of the architecture. • Data masking and anonymization: Consider the need for data masking or anonymization to protect sensitive data, such as PII or sensitive business data, while allowing certain analytical processes to continue. • Secure data integration: Establish secure data integration practices to ensure that data flows securely between different components of the architecture, avoiding data leaks or unauthorized access during data movement. • Network isolation: Consider services that support Amazon VPC Endpoints to avoid exposing resources to the public internet. Plan for integration and data flows Define the integration points and data flows between various components of the analytics pipeline to ensure seamless data flow and interoperability. Key decision points to consider includes: • Data source integration: Identify the data sources from which data will be collected, such as databases, applications, files, or external APIs. Decide on the data ingestion methods (batch, real-time, event-based) to bring data into the pipeline efficiently and with minimal latency. • Data transformation: Determine the transformations required to prepare data for analysis. Decide on the tools and processes to clean, aggregate, normalize, or enrich the data as it moves through the pipeline. • Data movement |
analytics-on-aws-how-to-choose-006 | analytics-on-aws-how-to-choose.pdf | 6 | and data flows between various components of the analytics pipeline to ensure seamless data flow and interoperability. Key decision points to consider includes: • Data source integration: Identify the data sources from which data will be collected, such as databases, applications, files, or external APIs. Decide on the data ingestion methods (batch, real-time, event-based) to bring data into the pipeline efficiently and with minimal latency. • Data transformation: Determine the transformations required to prepare data for analysis. Decide on the tools and processes to clean, aggregate, normalize, or enrich the data as it moves through the pipeline. • Data movement architecture: Choose the appropriate architecture for data movement between pipeline components. Consider batch processing, stream processing, or a combination of both based on the real-time requirements and data volume. Consider 12 Choosing an AWS analytics service AWS Decision Guide • Data replication and sync: Decide on data replication and synchronization mechanisms to keep data up-to-date across all components. Consider real-time replication solutions or periodic data syncs depending on data freshness requirements. • Data quality and validation: Implement data quality checks and validation steps to ensure the integrity of data as it moves through the pipeline. Decide on the actions to be taken when data fails validation, such as alerting or error handling. • Data security and encryption: Determine how data will be secured during transit and at rest. Decide on the encryption methods to protect sensitive data throughout the pipeline, considering the level of security required based on data sensitivity. • Scalability and resilience: Ensure that the data flow design allows for horizontal scalability and can handle increased data volumes and traffic. Architect for cost optimization Building your analytics pipeline on AWS provides various cost optimization opportunities. To ensure cost efficiency, consider the following strategies: • Resource sizing and selection: Right-size your resources based on actual workload requirements. Choose AWS services and instance types that match the workloads performance needs while avoiding overprovisioning. • Auto-scaling: Implement auto-scaling for services that experience varying workloads. Auto- scaling dynamically adjusts the number of instances based on demand, reducing costs during low-traffic periods. • Spot Instances: Use Amazon EC2 Spot Instances for non-critical and fault-tolerant workloads. Spot Instances can significantly reduce costs compared to on-demand instances. • Reserved instances: Consider purchasing AWS Reserved Instances to achieve significant cost savings over on-demand pricing for stable workloads with predictable usage. • Data storage tiering: Optimize data storage costs by using different storage classes based on data access frequency. • Data lifecycle policies: Establish data lifecycle policies to automatically move or delete data based on its age and usage patterns. This helps manage storage costs and keeps data storage aligned with its value. Consider 13 Choosing an AWS analytics service AWS Decision Guide Choose AWS analytics services Now that you know the criteria to evaluate your analytics needs, you are ready to choose which AWS analytics services are right for your organizational needs. The following table aligns sets of services with common capabilities and business goals. Categories What is it optimized for? Services Unified analytics and AI Analytics and AI developme nt Amazon SageMaker Unified Studio (preview) Optimized for using a single environment to access data, analytics, and AI capabilities. Data processing Interactive analytics Amazon Athena Optimized for performing real-time data analysis and exploration, which allows users to interactively query and visualize data. Big data processing Amazon EMR Optimized for processing, moving, and transforming large amounts of data. Data catalog AWS Glue Optimized for providing detailed information about the available data, its structure, characteristics, and relationships. Data streaming Apache Kafka processing of streaming data Amazon MSK Choose 14 Choosing an AWS analytics service AWS Decision Guide Categories What is it optimized for? Services Optimized for using Apache Kafka data-plane operation s and running open source versions of Apache Kafka. Real-time processing Amazon Kinesis Data Streams Optimized for rapid and continuous data intake and aggregation, including IT infrastructure log data, application logs, social media, market data feeds, and web clickstream data. Real-time streaming data delivery Amazon Data Firehose Optimized for delivering real-time streaming data to destinations such as Amazon S3, Amazon Redshift, OpenSearch Service, Splunk, Apache Iceberg Tables, and any custom HTTP endpoint or HTTP endpoints owned by supported third-party service providers. Building Apache Flink applications Amazon Managed Service for Apache Flink Optimized for using Java, Scala, Python, or SQL to process and analyze streaming data. Choose 15 Choosing an AWS analytics service AWS Decision Guide Categories What is it optimized for? Services Business intelligence Dashboards and visualiza tions QuickSight Search analytics Optimized for visually representing complex datasets, and providing natural language query of your data. Managed OpenSearch clusters Optimized for log analytics , real-time application monitoring, and clickstream analysis. Amazon OpenSearch Service Data governance Managing data access Amazon DataZone Optimized for setting up the proper management, availabil |
analytics-on-aws-how-to-choose-007 | analytics-on-aws-how-to-choose.pdf | 7 | supported third-party service providers. Building Apache Flink applications Amazon Managed Service for Apache Flink Optimized for using Java, Scala, Python, or SQL to process and analyze streaming data. Choose 15 Choosing an AWS analytics service AWS Decision Guide Categories What is it optimized for? Services Business intelligence Dashboards and visualiza tions QuickSight Search analytics Optimized for visually representing complex datasets, and providing natural language query of your data. Managed OpenSearch clusters Optimized for log analytics , real-time application monitoring, and clickstream analysis. Amazon OpenSearch Service Data governance Managing data access Amazon DataZone Optimized for setting up the proper management, availabil ity, usability, integrity, and security of data throughout its lifecycle. Data collaboration Secure data clean rooms AWS Clean Rooms Optimized for collaborating with other companies without sharing raw underlying data. Choose 16 Choosing an AWS analytics service AWS Decision Guide Categories What is it optimized for? Services Data lake and warehouse Integrated data lake and data warehouse access Amazon SageMaker Lakehouse Optimized for unifying your data across Amazon S3 data lakes and Amazon Redshift data warehouses. Object storage for data lakes Amazon S3 Optimized for providing a data lake foundation with virtually unlimited scalability and high durability. Data warehousing Amazon Redshift Optimized for centrally storing, organizing, and retrieving large volumes of structured and sometimes semi-structured data from various sources within an organization. Use AWS analytics services You should now have a clear understanding of your business objectives, and the volume and velocity of data you will be ingesting and analyzing to begin building your data pipelines. To explore how to use and learn more about each of the available services—we have provided a pathway to explore how each of the services work. The following sections provides links to in- depth documentation, hands-on tutorials, and resources to get you started from basic usage to more advanced deep dives. Use 17 Choosing an AWS analytics service Amazon Athena • Getting started with Amazon Athena AWS Decision Guide Learn how to use Amazon Athena to query data and create a table based on sample data stored in Amazon S3, query the table, and check the results of the query. Get started with the tutorial • Get started with Apache Spark on Athena Use the simplified notebook experience in Athena console to develop Apache Spark applications using Python or Athena notebook APIs. Get started with the tutorial • Catalog and govern Athena federated queries with SageMaker Lakehouse Learn how to connect to, govern, and run federated queries on data stored in Amazon Redshift, DynamoDB (Preview), and Snowflake (Preview). Read the blog • Analyzing data in Amazon S3 using Athena Explore how to use Athena on logs from Elastic Load Balancers, generated as text files in a pre-defined format. We show you how to create a table, partition the data in a format used by Athena, convert it to Parquet, and compare query performance. Read the blog post AWS Clean Rooms • Setting up AWS Clean Rooms Learn how to set up AWS Clean Rooms in your AWS acccount. Read the guide • Unlock data insights across multi-party datasets using AWS Entity Resolution on AWS Clean Rooms without sharing underlying data Learn how to use preparation and matching to help improve data matching with collaborators. Use 18 Choosing an AWS analytics service Read the blog post AWS Decision Guide • How differential privacy helps unlock insights without revealing data at the individual- level Learn how AWS Clean Rooms Differential Privacy simplifies applying differential privacy and helps protect the privacy of your users. Read the blog Amazon Data Firehose • Tutorial: Create a Firehose stream from console Learn how to use the AWS Management Console or an AWS SDK to create a Firehose stream to your chosen destination. Read the guide • Send data to a Firehose stream Learn how to use different data sources to send data to your Firehose stream. Read the guide • Transform source data in Firehose Learn how to invoke your Lambda function to transform incoming source data and deliver the transformed data to destinations. Read the guide Amazon DataZone • Getting started with Amazon DataZone Learn how to create the Amazon DataZone root domain, obtain the data portal URL, walk through the basic Amazon DataZone workflows for data producers and data consumers. Get started with the tutorial Use 19 Choosing an AWS analytics service AWS Decision Guide • Announcing the general availability of data lineage in the next generation of Amazon SageMaker and Amazon DataZone Learn how Amazon DataZone uses automated lineage capture to focus on automatically collecting and mapping lineage information from AWS Glue and Amazon Redshift. Get started with the tutorial Amazon EMR • Getting started with Amazon EMR Learn how to launch a sample cluster using Spark, and how to run a simple PySpark script stored in |
analytics-on-aws-how-to-choose-008 | analytics-on-aws-how-to-choose.pdf | 8 | through the basic Amazon DataZone workflows for data producers and data consumers. Get started with the tutorial Use 19 Choosing an AWS analytics service AWS Decision Guide • Announcing the general availability of data lineage in the next generation of Amazon SageMaker and Amazon DataZone Learn how Amazon DataZone uses automated lineage capture to focus on automatically collecting and mapping lineage information from AWS Glue and Amazon Redshift. Get started with the tutorial Amazon EMR • Getting started with Amazon EMR Learn how to launch a sample cluster using Spark, and how to run a simple PySpark script stored in an Amazon S3 bucket. Get started with the tutorial • Getting started with Amazon EMR on Amazon EKS We show you how to get started using Amazon EMR on Amazon EKS by deploying a Spark application on a virtual cluster. Explore the guide • Get started with EMR Serverless Explore how Amazon EMR Serverless provides a serverless runtime environment that simplifies the operation of analytics applications that use the latest open source frameworks. Get started with the tutorial AWS Glue • Getting started with AWS Glue DataBrew Learn how to create your first DataBrew project. You load a sample dataset, run transformations on that dataset, build a recipe to capture those transformations, and run a job to write the transformed data to Amazon S3. Get started with the tutorial • Transform data with AWS Glue DataBrew Use 20 Choosing an AWS analytics service AWS Decision Guide Learn about AWS Glue DataBrew, a visual data preparation tool that makes it easy for data analysts and data scientists to clean and normalize data to prepare it for analytics and machine learning. Learn how to construct an ETL process using AWS Glue DataBrew. Get started with the lab • AWS Glue DataBrew immersion day Explore how to use AWS Glue DataBrew to clean and normalize data for analytics and machine learning. Get started with the workshop • Getting started with the AWS Glue Data Catalog Learn how to create your first AWS Glue Data Catalog, which uses an Amazon S3 bucket as your data source. Get started with the tutorial • Data catalog and crawlers in AWS Glue Discover how you can use the information in the Data Catalog to create and monitor your ETL jobs. Explore the guide Amazon Kinesis Data Streams • Getting started tutorials for Amazon Kinesis Data Streams Learn how to process and analyze real-time stock data. Get started with the tutorials • Architectural patterns for real-time analytics using Amazon Kinesis Data Streams, part 1 Learn about common architectural patterns of two use cases: time series data analysis and event driven microservices. Read the blog • Architectural Patterns for real-time analytics using Amazon Kinesis Data Streams, part 2 Use 21 Choosing an AWS analytics service AWS Decision Guide Learn about AI applications with Kinesis Data Streams in three scenarios: real-time generative business intelligence, real-time recommendation systems, and Internet of Things data streaming and inferencing. Read the blog Amazon Managed Service for Apache Flink • What is Amazon Managed Service for Apache Flink? Understand the fundamental concepts of Amazon Managed Service for Apache Flink. Explore the guide • Amazon Managed Service for Apache Flink Workshop In this workshop, you will learn how to deploy, operate, and scale a Flink application with Amazon Managed Service for Apache Flink. Attend the virtual workshop Amazon MSK • Getting Started with Amazon MSK Learn how to create an Amazon MSK cluster, produce and consume data, and monitor the health of your cluster using metrics. Get started with the guide • Amazon MSK Workshop Go deep with this hands-on Amazon MSK workshop. Get started with the workshop OpenSearch Service • Getting started with OpenSearch Service Learn how to use Amazon OpenSearch Service to create and configure a test domain. Use 22 Choosing an AWS analytics service AWS Decision Guide Get started with the tutorial • Visualizing customer support calls with OpenSearch Service and OpenSearch Dashboards Discover a full walkthrough of the following situation: a business receives some number of customer support calls and wants to analyze them. What is the subject of each call? How many were positive? How many were negative? How can managers search or review the the transcripts of these calls? Get started with the tutorial • Getting started with Amazon OpenSearch Serverless workshop Learn how to set up a new Amazon OpenSearch Serverless domain in the AWS console. Explore the different types of search queries available, and design eye-catching visualizations, and learn how you can secure your domain and documents based on assigned user privileges. Get started with the workshop • Cost Optimized Vector Database: Introduction to Amazon OpenSearch Service quantization techniques Learn how OpenSearch Service supports scalar and product quantization techniques to optimize memory usage and reduce operational costs. Read |
analytics-on-aws-how-to-choose-009 | analytics-on-aws-how-to-choose.pdf | 9 | search or review the the transcripts of these calls? Get started with the tutorial • Getting started with Amazon OpenSearch Serverless workshop Learn how to set up a new Amazon OpenSearch Serverless domain in the AWS console. Explore the different types of search queries available, and design eye-catching visualizations, and learn how you can secure your domain and documents based on assigned user privileges. Get started with the workshop • Cost Optimized Vector Database: Introduction to Amazon OpenSearch Service quantization techniques Learn how OpenSearch Service supports scalar and product quantization techniques to optimize memory usage and reduce operational costs. Read the blog post QuickSight • Getting started with QuickSight data analysis Learn how to create your first analysis. Use sample data to create either a simple or a more advanced analysis. Or you can connect to your own data to create an analysis. Explore the guide • Visualizing with QuickSight Discover the technical side of business intelligence (BI) and data visualization with AWS. Learn how to embed dashboards into applications and websites, and securely manage access and permissions. Use 23 Choosing an AWS analytics service AWS Decision Guide Get started with the course • QuickSight workshops Get a head start on your QuickSight journey with workshops Get started with the workshops Amazon Redshift • Getting started with Amazon Redshift Serverless Understand the basic flow of Amazon Redshift Serverless to create serverless resources, connect to Amazon Redshift Serverless, load sample data, and then run queries on the data. Explore the guide • Deploy a data warehouse on AWS Learn how to create and configure an Amazon Redshift data warehouse, load sample data, and analyze it using a SQL client. Get started with the tutorial • Amazon Redshift deep dive workshop Explore a series of exercises which help users get started using the Amazon Redshift platform. Get started with the workshop Amazon S3 • Getting started with Amazon S3 Learn how to create your first DataBrew project. You load a sample dataset, run transformations on that dataset, build a recipe to capture those transformations, and run a job to write the transformed data to Amazon S3. Get started with the guide • Central storage - Amazon S3 as the data lake storage platform Use 24 Choosing an AWS analytics service AWS Decision Guide Discover how Amazon S3 is an optimal foundation for a data lake because of its virtually unlimited scalability and high durability. Read the whitepaper SageMaker Lakehouse • Getting started with SageMaker Lakehouse Learn how to create a project and to browse, upload, and query data. Read the guide • Simplify data access for your enterprise using SageMaker Lakehouse Learn how to use preferred analytics, machine learning, and business intelligence engines through an open, Apache Iceberg REST API to help ensure secure access to data with consistent, fine-grained access controls. Read the blog • Catalog and govern Athena federated queries with SageMaker Lakehouse Learn how to connect to, govern, and run federated queries on data stored in Amazon Redshift, DynamoDB, and Snowflake. Read the blog SageMaker Unified Studio • Getting started with SageMaker Unified Studio Learn how to create a project, add members, and use the sample JupyterLab notebook to begin building. Read the guide • Introducing the next generation of Amazon SageMaker: The center for all your data, analytics, and AI Learn how to get started with data processing, model development, and generative AI app development. Use 25 Choosing an AWS analytics service Read the blog • What is Amazon SageMaker Unified Studio? AWS Decision Guide Learn about the capabilities of SageMaker Unified Studio and how to access them. Read the blog Explore ways to use AWS analytics services Editable architecture diagrams Reference architecture diagrams Explore architecture diagrams to help you develop, scale, and test your analytics solutions on AWS. Explore analytics reference architectures Ready-to-use code Featured solution AWS Solutions Scalable Analytics Using Explore pre-configured, Apache Druid on AWS deployable solutions and their implementation guides, Deploy AWS-built code to help you set up, operate, and built by AWS. manage Apache Druid on Explore all AWS security, AWS, a cost-effective, highly identity, and governance available, resilient, and fault solutions tolerant hosting environme nt. Explore this solution Documentation Analytics whitepapers AWS Big Data Blog Explore 26 Choosing an AWS analytics service AWS Decision Guide Explore whitepapers for further insights and Explore blog posts that address specific big best practices on choosing, implementing, and using the analytics services that best fit data use cases. your organization. Explore analytics whitepapers Explore the AWS Big Data blog Explore 27 Choosing an AWS analytics service AWS Decision Guide Document history The following table describes the important changes to this decision guide. For notifications about updates to this guide, you can subscribe to an RSS feed. Change Description Date re:Invent updates February 20, 2025 Added SageMaker |
analytics-on-aws-how-to-choose-010 | analytics-on-aws-how-to-choose.pdf | 10 | Big Data Blog Explore 26 Choosing an AWS analytics service AWS Decision Guide Explore whitepapers for further insights and Explore blog posts that address specific big best practices on choosing, implementing, and using the analytics services that best fit data use cases. your organization. Explore analytics whitepapers Explore the AWS Big Data blog Explore 27 Choosing an AWS analytics service AWS Decision Guide Document history The following table describes the important changes to this decision guide. For notifications about updates to this guide, you can subscribe to an RSS feed. Change Description Date re:Invent updates February 20, 2025 Added SageMaker AI Unified Studio and AWS Clean Rooms. Updated document throughout with new AI features and capabilities. Initial publication Guide first published. November 17, 2023 28 |
apc-bg-001 | apc-bg.pdf | 1 | Builder Guide AWS Partner Central Copyright © 2025 Amazon Web Services, Inc. and/or its affiliates. All rights reserved. AWS Partner Central Builder Guide AWS Partner Central: Builder Guide Copyright © 2025 Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon. AWS Partner Central Table of Contents Builder Guide What is a solution? .......................................................................................................................... 1 Creating a solution .......................................................................................................................... 2 Managing your solution .................................................................................................................. 3 Publication to AWS Solution Finder ......................................................................................................... 3 Removing a solution from AWS Partner Solution Finder ..................................................................... 4 AWS Foundational Technical Review (FTR) ............................................................................................. 4 Listing software solutions on AWS Marketplace .................................................................................... 5 AWS Marketplace software product listing and linking support ........................................................ 6 Document history ............................................................................................................................ 8 iii AWS Partner Central Builder Guide What is a solution? A solution is any product, service, or practice you offer to solve a customer business need. When you create a solution on AWS Partner Central, you provide details that help us understand what you bring to market. We can offer tailored support to develop and increase the discoverability of your solutions and engage with AWS customers and AWS sales teams. In addition, as an AWS Partner, you can receive benefits when you submit your solutions for AWS validation. For more information, sign in to AWS Partner Central and refer to the Partner solutions overview guide. 1 AWS Partner Central Builder Guide Creating a solution The first step to gain visibility and engagement with AWS customers and sales teams is to create your solutions on AWS Partner Central. When creating a solution, provide as much detail as you can to improve your discoverability on the internal AWS Partner directory and public AWS Partner Solution Finder. Your listing in these directories can help generate AWS customer leads and opportunities sourced from qualified AWS sellers. Leads and opportunities appear to you in the AWS Customer Engagement (ACE) Pipeline Manager in Partner Central. To create a solution 1. Sign in to AWS Partner Central. 2. Choose Build, Solutions. 3. Choose Create. 4. Complete the required solution details and contact information fields. 5. (Optional) For Solution URL, enter a link to your AWS branded microsite that explains your AWS practice. Although optional to create a solution, a microsite is required for validation in an AWS Specialization Program. 6. (Optional) Increase the discoverability of your solution by adding marketing, sales, and case study details. 7. On the Review and submit page, choose Create solution. 2 AWS Partner Central Builder Guide Managing your solution AWS Partner Central you can manage your offering by publishing it to your AWS Partner Solution Finder listing, requesting an AWS Foundational Technical Review (FTR), and linking Software Product offerings to an AWS Marketplace listing. Topics • Publication to AWS Solution Finder • Removing a solution from AWS Partner Solution Finder • AWS Foundational Technical Review (FTR) • Listing software solutions on AWS Marketplace • AWS Marketplace software product listing and linking support Publication to AWS Solution Finder Your solution is published to the AWS Solution Finder automatically after the following prerequisites are met: Solution type Prerequisites Software Product, Managed Service, Consultin g Service, Professional Service Solution must meet one of the following prerequisites: Hardware Product, Communications Product, Value-Added Resale AWS Service, Training Service, Distribution Service • The solution is validated by an AWS Foundation Technical Review (FTR). Refer to AWS Foundational Technical Review (FTR). • The solution is associated with a confirmed designation application. Solution must meet both of the following prerequisites: • The solution is associated with a confirmed designation application. Publication to AWS Solution Finder 3 AWS Partner Central Solution type Builder Guide Prerequisites • The solution is approved by the AWS Partner Network team. Removing a solution from AWS Partner Solution Finder To remove a solution from your AWS Partner Solution Finder listing, mark it inactive in AWS Partner Central. In AWS Partner Central you can do this on the home page or the Solution details page. To mark a solution inactive on the... Do this... The AWS Partner Central home page. 1. Sign in to AWS Partner Central. 2. Choose Build, Solutions from the navigation bar. 3. Choose the solution you want to remove. 4. Choose Inactive. The AWS Partner Central Solution details page. 1. On the Solution details page, choose the solution you want to remove. 2. Choose Update Visibility. 3. Choose Inactive. AWS Foundational Technical Review (FTR) You can obtain an FTR to validate each of your |
apc-bg-002 | apc-bg.pdf | 2 | In AWS Partner Central you can do this on the home page or the Solution details page. To mark a solution inactive on the... Do this... The AWS Partner Central home page. 1. Sign in to AWS Partner Central. 2. Choose Build, Solutions from the navigation bar. 3. Choose the solution you want to remove. 4. Choose Inactive. The AWS Partner Central Solution details page. 1. On the Solution details page, choose the solution you want to remove. 2. Choose Update Visibility. 3. Choose Inactive. AWS Foundational Technical Review (FTR) You can obtain an FTR to validate each of your submitted and active Software Product, Managed Service, Consulting Service, or Professional Service solutions. An FTR helps you identify and mitigate technical risks. Solutions with FTR validation are published automatically to the AWS Solution Finder. For more information, refer to AWS Foundational Technical Review To request an FTR 1. Sign in to AWS Partner Central. 2. Choose Build, Solutions from the navigation bar. Removing a solution from AWS Partner Solution Finder 4 AWS Partner Central Builder Guide 3. Choose the solution you want to submit. 4. Choose the Validation tab. 5. Download and review the AWS Foundational Technical Review Guide for Software Offerings or Service Offerings and FTR checklist for your solution type. 6. Complete the self-assessment checklist. 7. Upload the following files. Files may not exceed 3MB. • Self-assessment checklist. • Architecture diagram(s). • Other required or supplemental documentation relevant to your solution. • Case studies that demonstrate customer success specific to the solution. 8. Choose Request Foundational Technical Review. Listing software solutions on AWS Marketplace You can create a product listing for your software solution on AWS Marketplace. You can also link your solution to an existing product listing. The following sets of steps explain how to complete both tasks. Note To link your software solution to an AWS Marketplace product listing, you must first link your AWS Partner Central account to an AWS Marketplace account. For more information, refer to Linking AWS Partner Central accounts and AWS accounts in the AWS Partner Central Getting Started Guide. To create a product listing 1. Sign in to AWS Partner Central. 2. Choose Build, Solutions from the navigation bar. 3. Create an offering or choose an existing solution. 4. On the Solution details page, choose the AWS Marketplace products tab. 5. Choose Create new. 6. Enter product details and choose a product type (AMI, SaaS, Container, or Server). Listing software solutions on AWS Marketplace 5 AWS Partner Central 7. Enter a product title. 8. Choose Create product and connect. Builder Guide 9. Choose Continue to complete the listing on the AWS Marketplace Management Portal (AMMP). Or, choose Exit to return to the AWS Marketplace products tab. To link to a product listing 1. Sign in to AWS Partner Central. 2. On the navigation bar, choose Build, then Solutions. 3. Create a solution or choose an existing solution. 4. From the solutions details view, choose the AWS Marketplace products tab. Note To unlink a product for your solution in AWS Partner Central, contact AWS Partner Central support. AWS Marketplace software product listing and linking support AWS Partner Central supports creating and linking AWS Marketplace software product listings for specific deployment and hosting options, as shown in the following table. Note You choose deployment and hosting options when you create a solution. You cannot change these options after creating a solution. To choose different deployment and hosting options for a solution, you can either create the solution again or contact contact AWS Partner Central support for help. Who is primarily deploying the software? Where the software primarily deployed? Supported AWS Marketplace software product type You Your AWS account SaaS AWS Marketplace software product listing and linking support 6 AWS Partner Central Builder Guide Who is primarily deploying the software? Where the software primarily deployed? Supported AWS Marketplace software product type Your customer Customer's AWS account Server (Amazon Machine Image (AMI) or container) You You On premise Not supported Customer's AWS account Not supported Your customer On premise Not supported Your customer Your customer Edge Edge Not supported Not supported AWS Marketplace software product listing and linking support 7 AWS Partner Central Builder Guide Document history for the AWS Partner Central Builder Guide The following table describes the documentation releases for AWS Partner Central Documentation. Change Description Date Second release First release Second release of the AWS Partner Central Builder Guide. June 25, 2024 First release of the AWS Partner Central Builder Guide. November 2, 2023 8 |
apc-crm-001 | apc-crm.pdf | 1 | CRM Guide AWS Partner Central Copyright © 2025 Amazon Web Services, Inc. and/or its affiliates. All rights reserved. AWS Partner Central CRM Guide AWS Partner Central: CRM Guide Copyright © 2025 Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon. AWS Partner Central Table of Contents CRM Guide AWS Partner CRM integration ........................................................................................................ 1 Options for partner CRM integration ....................................................................................................... 2 Options ........................................................................................................................................................... 2 Business flows ............................................................................................................................................... 4 What is a referral? .................................................................................................................................. 4 What is an AWS originated opportunity referral? ............................................................................ 4 What is a partner originated opportunity referral? ......................................................................... 5 Closing a referral ..................................................................................................................................... 6 Setting up ...................................................................................................................................................... 6 Prerequisites for CRM Integration ....................................................................................................... 6 Who’s involved in setting up the Integration? .................................................................................. 8 AWS concepts involved in the Integration ......................................................................................... 8 Getting started ........................................................................................................................................... 12 Onboarding process ............................................................................................................................. 12 Stage 1: Onboarding prerequisites ................................................................................................... 12 Stage 2: Request submission .............................................................................................................. 16 Stage 3: Sandbox setup ...................................................................................................................... 20 Stage 4: Implementation .................................................................................................................... 22 Stage 5: Testing .................................................................................................................................... 23 Stage 6: Production implementation ............................................................................................... 28 Stage 7: Launch .................................................................................................................................... 33 Glossary ........................................................................................................................................................ 34 Glossary ................................................................................................................................................... 34 Data security ............................................................................................................................................... 35 Data security and compliance ............................................................................................................ 35 Maintenance ................................................................................................................................................ 38 Release cadence .................................................................................................................................... 38 Partner expectations ............................................................................................................................ 38 Recommended resource allocation ................................................................................................... 38 FAQ ................................................................................................................................................................ 39 Troubleshooting .................................................................................................................................... 39 AWS Partner CRM connector ........................................................................................................ 42 About ............................................................................................................................................................ 42 Introduction ................................................................................................................................................. 42 iii AWS Partner Central CRM Guide AWS Partner CRM connector application .............................................................................................. 42 Uninstalling the CRM connector package ........................................................................................ 43 Available features ....................................................................................................................................... 44 Partner Central API features .............................................................................................................. 44 ACE features ........................................................................................................................................... 45 AWS Marketplace features .................................................................................................................. 45 Release notes .............................................................................................................................................. 47 Version 3.8 (April 17, 2025) ............................................................................................................... 48 Version 3.6 (March 18, 2025) ............................................................................................................. 48 Version 3.5 (January 22, 2025) .......................................................................................................... 49 Version 3.1 (December 2, 2024) ........................................................................................................ 50 Version 3.0 (November 14, 2024) ..................................................................................................... 50 Version 2.2 (April 24, 2024) ............................................................................................................... 51 Version 2.1 (April 18, 2024) ............................................................................................................... 51 Version 2.0 (November 29, 2023) ..................................................................................................... 52 Version 1.7 (October 12, 2022) ......................................................................................................... 53 Version 1.6 (January 13, 2023) .......................................................................................................... 54 Version 1.5 (January 13, 2023) .......................................................................................................... 55 Version 1.4 (December 7, 2022) ........................................................................................................ 56 Upgrading to the Partner Central API ................................................................................................... 57 Upgrade features .................................................................................................................................. 57 Set up named credentials ................................................................................................................... 58 Add the Approval Status button to the Opportunity Lightning Record page .......................... 58 Add the remaining buttons ................................................................................................................ 59 Refresh the solutions on the Solution Offerings tab .................................................................... 59 Upgrading from previous versions ......................................................................................................... 59 Setting up real-time notifications .......................................................................................................... 62 Configuring a Salesforce connected app ......................................................................................... 62 Configuring AWS Components ........................................................................................................... 63 Creating AWS components manually ............................................................................................... 64 Example rules ........................................................................................................................................ 71 ACE integration ........................................................................................................................................... 73 Prerequisites ........................................................................................................................................... 74 Permissions sets .................................................................................................................................... 76 Guided setup .......................................................................................................................................... 78 ACE object mappings ........................................................................................................................... 85 iv AWS Partner Central CRM Guide Creating synchronization schedules .................................................................................................. 89 Sync logs and reports .......................................................................................................................... 91 Production checklist ............................................................................................................................. 96 Upgrading AWS Partner CRM connector to the new data model ............................................... 97 Sandbox testing with the custom ACE opportunity and ACE lead objects ................................ 99 AWS Marketplace integration ................................................................................................................ 101 Configuring baseline AWS permissions .......................................................................................... 101 Configuring Salesforce core components ...................................................................................... 104 Validating AWS Marketplace integration ....................................................................................... 110 Additional resources: AWS API calls for the AWS Marketplace integration ............................ 123 Getting help .............................................................................................................................................. 123 AWS Partner CRM connector FAQ ........................................................................................................ 124 General questions ............................................................................................................................... 124 Setup issues ......................................................................................................................................... 128 Mapping issues .................................................................................................................................... 130 Synchronization and validation issues ........................................................................................... 132 Custom integration using Amazon S3 ........................................................................................ 135 Integration resources .............................................................................................................................. 135 Field definitions .................................................................................................................................. 136 Standard values .................................................................................................................................. 136 Sample inbound files ......................................................................................................................... 136 Sample outbound files ...................................................................................................................... 136 Sample processed results ................................................................................................................. 137 Sample test cases ............................................................................................................................... 137 Sample code snippets ........................................................................................................................ 137 Implementing a custom integration .................................................................................................... 137 Lead sharing .............................................................................................................................................. 138 How AWS shares leads ...................................................................................................................... 138 Consuming leads from AWS ............................................................................................................. 138 Sharing updates on leads with AWS .............................................................................................. 139 Opportunity sharing ................................................................................................................................ 140 How AWS shares opportunities ....................................................................................................... 140 Consuming opportunities from AWS .............................................................................................. 141 Sharing updates to opportunities with AWS ................................................................................ 142 Field mapping ........................................................................................................................................... 143 Mandatory field mapping ................................................................................................................. 143 v AWS Partner Central CRM Guide Handling optional fields ................................................................................................................... 143 Value mapping .................................................................................................................................... 143 Data type and format validation .................................................................................................... 144 Field length and limitations ............................................................................................................. 144 Data type and format validation .................................................................................................... 144 Periodic review and update .............................................................................................................. 144 Field mapping documentation ........................................................................................................ 144 Testing and validation ....................................................................................................................... 144 Handling unwanted overwrites ....................................................................................................... 145 Managing downstream dependencies ............................................................................................ 145 Best practices ............................................................................................................................................ 145 |
apc-crm-002 | apc-crm.pdf | 2 | .............................................................................................. 139 Opportunity sharing ................................................................................................................................ 140 How AWS shares opportunities ....................................................................................................... 140 Consuming opportunities from AWS .............................................................................................. 141 Sharing updates to opportunities with AWS ................................................................................ 142 Field mapping ........................................................................................................................................... 143 Mandatory field mapping ................................................................................................................. 143 v AWS Partner Central CRM Guide Handling optional fields ................................................................................................................... 143 Value mapping .................................................................................................................................... 143 Data type and format validation .................................................................................................... 144 Field length and limitations ............................................................................................................. 144 Data type and format validation .................................................................................................... 144 Periodic review and update .............................................................................................................. 144 Field mapping documentation ........................................................................................................ 144 Testing and validation ....................................................................................................................... 144 Handling unwanted overwrites ....................................................................................................... 145 Managing downstream dependencies ............................................................................................ 145 Best practices ............................................................................................................................................ 145 General best practices ...................................................................................................................... 145 Data exchange protocols .................................................................................................................. 146 Field-specific best practices .............................................................................................................. 146 Additional best practices .................................................................................................................. 146 Quotas ........................................................................................................................................................ 147 Inbound file to Amazon Web Services (AWS) ............................................................................... 147 Outbound file to partner .................................................................................................................. 147 Version history .......................................................................................................................................... 147 FAQs ............................................................................................................................................................ 155 General FAQ ......................................................................................................................................... 155 Technical FAQ—fields ........................................................................................................................ 157 Technical FAQ—Amazon S3 ............................................................................................................. 159 Technical FAQ—leads and opportunities ....................................................................................... 160 Technical FAQ—versioning and backward compatibility ............................................................ 166 vi AWS Partner Central CRM Guide AWS Partner CRM integration This customer relationship management (CRM) integration for partners is designed to exchange referrals between Amazon Web Services (AWS) Partners and AWS. Participants in the AWS Partner Network (APN) Customer Engagements (ACE) program can scale operations without the need to allocate additional resources for managing coselling pipelines. Partners can also use this CRM integration to reduce manual maintenance of leads and opportunities across separate systems. This CRM integration provides the following advantages: 1. Unified lead and opportunity management: Leads and opportunities are located within the CRM integration, so it's unnecessary for sales teams to maintain identical information across systems. Scale sales engagements while managing leads and opportunities within one interface. 2. Automated coselling operations: Automate coselling operations using standardized rules and validations. This allows CRM administrators to set up notifications, reports, and other integrations. Build workflows to automatically match opportunities and control the quality of sales data at the source. 3. Simplified coselling workflows: Sales teams don't require training from Partner Central to oversee coselling deals. 1 AWS Partner Central CRM Guide Options for partner CRM integration The following are three options for integrating CRM with AWS: 1. AWS Partner integration: An AWS managed CRM package on Salesforce. Download it from Salesforce AppExchange. 2. Third-party integration: A customized integration offered by third-party service providers. 3. Custom integration: A customized integration using the AWS Partner Central API guide to build an integration that fits your requirements. Note Lead management is unavailable for custom integrations. To help partners set up the integration's infrastructure, AWS offers a self-service onboarding experience on AWS Partner Central. Options Using the CRM integration, partners can accept, send, and receive updates directly from AWS for new opportunities and leads. Depending on your requirements, choose one of three integration options, which are outlined in the following table: AWS Partner integration Descripti on AWS managed package at no additional cost, downloadable from Salesforce AppExchange Third-party integration Standard integration provided by third-party providers Custom integration Customized integrati on according to the AWS Partner Central API guide Resources Configuration and regular maintenance from a CRM administrator; low-to-medium Varies by third- party; may include direct 3–12 weeks for initial development (includin g project managemen support and/ t), followed by 2–3 Options for partner CRM integration 2 AWS Partner Central CRM Guide AWS Partner integration Third-party integration Custom integration development effort depending on or compatibi weeks each quarter level of required automation lity support between cloud for maintenance and upgrades providers Skill set Cloud administrator, Salesforc e administrator, and Salesforce Cloud administr ator, CRM Cloud administrator, CRM administrator, and developer administrator, project manager and project manager Maintenan ce Regular maintenance required but can be managed by the administr Relies on third-party Regular updates required; each upgrade ator with minimal developer provider for may require code or support enhancements configuration changes Cost No additional cost Customiza tion Limited to package capabilities and maintenan ce; partner is responsible for CRM administr ation Third-part subscription costs May require third-party support Development and maintenance costs Highly customizable Setup time Low Low High Support Limited support from AWS Third-party support AWS provides documentation and limited support Options 3 AWS Partner Central CRM Guide AWS Partner integration Additional features Outbound lead sharing, inbound and outbound opportunity sharing, job scheduling, and Third-party integration Possible multiclou d cosell Custom integration Highly customizable, outbound lead sharing, inbound and outbound automatic mapping features, future opportunity sharing enhanceme nts handled by provider, support and consulting services Table 1: Integration options Business flows Referrals can be categorized as either a lead or opportunity. What is a referral? The term referral serves as a general descriptor for both leads and opportunities. A lead refers to a contact that has expressed interest in an Amazon Web Services (AWS) |
apc-crm-003 | apc-crm.pdf | 3 | Partner Central CRM Guide AWS Partner integration Additional features Outbound lead sharing, inbound and outbound opportunity sharing, job scheduling, and Third-party integration Possible multiclou d cosell Custom integration Highly customizable, outbound lead sharing, inbound and outbound automatic mapping features, future opportunity sharing enhanceme nts handled by provider, support and consulting services Table 1: Integration options Business flows Referrals can be categorized as either a lead or opportunity. What is a referral? The term referral serves as a general descriptor for both leads and opportunities. A lead refers to a contact that has expressed interest in an Amazon Web Services (AWS) product or an AWS Partner solution. During the initial stages of the sales process, a sales representative assesses if the interested individual has the potential to become an AWS customer. This assessment and validation phase is referred to as qualification. If a lead is deemed qualified and is considered to have a higher probability of converting to a customer, it’s then classified as an opportunity. What is an AWS originated opportunity referral? A referral shared by AWS Sales with an partner for coselling is called an AWS originated (AO) opportunity referral. The AWS Sales team receives recommendations to attach a partner to an AWS sales opportunity based on multiple factors such as the quality of information in the solution listing, past opportunities, progress in the partnership journey, or past performance. When the AWS Sales team attaches a partner to an AWS sales opportunity, the opportunity is shared with the partner as a referral. The partner receives the referral with the customer contact Business flows 4 AWS Partner Central CRM Guide details masked (contact name, title, email, and phone). The referral contains AWS contact details, customer name, project title, use case, stage, description, and other details that the partner can use to decide if they want to pursue the referral. The partner must accept or reject the referral before the acceptBy date and time specified in the payload. The partner sends an Accepted or Rejected value for the partnerAcceptanceStatus field. If rejected, partners should provide a rejectionReason. While a partner accepts or rejects the AO referral, they shouldn’t update any other values in the referral. Every update on a referral (from the partner or AWS) can take up to one hour to sync with the CRM. After acceptance, AWS sends a new payload with the unmasked details of the customer contact. Partners should actively engage the opportunity and provide regular updates to AWS. What is a partner originated opportunity referral? A referral shared by an AWS Partner with AWS for coselling or visibility, is called a partner originated (PO) opportunity referral. The status of the referral is initially set to Submitted. By default, all PO opportunity referrals go through a validation (review) process. During this process the status of the opportunity is set to In-review and no updates are accepted to the opportunity until validation is completed. If the validation succeeds, the opportunity status is set to Approved, and partners can send updates to the opportunity. If the validation fails, the status of the opportunity is set to Action required, and the validator’s comments are shared as part of the apnReviewerComments field. In the Action Required state, the partner can only update a limited set of fields (refer to the field definitions for details). After the partner updates and resubmits the opportunity, it moves back to the Submitted state and the validation process starts again. When the validation passes, the referral is set to Approved, and partners and AWS can share regular updates about the opportunity. The validation process can take up to five business days. Note AWS doesn’t currently support the Partner Shares Lead with AWS scenario. Partners that receive a lead through an external source typically pursue it themselves. After the lead becomes a viable opportunity that meets validation criteria, partners can submit it to AWS as a partner originated opportunity referral. What is a partner originated opportunity referral? 5 AWS Partner Central Closing a referral CRM Guide When a partner closes a referral as Launched, they’re must attach an AWS account associated with the customer. If the referral is being closed as Closed Lost, partners must give a closedLostReason. For a referral that relates to a sale on AWS Marketplace, partners must attach an AWS Marketplace offer to the opportunity. Partners can check if an opportunity is marked as Launched or Closed Lost on the AWS CRM by using the field awsStage. Note The awsStage field is different from stage. The stage field is for sharing regular updates about a referral, while awsStage is a read-only field that indicates the current referral stage. Setting up To set up the CRM Integration with Amazon Web Services (AWS), regardless of your Integration path, you must have access to AWS owned |
apc-crm-004 | apc-crm.pdf | 4 | closedLostReason. For a referral that relates to a sale on AWS Marketplace, partners must attach an AWS Marketplace offer to the opportunity. Partners can check if an opportunity is marked as Launched or Closed Lost on the AWS CRM by using the field awsStage. Note The awsStage field is different from stage. The stage field is for sharing regular updates about a referral, while awsStage is a read-only field that indicates the current referral stage. Setting up To set up the CRM Integration with Amazon Web Services (AWS), regardless of your Integration path, you must have access to AWS owned Amazon Simple Storage Service (Amazon S3) buckets in APN Customer Engagements (ACE) for each environment. The bucket is an intermediary for bidirectional file transfers. The following sections can help you set up your CRM Integration with AWS. Prerequisites for CRM Integration Before you set up the CRM Integration, ensure you meet the following criteria: 1. You must be an ACE eligible partner. For more information, refer to the section called “FAQ”. 2. The partner alliance lead must complete the onboarding process described in this document. Other profiles will not have access to the CRM Integration onboarding experience. 3. The team implementing the Integration must be familiar with the ACE program and the coselling process. For more information, refer to the following resources on Partner Central: • ACE Opportunity Submission Quick Guide • ACE Validation Process • ACE Program FAQs • What is ACE Pipeline Manager? Closing a referral 6 AWS Partner Central CRM Guide Prerequisites for CRM Integration 7 AWS Partner Central CRM Guide Who’s involved in setting up the Integration? The following roles are essential in setting up the CRM Integration: 1. Partner alliance lead: Has permission to initiate a new Integration request through Partner Central. The partner alliance lead oversees the progress of the Integration and monitors the status from the CRM Integration page within Partner Central. 2. Program manager: Entrusted with driving the Integration process from the partner’s side. This person is able to define essential processes and necessary enablement post-integration. 3. Partner CRM administrator: Helps map fields between AWS and the partner’s CRM. If partners choose an Integration through the AWS Partner CRM connector, the administrator is critical to its setup. 4. Developers: For partners that choose the custom option, developers build and implement the custom Integration. 5. Partner cloud operations and IT team: Configures authentication credentials, such as AWS Identity and Access Management (IAM) user/role. This involves creating an AWS account and an AWS user for secure access. 6. AWS Partner development manager (PDM): The partner’s AWS point of contact. All communication with the AWS team is routed through the PDM. For more information, refer to the section called “FAQ”. 7. AWS Partner solutions architect (PSA): Works closely with the PDM to assist with any technical questions the partner has. 8. AWS CRM Integration support: Addresses technical support issues that partners raise through Support Center in Partner Central. AWS concepts involved in the Integration Environments and access The CRM Integration operates within two distinct environments: sandbox (also known as UAT or Beta) and production (also known as Prod). AWS creates an AWS owned Amazon S3 bucket within the AWS Partner Network (APN) for each environment. The sandbox S3 bucket connects to the APN sandbox environment, and the production S3 bucket connects to the APN production environment. To access each S3 bucket securely, you need to set up (or reuse) an AWS account for each environment. If you’re an independent software vendor (ISV), we recommend reusing your existing AWS Marketplace account. In the AWS account, you need to create an IAM user (for AWS Partner Who’s involved in setting up the Integration? 8 AWS Partner Central CRM Guide CRM connector) or IAM role (for third-party or custom Integration). The IAM role or user is used for provisioning access to the S3 bucket that AWS sets up for the partner. Partners have programmatic access to the AWS created buckets. During the onboarding process, AWS generates an access policy that you have to attach to these IAMs. You can create an AWS account and IAM user or role for each environment, at the beginning of the Integration onboarding process. However, AWS allows programmatic access to the production bucket only after you successfully test your solution’s sandbox environment (connector, third-party, or custom Integration). AWS concepts involved in the Integration 9 AWS Partner Central Amazon S3 buckets CRM Guide To access a bucket for each environment, refer to the section called “Stage 1: Onboarding prerequisites”. To ensure secure interaction with S3 files, AWS uses IAM policies for partner authentication. These policies rigorously control partner permissions for uploading and downloading S3 files. Each bucket created for partners follows the naming convention below. ace-apn-[partner-id]-[environment]-us-west-2 • partner-id: A numerical unique identifier assigned to |
apc-crm-005 | apc-crm.pdf | 5 | process. However, AWS allows programmatic access to the production bucket only after you successfully test your solution’s sandbox environment (connector, third-party, or custom Integration). AWS concepts involved in the Integration 9 AWS Partner Central Amazon S3 buckets CRM Guide To access a bucket for each environment, refer to the section called “Stage 1: Onboarding prerequisites”. To ensure secure interaction with S3 files, AWS uses IAM policies for partner authentication. These policies rigorously control partner permissions for uploading and downloading S3 files. Each bucket created for partners follows the naming convention below. ace-apn-[partner-id]-[environment]-us-west-2 • partner-id: A numerical unique identifier assigned to each partner in the AWS Partner Network, consisting of up to 10 digits. Partners can locate their Partner ID by navigating to: AWS Partner Central > My Company > Partner Scorecard > Partner ID. • environment: This field accepts two values: • beta: Indicates a bucket pointing to a sandbox environment. • prod: Indicates a bucket pointing to a production environment. Folder structure in Amazon S3 bucket AWS uses S3 buckets with different folders for the Integration, as shown in table 1. #Purpose Folder name Description 1Retrieve ACE leads lead-outb ound Contains new leads or updates existing leads. Partners have read and delete access to this folder. After a file is processed, delete it. Contains a file of new or updated opportunities. Partners have read and delete access to this folder. Contains files with new or updated opportunities. 2Retrieve ACE opportunities 3Send new or update existing ACE opportuni ties. opportuni ty-outbou nd opportuni ty-inbound AWS concepts involved in the Integration 10 AWS Partner Central CRM Guide #Purpose Folder name Description 4Send ACE updates about leads 5Retrieve results for opportunities sent to ACE 6Retrieve results for leads from ACE lead-inbo Contains files with updated leads. und opportuni ty-inboun d-process ed-results lead-inbo und-proce ssed-resu lts Contains files with the results of processed opportuni ties. Partners have read and delete access to this folder. Contains files with the results of processed leads. Partners have read and delete access to this folder. Table 1: Folder structure of S3 bucket Note Amazon S3 treats folders as objects that are only visible if they contain files. But partners can read and add files to folders even if a folder doesn’t appear. IAM AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. With IAM, you can centrally manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. For more information, refer to Introduction to AWS Identity and Access Management (IAM). Your access to the Amazon S3 buckets provisioned by AWS is managed through an IAM user/ role. Each IAM user/role is allow-listed for access to their respective bucket. To configure access, you need to create one IAM user/role for each environment, sandbox and production. For more information, refer to the section called “Getting started”. AWS concepts involved in the Integration 11 AWS Partner Central Getting started CRM Guide To initiate the Integration process, Amazon Web Services (AWS) provides a process for registration and tracking through Partner Central called CRM Integration onboarding. This functionality is only available to the partner alliance lead of an ACE-eligible partner. Onboarding process The onboarding process for the CRM Integration has the following steps. Regardless of the Integration option you choose, you must complete every step. the section called “Stage 1: Onboarding prerequisites”: Describes the prerequisites for the CRM Integration, including AWS account creation, AWS Identity and Access Management (IAM) setup, account linking, and IAM mapping for both sandbox and production environments. the section called “Stage 2: Request submission”: Describes the steps involved in submitting an onboarding request. the section called “Stage 3: Sandbox setup”: Describes how to set up a sandbox environment. the section called “Stage 4: Implementation”: Describes the step after AWS provisions the Amazon Simple Storage Service (Amazon S3) bucket used for testing the Integration. Partners implement the connector based on the chosen Integration option (AWS Partner CRM connector, custom Integration, or third-party solution). the section called “Stage 5: Testing”: Describes the steps involved in testing the Integration with different business flows. the section called “Stage 6: Production implementation”: Describes the steps involved in migrating the data (backfilling) and moving the Integration solution to the production environment. the section called “Stage 7: Launch”: Describes the steps leading up to the launch and post launch activities. Stage 1: Onboarding prerequisites Before you start the onboarding steps, ensure you have met the five prerequisites below. Regardless of the CRM Integration type, there are two mandatory prerequisites: Getting started 12 AWS Partner Central CRM Guide 1. the section called “Have an AWS account” 2. the section called “Set up an IAM principal” Regardless of the CRM |
apc-crm-006 | apc-crm.pdf | 6 | section called “Stage 6: Production implementation”: Describes the steps involved in migrating the data (backfilling) and moving the Integration solution to the production environment. the section called “Stage 7: Launch”: Describes the steps leading up to the launch and post launch activities. Stage 1: Onboarding prerequisites Before you start the onboarding steps, ensure you have met the five prerequisites below. Regardless of the CRM Integration type, there are two mandatory prerequisites: Getting started 12 AWS Partner Central CRM Guide 1. the section called “Have an AWS account” 2. the section called “Set up an IAM principal” Regardless of the CRM Integration type, there are three optional prerequisites: 1. the section called “Link AWS Marketplace to Partner Central” 2. the section called “Attaching a policy to an IAM role” 3. the section called “Mapping your IAM role for CRM Integration” Mandatory prerequisites Have an AWS account To get started, partners must have an AWS account in place. Partners may sign up for a free AWS account or use an existing one. For more information, refer to Sign up for AWS. We recommend having two separate AWS accounts for setting up the sandbox (testing) and production environments. Reach out to your Cloud Operations or IT department to set up an AWS account. For more information, refer to Create a standalone AWS account. For those who are AWS Marketplace sellers, we recommend using your AWS Marketplace account. Set up an IAM principal To work with the Amazon Simple Storage Service (Amazon S3) buckets AWS provides, partners need to use IAM to authenticate. Keep the names of the IAM principals handy because you need them when you submit your onboarding request. Additionally, you use a custom policy generated by AWS to attach to your IAM principals to access the Amazon S3 bucket. For more information, refer to What is IAM? • AWS Partner CRM connector users: Use an IAM user. • Custom or third-party solution users: Choose between an IAM user or role. We recommend an IAM role for this purpose. How to create an IAM user Creating an IAM user allows individuals to access AWS services. 1. Sign in to the AWS Management Console, and then navigate to the IAM console. 2. Choose Users, and then choose Create user. Stage 1: Onboarding prerequisites 13 AWS Partner Central CRM Guide 3. Enter the user name following this naming convention: apn-ace-{partner-name}- AccessUser-{prod|beta}. For example, for a production environment, a partner named AnyAuthority would use apn-ace-anyauthority-AccessUser-prod. For more information, refer to Creating an IAM user in your AWS account. How to create an IAM role An IAM role is a set of permissions that grant access to actions in AWS but is not tied to a specific individual. It can be assumed by anyone who needs it. The naming convention for an IAM role follows a similar pattern to the IAM user: apn-ace- {partner-name}-AccessRole-{environment}. For more information, refer to Creating IAM roles. Optional prerequisites Note Applicable only for partners who want to attach an AWS Marketplace offer to opportunities using the integration. Link AWS Marketplace to Partner Central AWS Partners with AWS Marketplace seller accounts can connect their accounts using the Account linking feature in AWS Partner Central. When you connect the AWS Partner Central account to an AWS Marketplace account and map user permissions across portals, it allows users to seamlessly access both accounts through single sign-on access, and enables offer-to-opportunity linking across platforms.. To enable account linking, it’s best practice to have user roles assigned in AWS Partner Central, including the cloud administrator role. If a cloud administrator role is unassigned, the alliance lead may assign themselves this role to link their AWS Partner Central and AWS Marketplace accounts. Follow these steps to link your AWS Partner Central account to an AWS account. 1. Sign in to AWS Partner Central with an Alliance Lead or Cloud Admin role. 2. Navigate to the Account Linking section on the homepage, and then choose Link Account. 3. On the Account Linking page, choose Link Account again. Stage 1: Onboarding prerequisites 14 AWS Partner Central CRM Guide 4. Choose IAM user, and then enter the AWS Account ID for your AWS account. 5. Choose Next, and then sign in to the AWS account. 6. Choose Allow to authorize the connection between your AWS Partner Central and AWS accounts. Attaching a policy to an IAM role 1. Verify that you completed the steps to link your AWS Partner Central account to an AWS Marketplace account. For more information, refer to the section called “How to create an IAM role”. 2. Create an IAM role in your AWS Marketplace account. For more information, refer to Controlling access to AWS Marketplace Management Portal. 3. Assign the following trust policy to the user: { "Statement": [ { "Effect": "Allow", "Action": [ "aws-marketplace:ListEntities", |
apc-crm-007 | apc-crm.pdf | 7 | in to the AWS account. 6. Choose Allow to authorize the connection between your AWS Partner Central and AWS accounts. Attaching a policy to an IAM role 1. Verify that you completed the steps to link your AWS Partner Central account to an AWS Marketplace account. For more information, refer to the section called “How to create an IAM role”. 2. Create an IAM role in your AWS Marketplace account. For more information, refer to Controlling access to AWS Marketplace Management Portal. 3. Assign the following trust policy to the user: { "Statement": [ { "Effect": "Allow", "Action": [ "aws-marketplace:ListEntities", "aws-marketplace:SearchAgreements" ], "Resource": "*" } ] } Alternately, partners can use an existing user in the account who has permissions to perform ListEntities and SearchAgreements actions. Mapping your IAM role for CRM Integration Partners who want to associate/disassociate AWS Marketplace private offers to APN Customer Engagements (ACE) opportunities need to map the IAM role that the CRM Integration can assume to call the Marketplace account. Before mapping the IAM user, partners need to have linked their AWS account to their Partner Central account. By choosing an IAM role, you allow the CRM Integration to access and interact with your AWS Marketplace using that role. Stage 1: Onboarding prerequisites 15 AWS Partner Central CRM Guide Follow these steps to map an IAM Marketplace role to a CRM Integration user. 1. Sign in to IAM Partner Central as a user with the Alliance Lead or Cloud Admin role. 2. In the Account linking section of the IAM Partner Central homepage, choose Manage Linked Account. 3. On the Account Linking page, in the IAM role for CRM Integration section, choose Map IAM role. 4. Choose an IAM role from the dropdown list that has permissions to perform ListEntities and SearchAgreements, at a minimum. Verify you have completed the steps to attach a trust policy to the Marketplace user. For more information, refer to the section called “Attaching a policy to an IAM role”. 5. Choose Map role. Stage 2: Request submission Request submission is a three-step process. You must complete the Onboarding Request form with essential details that include information about the partner’s CRM system, integration solution choice, estimated Integration start date, and more. To initiate the Integration process, Amazon Web Services (AWS) provides a process for registration and tracking through Partner Central called CRM Integration onboarding. This functionality is only available to the partner alliance lead of an ACE- eligible partner. Additionally, you must submit contact details for communication and notification purposes. The following table lists the form fields, their description, and attributes. Name Description Partner CRM system Name of the CRM software used for sales pipeline management TypeRequired/ optional validation Required Picklist Allowed field values Salesforc e, Hubspot, Microsoft Dynamics, Zoho, Other Partner CRM Name of the CRM software used for sales pipeline management, not listed above TextRequired when Other Stage 2: Request submission 16 CRM Guide Allowed field values AWS Partner CRM connector (for Salesforc e), custom Integration (in-house), third-party solution AWS Partner Central Name Description system name TypeRequired/ optional validation is selected for partner CRM system What solution Choose from the different options to integrate with Amazon Web Services (AWS): 1. AWS Required Picklist would Partner CRM connector—Free AWS managed you package available to download from Salesforc use to e AppExchange; 2. Custom solution (Amazon integrate CRM Simple Storage Service (Amazon S3) or coselling APIs); 3. Third-party solution—Third-party software as a service (SaaS) offering or assisted development of custom solution Company offering the solution or providing support for building and maintaining the Integration with APN? Name of third- party solution provider Estimated Integrati on start date Your start date should be based on the Integrati on resource readiness required to build the Integration. For AWS Partner CRM connector or third-party solution, enter the start date for when you plan to install and use the solution in a testing environment. TextRequired when Third Party Solution is selected for What solution would you be using to integrate your CRM with APN? DateRequired (must match MM/DD/ YYYY format and be a date within the next 90 days) Stage 2: Request submission 17 AWS Partner Central Name Description CRM Guide Allowed field values TypeRequired/ optional validation Monthly number An estimate of the number of leads or opportuni ties to be shared with AWS. Currently, the Optional (must Number be a number) of Integration requires support from AWS Engineeri records ng for testing the completed Integration. We use shared this estimate to prioritize outstanding requests. with AWS Additiona l comments Additional information to share with AWS TextOptional Complete the details, and then choose Next. On the next screen, enter partner contacts. The following table lists the partner contact form fields, their description, and attributes. Name Description Type Required/optional validation Primary contact The primary point |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.