system level. The patches that are installed may differ by operating system.

Supported operating systems

Important
All updates are downloaded from the Systems Manager patch baseline service remote repositories configured on the instance, and described later in this topic. The instance must be able to connect to the repositories so that the patching can be performed. To opt out of the patch baseline service for repositories that deliver packages that you want to maintain yourself, run the following command to disable the repository:

yum-config-manager --disable REPOSITORY_NAME

Retrieve the list of currently configured repositories with the following command:

yum repolist

• Amazon Linux preconfigured repositories (usually four):

Repository ID | Repository name
amzn-main/latest | amzn-main-Base
amzn-updates/latest | amzn-updates-Base
epel/x86_64 | Extra Packages for Enterprise Linux 6 - x86_64
pbis | PBIS Packages Updates

• Red Hat Enterprise Linux preconfigured repositories (five for Red Hat Enterprise Linux 7 and five for Red Hat Enterprise Linux 6):

Repository ID | Repository name
rhui-REGION-client-config-server-7/x86_64 | Red Hat Update Infrastructure 2.0 Client Configuration Server 7
rhui-REGION-rhel-server-releases/7Server/x86_64 | Red Hat Enterprise Linux Server 7 (RPMs)
rhui-REGION-rhel-server-rh-common/7Server/x86_64 | Red Hat Enterprise Linux Server 7 RH Common (RPMs)
epel/x86_64 | Extra Packages for Enterprise Linux 7 - x86_64
pbis | PBIS Packages Updates

Repository ID | Repository name
rhui-REGION-client-config-server-6 | Red Hat Update Infrastructure 2.0
rhui-REGION-rhel-server-releases | Red Hat Enterprise Linux Server 6 (RPMs)
rhui-REGION-rhel-server-rh-common | Red Hat Enterprise Linux Server 6 RH Common (RPMs)
epel | Extra Packages for Enterprise Linux 6 - x86_64
pbis | PBIS Packages Updates

• CentOS 7 preconfigured repositories (usually five):

Repository ID | Repository name
base/7/x86_64 | CentOS-7 - Base
updates/7/x86_64 | CentOS-7 - Updates
extras/7/x86_64 | CentOS-7 - Extras
epel/x86_64 | Extra Packages for Enterprise Linux 7 - x86_64
pbis | PBIS Packages Updates

• For Microsoft Windows Server, all updates are detected and installed using the Windows Update Agent, which is configured to use the Windows Update catalog (this doesn't include updates from Microsoft Update). On Microsoft Windows operating systems, Patch Manager uses Microsoft's cab file wsusscn2.cab as the source of available operating system security updates. This file contains information about the security-related updates that Microsoft publishes. Patch Manager downloads this file regularly from Microsoft and uses it to update the set of patches available for Windows instances. The file contains only updates that Microsoft identifies as being related to security. As the information in the file is processed, Patch Manager also removes updates that have been replaced by later updates, so only the most recent update is displayed and made available for installation. For example, if KB4012214 replaces KB3135456, only KB4012214 is made available as an update in Patch Manager. To read more about the wsusscn2.cab file, see the Microsoft article Using WUA to Scan for Updates Offline.
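On Linux instances, you can confirm the effect of a repository change directly; on the Windows side, the patch catalog that Patch Manager works from can be browsed with the AWS CLI. A minimal sketch (the product filter value, WindowsServer2016, is an assumption; substitute the products in your fleet):

# Confirm which yum repositories remain enabled after disabling one
yum repolist enabled

# Browse the Windows patches Patch Manager knows about for a given product
aws ssm describe-available-patches \
    --filters "Key=PRODUCT,Values=WindowsServer2016" \
    --max-results 5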
Patching and infrastructure design

AMS employs different patching methods depending on your infrastructure design: mutable or immutable (for detailed definitions, see AMS key terms).

With mutable infrastructures, patching is done using a traditional in-place methodology: AMS operations engineers install updates directly on the individual Amazon EC2 instances. This patching method is used for stacks that are not Auto Scaling groups and that contain a single Amazon EC2 instance or a few instances. In this scenario, replacing the AMI that the instance or stack was based on would destroy all of the changes made to that system since it was first deployed, so that is not done. Updates are applied to the running system, and you may experience system downtime (depending on the stack configuration) due to application or system restarts. This can be mitigated with a blue/green update strategy. For more information, see AWS CodeDeploy Introduces Blue/Green Deployments.

With immutable infrastructures, the patching method is AMI replacement. Immutable instances are updated uniformly using an updated AMI that replaces the AMI specified in the Auto Scaling group configuration. AMS releases updated (that is, patched) AMIs every month, usually the week of Patch Tuesday. The following sections describe how this works.
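You can inspect which AMI an Auto Scaling group currently launches instances from; this is the value that AMI-replacement patching updates. A minimal read-only sketch, assuming a group named my-asg that uses a launch template (the group name and template ID are placeholders):

# Find the launch template attached to the Auto Scaling group
aws autoscaling describe-auto-scaling-groups \
    --auto-scaling-group-names my-asg \
    --query "AutoScalingGroups[0].LaunchTemplate"

# Resolve the AMI ID configured in the default version of that template
aws ec2 describe-launch-template-versions \
    --launch-template-id lt-0123456789abcdef0 \
    --versions '$Default' \
    --query "LaunchTemplateVersions[0].LaunchTemplateData.ImageId"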
How AMS standard patching works

AMS uses the Systems Manager Run Command service for regularly scheduled monthly and as-needed critical patching, with two principal patching methods, in-place and AMI replacement, depending on your infrastructure deployment strategy (mutable versus immutable). This section describes the AMS patching service, types, methods, and processes.

AMS defines two patch types, which are scheduled differently:
• Critical patching: Updates are applied as quickly as possible after acceptance of the notice.
• Standard patching: Regular OS vendor updates, applied monthly.

Patches are applied through either in-place patching or AMI replacement (upon request).

Update scanning

AMS uses the Amazon EC2 Run Command service to contact your Amazon EC2 stacks and deploy the required scanning and patching scripts. AMS uses the native package management component already installed on the supported operating system to perform all the required scanning and patching behavior on the Amazon EC2 stack. For Red Hat and Amazon Linux, the service uses yum. For Windows, the service uses the Windows Update Agent.

Scans are performed daily using SSM Maintenance Windows and the AMS default AWS-RunPatchBaseline document. Every reachable Amazon EC2 stack is scanned, using the update repositories for Linux and Windows. The AMS patching process detects all reachable Amazon EC2 stacks and then performs the scans in a batch process so that the stack always remains in a healthy state, even if a failure occurs while running the scan. The scan results are then saved for each Amazon EC2 stack. To view the scan results for a stack or instance, submit a service request with the stack ID or instance ID.
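AMS schedules these scans for you, but the mechanism can be illustrated with the same SSM document. A minimal sketch of a scan-only run against a single instance (the instance ID is a placeholder):

# Trigger a scan-only pass of the patch baseline
aws ssm send-command \
    --document-name "AWS-RunPatchBaseline" \
    --parameters "Operation=Scan" \
    --targets "Key=InstanceIds,Values=i-1234567890abcdef0"

# Inspect the recorded patch state (missing/installed counts, last scan time)
aws ssm describe-instance-patch-states \
    --instance-ids i-1234567890abcdef0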
The default AMS patching process is to install all available patches regardless of patch classification or severity (for example, critical versus standard). The exception to this is patches that you have explicitly excluded for the stack (patches defined as mandatory by AMS should not be excluded).

You're sent a patching service notification 14 days before the proposed maintenance window. This gives you time to test the proposed patches and accept or reject them. If you don't reply to the patching service notification, your instances aren't patched. When the time comes to install the patches, AMS creates a Request for Change (RFC) for each stack, and that RFC appears in your account's RFC list.

AMS configured maintenance window and notice

With AMS configured patching, each account has a monthly maintenance window, which you define when you onboard your account. The AWS Managed Services Maintenance Window (or Maintenance Window) performs maintenance activities for AWS Managed Services (AMS) and recurs the second Thursday of every month from 3 PM to 4 PM Pacific Time. AMS may change the maintenance window with 48 hours notice.

The patching window is different. The patching outbound service request (also known as a service notification) includes a suggested patch window.

Note
For information about replying to the patching service notification, see Actions you can take in AMS standard patching.

The patching service notification is sent by email to the contact email address on file for your account. The notification includes a link to the AWS Support console where you can respond to it. You can also respond to the notification using the AMS Service Request page.

The service notification includes:
• A list of update IDs (CSUs, IUs, and OUs) that apply to the stack, and those updates that you have requested be excluded from patching (if any).
• IDs of instances that will be affected.
• A proposed patching window when the updates will be applied. You can request a different patching window.
• A request that you accept the proposed patching, or ask for additional information.

AMS gives you time to test the impact of the updates and approve or reject the patching, or ask that specific updates be excluded. If you need more time to test, and want the updates to be applied after your testing, respond to the service notification and describe what you want, or submit a service request for a new patch RFC based on the details of the previous RFC. If you don't reply to the service notification at all, no patching action is taken and the RFC is cancelled.

If you approve the service notification, AMS runs the patch RFC and applies the updates within the agreed-to patch window, as per the service commitment. When patching is finished, AMS sends you a correspondence in the Service Request, with a summary of the outcome of the patching activity (that is, success or failure).
In-place patching

In-place patching refers to a method where AMS logs in to each stack instance and applies patches. In-place patching occurs on mutable infrastructures using Amazon EC2 instances running a supported operating system. Patching applies all non-excluded updates available up to that point. When critical patches are released, there is an additional critical patching process.

Standard patching: in-place

Standard patching occurs on the agreed-to patch schedule suggested in the patch service notification, and includes regular patch updates that are not deemed critical. Prior to the proposed patching window, and with your affirmative response to the notification, a patch RFC is created and appears in your RFC dashboard.

Critical patching: in-place

When an OS vendor releases a critical security update, AMS notifies you of the patch RFC by sending you a service notification (to the contact email for your account) for each stack, according to the AMS service commitment. The service notification includes the following for each update:
• Update release date
• Update criticality
• Update details (KB reference, and so on)
• IDs of stacks affected

You can test the updates listed in the notification, and approve or reject the patches by replying to the service notification. If you approve the notification, you need to provide a specific patch window per stack for installing the updates.

Note
Patch windows that are within 24 hours of your reply to the service notification may be rescheduled based on available capacity.

If you don't reply within 10 days, or if you reject the proposed patching, the patching is canceled. If you want to apply the updates after the allowed period (provided in the notification), submit a service request for a new patch schedule based on the details of the previous notification. If you approve the service notification, AMS applies the updates within your specified patch window, according to the service commitment.

In the case of multiple updates, you can exclude specific updates from the patching by specifying the updates to be excluded in your response to the service notification. AMS sends you a service notification for each stack with the outcome of each update (that is, success or failure).
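When deciding whether to approve a notification, it can help to see exactly which patches an instance is missing. A minimal sketch using Patch Manager's per-instance data (the instance ID is a placeholder):

# List patches reported as missing on one instance
aws ssm describe-instance-patches \
    --instance-id i-1234567890abcdef0 \
    --filters "Key=State,Values=Missing" \
    --query "Patches[].{KB:KBId,Title:Title,Severity:Severity}"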
AMI updates patching (using patched AMIs for Auto Scaling groups)

AMI-replacement patching is done on immutable infrastructures by updating the AMI ID that is configured to deploy new Amazon EC2 instances in an Auto Scaling group.

Amazon Machine Images (AMIs) are released on a regular basis for the supported operating systems, and operating system vendors release new patches on a periodic basis. AMS takes the Amazon-provided AMI, updates it with the latest patches, and then adds the appropriate components to enable it to operate in the AMS environment. Then, it makes the new AMS AMI available to all AMS customers by sharing the AMI with their accounts.

Your Auto Scaling group stacks can be refreshed on a monthly basis with these newly released AMS AMIs. The following graphic illustrates how AMIs are used in your AMS environments.

Auto Scaling groups create their instances based on the configured AMI for the Auto Scaling group. When AMS shares updated AMIs, you have the following options, depending on how you are managing AMI updates:
• If you are using an application deployment tool (for example, UserData, CodeDeploy, and so forth) that customizes your instances automatically after they are created, you can reply to the patching service notification, or submit a service request, for the latest AMS AMI to replace your current Auto Scaling group's configured AMI. After the AMI ID in your Auto Scaling group's configuration is replaced, AMS kicks off rolling updates of your instances, and your Auto Scaling group instance configurations (for example, installing applications, boot scripts, and so on) are applied automatically to the new instances created with the new AMS AMI.

• If you are using a custom/golden AMI in your Auto Scaling group's configuration, you can do one of the following:
  • Create an instance with the new AMS AMI, customize the instance, and create a new golden AMI. Share the new golden AMI with AMS using the Amazon EC2 console (see the sketch after this list), and submit a service request to AMS to update your Auto Scaling group's configuration to use your new custom AMI.
  • Share your existing golden AMI with AMS by using the Amazon EC2 console, and submit a service request for AMS to update your golden AMI. To do this, AMS creates an instance from your golden AMI, applies the patches to that instance, creates a new golden AMI for you, and then updates your Auto Scaling group's configuration to use the new AMI. The drawback here is that AMS cannot test that your new custom AMI works the way you want it to. Instead, you should test an instance created with the new AMI and verify that everything works correctly before creating a new golden AMI, sharing it, and requesting that AMS update your Auto Scaling groups. AMS does not recommend this option.
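Sharing an AMI, as described above for the Amazon EC2 console, can also be done from the CLI by granting launch permission to the target account. A minimal sketch (the AMI ID and account ID are placeholders; note that AMIs backed by encrypted snapshots also require the snapshot and KMS key to be shared):

# Grant launch permission on your golden AMI to another account
aws ec2 modify-image-attribute \
    --image-id ami-0123456789abcdef0 \
    --launch-permission "Add=[{UserId=123456789012}]"

# Verify the share took effect
aws ec2 describe-image-attribute \
    --image-id ami-0123456789abcdef0 \
    --attribute launchPermission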
Standard patching: AMI updates

Every month AMS releases new Amazon Machine Images (AMIs) with service improvements and new patches that apply to the AMIs.

Note
New AMS AMIs are generated after Patch Tuesday from updated AWS AMIs. Then, AMS tests them before making them available. After the new AMIs pass testing, AMS shares the updated AMIs to managed accounts.

Critical patching: AMI updates

When needed, AMS provides AMIs updated with critical security patches released since the last monthly AMI release. The process for critical security updates to immutable infrastructures is identical to the monthly AMI process, except that a new AMS AMI is created outside the normal schedule (Patch Tuesday), based on the release of new critical updates. AMS makes available a new AMI with the critical security patches according to the service level agreements (SLAs) defined for your account.

AMS updates Auto Scaling groups by request only. Use a service request to submit AMI replacement requests.

AMS standard patching failures

In case of failed updates, AMS performs an analysis to understand the cause of failure and communicates the outcome of the analysis to you. If the failure is attributable to AMS, we retry the updates if it's within the maintenance window. Otherwise, AMS creates service notifications for the failed instance update and waits for your instructions. For failures attributable to your system, you can submit a service request with a new patch RFC to update the instances.

Actions you can take in AMS standard patching

In addition to testing new AMIs, there are several actions you can take to manage the patching of your infrastructure:

• If it took longer to test the updates than the patch window allowed, you can request that AMS apply the updates that were canceled when you're ready, by submitting a service request (use the details in the original service notification as the basis).
• You can request that an important update (IU) or other update (OU) be applied before the next automated update window by submitting a service request that provides a list of the updates, the applicable instances, and other details as appropriate. Because this CT is not automated, it takes longer to schedule and run. Check the service level objectives (SLOs) for the appropriate time. For more information, see AMS service level objectives (SLOs).

Additionally, you can use existing, patched AMS AMIs to create custom AMIs. For information, see AMI | Create.

Note
You can't request a new AMS AMI based on an important update or other update before the next maintenance window, because the AMS AMI release process follows a uniform cadence for the benefit of all AMS customers.
Changing what gets patched/opting out

With AMS configured patching, in your response to the patching service notification, or in a service request, you can change what resources get patched. You can do the following:

• Define a list of patches that should be excluded from remediation, per stack and per operating system.
• Define a list of resources that should be excluded from certain patches or from all patching.
• Define a list of resources that should always be excluded from all patching.
• Define a list of resources that should be patched on a certain day and certain time (useful if you haven't defined a maintenance window).

To exclude one or more patches, submit a service request, or respond to the patching service notification using the template provided next; do not submit an RFC. Include in the request the patch name or names that you want excluded and why, as follows:

• Name: The name of the patch. For Windows patches, this is the KB name, such as KB3145384. For Linux patches, this is the package name, such as openssh-6.6.1p1-25.61.amzn1.x86_64.
• Reason: A comment indicating why the patch is being excluded.
• Expiration Time: The date/time when the exclusion expires.

If an excluded patch is already installed, it is removed. The request is reviewed by an operator, who discusses it with you if excluding those patches poses a significant security risk. The expiry date for excluded patches is also negotiated. After the agreed-upon expiry date, the exclusion expires, and the patch is installed on any subsequent patching. Patches on the exclusion list are still returned in scan results, if applicable.

Note
Unlike Windows patches, Linux patches are version-specific. This distinction is important because new versions of an excluded patch are not automatically excluded. It is your responsibility to notify AMS to exclude new versions of a Linux patch if that's what you want to do.
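As a worked illustration of the three fields above, an exclusion request in a service request might read as follows (all values are hypothetical):

Name: openssh-6.6.1p1-25.61.amzn1.x86_64
Reason: Application is certified only against this OpenSSH build; upgrade planned for next quarter.
Expiration Time: 2019-07-01 00:00 UTC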
Patch service notification reply templates

You must reply to patching service notifications, using the specified format, in order for patching to be performed on your instances. You should do this if you haven't already set a maintenance window with AMS. When you reply to a service notification, use the formats given next.

If no maintenance window is set, let us know when to patch what, as shown following:

UTC StartTime      StackId             InstanceId (Optional)
2019-04-01 15:00   stack-123456789012  i-1234566789
2019-04-01 15:00   stack-123456789013  i-1234566784
2019-04-01 15:00   stack-123456789014  i-1234566783
2019-04-01 15:00   stack-123456789015  i-1234566782

If you have a set maintenance window and want certain resources to be excluded from certain patches, use the following format:

StackId             InstanceId (Optional)  Exclude Patches
stack-123456789012  i-1234566789           PATCH
stack-123456789013  i-1234566784           PATCH
stack-123456789014  i-1234566783           PATCH
stack-123456789015  i-1234566782           PATCH

If you have a set maintenance window and want certain resources to always be excluded from all patching, use the following format:

StackId             InstanceId (Optional)  Exclude Patches
stack-123456789012  i-1234566789           ALL
stack-123456789015  i-1234566782           ALL

Preparing for patching

To prepare your environment for automated patching, we recommend the following:

• Be sure you have a complete inventory of all instances to be patched.
• Ensure that your resources are backed up regularly as part of your Continuity of Business strategy. Additional backups are created as part of the patch sequence, and these are automatically deleted according to your configured Patch Orchestrator retention policy (the default is 60 days).
• Ensure that all relevant licenses are up to date.
• Modify your stack maintenance windows to stagger patching so that testing stacks are patched before production stacks. That way, any errors with patching surface in the testing stacks and can be identified before production stacks are patched.

Viewing patch settings

To find out what your current patching configuration is, you can do the following:

• Submit a service request to AMS with the query.
• Wait for a patch service notification. The patching notice advises you of all patches to be applied and instances to be patched, and also suggests a patch window.

You can submit a service request to modify the following:

• Scan Interval: The amount of time, in minutes, between compliance scans performed on instances of this stack. The default is 240 (4 hours).
• NotificationWindow: How far in advance (in minutes) of a scheduled change (patch) the notification should be sent to you. The default is 10080 (7 days).
AMS standard patching FAQs

This section provides answers to some frequently asked questions.

• Q: How do I opt out of patching globally?
A: To globally opt out of patching, file a service request. Note that you can't opt out of AMS mandatory patches. All stacks continue to be scanned so that we can report on vulnerabilities.

• Q: How do I exclude specific stacks from patching?
A: To permanently exclude specific stacks from patching, submit a service request. To exclude certain stacks from a particular patch cycle, respond to the upcoming patching notice with the list of stacks to exclude. For information, see Changing what gets patched/opting out. Note that you can't opt out of mandatory patches.

• Q: What happens if I don't approve a patching service notification?
A: You have 14 days to approve a standard patching service request and 10 days to approve a critical patching notice. If you don't approve the service request within the time period, the service commitment is nullified and no patching occurs. In the case of mandatory patching, patches are applied regardless of your response to the service request.

• Q: How do I exclude specific patches and packages from being installed?
A: To permanently exclude specific patches or packages, submit a service request. To exclude certain patches or packages from a particular patch cycle, respond to the upcoming patching notice with the list of patches or packages to exclude. For details, see Changing what gets patched/opting out. Note that you can't opt out of mandatory patches.

• Q: What happens if a system fails as a result of patching?
A: AMS monitors each system. AMS sends a service notification to you of the outcome of each update (that is, success or failure) per stack and instance. If a failure is detected, AMS investigates, works to restore the instance, and then an AMS operations engineer attempts to patch it manually. For information, see AMS standard patching failures.

• Q: What updates are managed by AMS?
A: AMS manages operating system level updates that AMS is notified of by the vendor. For more information, see Supported patches.

• Q: What updates are not managed by AMS?
A: Application-level updates are not managed by AMS.

• Q: How are Auto Scaling groups updated?
A: Auto Scaling groups are updated by replacing the AMI in the Auto Scaling group configuration and performing a rolling update. A rolling update observes the HealthyHostThreshold setting of your patching configuration, which determines how many Amazon EC2 instances in a stack must remain active during patching. For more information, see AMI updates patching (using patched AMIs for Auto Scaling groups).

• Q: How do I get updates installed outside the normal cycle?
A: For OS-level updates that you want installed outside of the normal patching schedule, submit a service request by using the patching notification that you received. This might happen if your testing of a proposed patch took longer than 21 days (for a standard patch) or 14 days (for a critical patch). Out-of-band patching can be done in place for standalone Amazon EC2 instances.

• Q: How are newly deployed stacks or instances patched?
A: When creating a new Amazon EC2 stack instance or Auto Scaling group, you should always specify the latest AMS AMI, which already has the latest patches on it. For mutable infrastructures, in-place patching should be performed as soon as the stack is deployed.

Patching service commitments

Based on your type of infrastructure deployment and the criticality of the update, we provide service commitments for critical security updates for mutable and immutable infrastructures, and for important updates for mutable and immutable infrastructures.

Standard patching

These are the AMS service commitments for standard patching.

Standard patching, mutable infrastructure (in-place patching)

Event/Action: Important updates are released in a month.
Service commitment measurement: Clock starts.

Event/Action: Fourteen days from when the standard patch notification is created, AMS notifies you of upcoming planned patching through a service notification and by email for each stack. The service notification includes:
• A list of update IDs (CSUs and IUs) that are applicable (needed and not applied) for the stack, and those updates excluded from patching
• IDs of stacks affected
• The maintenance window when the updates will be applied
Service commitment measurement: Clock stops after the service notification is sent.
Event/Action: You test the impact of the updates and approve or reject the RFC. If you do not reply within 14 days, or you reject the patching in your response to the service notification, no action is taken. If you take longer than 14 days to test, and want the updates to be applied after the 14-day period, submit a service request for a new patch RFC based on the details of the previous RFC.
Service commitment measurement: If you don't approve or reply within 14 days, the pending change is canceled and the service commitment for the updates is not applicable.

Event/Action: If you approve the service notification within 14 days, AMS applies the updates.
Service commitment measurement: The clock starts when you approve the service notification within 14 days of receipt. The clock stops after the update installation has been attempted.

Event/Action: You can choose to exclude specific updates from an RFC by specifying the updates to be excluded in your response to the service notification.
Service commitment measurement: Not applicable.

Event/Action: AMS sends a service notification to you of the outcome of each update that was attempted. The service notification includes the following details:
• Amazon EC2 instance ID
• Update 1 Success/Failed: ARN a1, ARN a2...
• Update 2 Success/Failed: ARN b1, ARN b2...
• Update N Success/Failed: ARN c1, ARN c2...
Service commitment measurement: Not applicable.

Event/Action: In case of failed updates, AMS performs an analysis to understand the cause of failure and communicates the outcome of the analysis to you. If the failure is attributable to AMS, AMS retries the updates if within the maintenance window; otherwise, AMS creates service notifications for the failed instance-update combination and waits for your instructions on a maintenance window. For failures attributable to you, submit a service request for a new patch RFC to update the instances.
Service commitment measurement: Not applicable.

Critical patching

These are the AMS service commitments for critical security updates.

Critical security updates, mutable infrastructure

Event/Action: CSU is released.
Service commitment measurement: Clock starts.

Event/Action: AMS notifies you of the patch RFC through a service notification (which also sends an email) for each stack. The service notification includes:
• Update release date
• Update criticality
• Update details: KB reference, and so on
• IDs of stacks affected
Service commitment measurement: The clock stops after the service notification is sent.
Event/Action: You test the updates listed in the RFC, and approve or reject the RFC within 10 days by replying to the service notification. You provide a specific maintenance window (per stack) for installing the updates. Maintenance windows specified that are within 24 hours of your reply to the service notification may be rescheduled based on available capacity. If you don't reply within 10 days, or if you reject the patch RFC, the pending action is canceled. If you want to apply the updates after the 10-day period, submit a service request for a new patch RFC based on the details of the previous RFC.
Service commitment measurement: If you don't approve or reply within 10 days, the pending change is canceled and the service commitment for the update is not applicable.

Event/Action: If you approve the service notification, AMS applies the updates.
Service commitment measurement: If the desired maintenance window is not within the service commitment time frame, the service commitment for the update is missed only if the RFC is not run within the desired maintenance window.

Event/Action: For multiple updates, you can choose to exclude specific updates from the change by specifying the updates to be excluded in your response to the service notification.
Service commitment measurement: Not applicable.
Event/Action: AMS sends a service notification to you of the outcome of each update that was applied. The service notification includes the following details:
• Amazon EC2 instance ID
• Update Success: ARN a1, ARN a2...
• Update Failed: ARN c1, ARN c2...
Service commitment measurement: Not applicable.

Event/Action: In case of failed updates, AMS performs an analysis to understand the cause of failure and communicates the outcome of the analysis to you. If the failure is attributable to AMS, AMS retries the updates if within the maintenance window; otherwise, AMS creates service notifications for the failed instance-update combination and waits for your instructions on a new maintenance window. For failures attributable to you, submit a service request for a new patch RFC to update the instances.
Service commitment measurement: Not applicable.

Critical security updates, immutable infrastructure

Event/Action: CSU is released.
Service commitment measurement: Clock starts.

Event/Action: AMS notifies you of the following via a service notification:
• Update release date
• Update criticality
• Update details (KB reference, and so on)
• AMS Amazon Machine Images (AMIs) impacted
• Anticipated release date and time for new updated AMIs
Service commitment measurement: Clock continues to run.

Event/Action: AMS releases updated AMIs in the managed account.
Service commitment measurement: Clock stops.

Event/Action: If you approve the service notification, AMS applies the updates.
Service commitment measurement: Not applicable.

Event/Action: AMS notifies you of the AMIs shared in your account, through a service notification and by email. If testing the new AMIs takes longer than the allotted time (one week), you can submit a service request to AMS to update your Auto Scaling groups with the new AMS AMI (as is). If you want to modify the new AMS AMI with your configurations, use an RFC with the Management | Other | Other | Update CT (ct-0xdawir96cy7k) to request that we update your Auto Scaling groups.
Service commitment measurement: Not applicable.

Reports and options

AWS Managed Services (AMS) collates data from various native AWS services to provide value-added reports on major AMS offerings. AMS offers two types of detailed reporting:

• On-request reports: You can request certain reports ad hoc through your Cloud Service Delivery Manager (CSDM). These reports don't have a limit, because you might need to request them multiple times during onboarding or critical events. However, be aware that these reports aren't designed to be provided on a schedule like weekly reports. To better understand your needs or for more information on using self-service reporting, reach out to your CSDM.
• Self-service reports: AMS self-service reports allow you to directly query and analyze data as often as you need. Use self-service reports to access reports from the AMS console and report datasets through S3 buckets (one bucket per account). This allows you to integrate the data into your favorite Business Intelligence (BI) tool so that you can customize reports for your requirements.
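Once the per-account reports bucket exists, the datasets can be pulled locally or pointed at a BI tool. A minimal sketch (the bucket name is a placeholder for the reports bucket provisioned in your account):

# List the report datasets available in the account's reports bucket
aws s3 ls s3://your-ams-reports-bucket/ --recursive

# Copy them locally for analysis in your BI tool of choice
aws s3 sync s3://your-ams-reports-bucket/ ./ams-reports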
Topics
• On-request reports
• Self-service reports

On-request reports

Topics
• AMS Patch reports
• AMS Backup reports
• Incidents Prevented and Monitoring Top Talkers reports
• Billing Charges Details report
• Trusted Remediator reports

AMS collates data from various native AWS services to provide value-added reports on major AMS offerings. For a copy of these reports, make a request to your Cloud Service Delivery Manager (CSDM).

AMS Patch reports

Available reports
• Patch Instance Details Summary report
• Patch Details report
• Instances That Missed Patches report
• Patching SSM Coverage report
Patch Instance Details Summary report

The Patch Instance Details Summary report provides instance details gathered for instances that are onboarded to reporting. This is an informational report that helps identify all the instances onboarded, account status, instance details, maintenance window coverage, maintenance window execution time, stack details, and platform type.

This report provides the following:
1. Data on the production and non-production instances of an account. Note: The production or non-production stage is derived from the account name, not from the instance tags.
2. Data on the distribution of instances by platform type. Note: The 'N/A' platform type appears when AWS Systems Manager can't retrieve the platform information.
3. Data on the distribution of instance states, and the number of instances running, stopped, or terminating.

Field Name | Definition
Report Datetime | The date and time the report was generated
Account Id | AWS account ID to which the instance ID belongs
Account Name | AWS account name
Production Account | Identifier of AMS prod and non-prod accounts, depending on whether the account name includes the value 'PROD' or 'NONPROD'. Examples: PROD, NONPROD, Not Available
Account Status | AMS account status. For example: ACTIVE, INACTIVE
Service Commitment | AMS account service commitment. For example: PREMIUM, PLUS
Landing Zone | Flag for account landing zone type. For example: MALZ, NON-MALZ
Access Restrictions | Regions to which access is restricted. For example: US SOIL
Instance Id | ID of the EC2 instance
Instance Name | Name of the EC2 instance
Instance Platform Type | Operating system (OS) type. For example: Windows, Linux, and so forth
Instance Platform Name | Operating system (OS) name. For example: MicrosoftWindowsServer2012R2Standard, RedHatEnterpriseLinuxServer
Stack Name | Name of the stack that contains the instance
Stack Type | AMS stack (AMS infrastructure within the customer account) or Customer stack (AMS managed infrastructure that supports customer applications). Examples: AMS, CUSTOMER
Auto Scaling Group Name | Name of the Auto Scaling group (ASG) that contains the instance
Instance Patch Group | Patch group name used to group instances together and apply the same maintenance window. If the patch group is unassigned, the value is "Unassigned"
Instance Patch Group Type | Patch group type. DEFAULT: default patch group with the default maintenance window, determined by the AMSDefaultPatchGroup:True tag on the instance. CUSTOMER: customer-created patch group. NOT_ASSIGNED: no patch group assigned
Instance State | State within the EC2 instance lifecycle. Examples: TERMINATED, RUNNING, STOPPING, STOPPED, SHUTTING-DOWN, PENDING. For more information, see Instance lifecycle
Maintenance Window Coverage | Whether there is a future maintenance window on this instance. Examples: COVERED, NOT_COVERED
Maintenance Window Execution Datetime | Next time the maintenance window is expected to execute. If NULL, single window execution (that is, not recurring)
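The maintenance window fields in this report correspond to SSM maintenance windows, which you can also list directly. A minimal sketch:

# List SSM maintenance windows with their next scheduled execution
aws ssm describe-maintenance-windows \
    --query "WindowIdentities[].{Id:WindowId,Name:Name,Enabled:Enabled,Next:NextExecutionTime}"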
Patch Details report

The AWS Managed Services (AMS) Patch Details report provides patch details and maintenance window coverage of various instances, including:
1. Data on patch groups and their types.
2. Data on maintenance windows: duration, cutoff, future dates of maintenance window executions (schedule), and the instances impacted in each window.
3. Data on all the operating systems under the account and the number of instances on which each operating system is installed.

Field Name | Definition
Report Datetime | The date and time the report was generated
Account ID | AWS account ID to which the instance ID belongs
Account Name | AWS account name
Instance Id | ID of the EC2 instance
Production Account | Identifier of AMS prod and non-prod accounts, depending on whether the account name includes the value 'PROD' or 'NONPROD'. If data is not available, the value is "Not Available"
Account Status | AMS account status. For example: ACTIVE, INACTIVE
Instance Platform Type | Operating system (OS) type. For example: Windows, Linux
Instance Platform Name | Operating system (OS) name. For example: MicrosoftWindowsServer2012R2Standard, RedHatEnterpriseLinuxServer
Stack Type | AMS stack (AMS infrastructure within a customer account) or Customer stack (AMS managed infrastructure that supports customer applications). For example: AMS, CUSTOMER
Instance Patch Group | Patch group name used to group instances together and apply the same maintenance window. If the patch group is unassigned, the value is "Unassigned"
Instance Patch Group Type | Patch group type. DEFAULT: default patch group with the default maintenance window, determined by the AMSDefaultPatchGroup:True tag on the instance. CUSTOMER: customer-created patch group. UNASSIGNED: no patch group assigned
Instance State | State within the EC2 instance lifecycle. For example: TERMINATED, RUNNING, STOPPING, STOPPED, SHUTTING-DOWN, PENDING. For more information, see Instance lifecycle
Maintenance Window Id | Maintenance window identifier
Maintenance Window State | Possible values are ENABLED or DISABLED
Maintenance Window Type | Maintenance window type
Maintenance Window Next Execution Datetime | Next time the maintenance window is expected to execute. If NULL, single window execution (that is, not recurring)
Last Execution Maintenance Window | The latest time the maintenance window was executed
Maintenance Window Duration (hrs) | The duration of the maintenance window in hours
Maintenance Window Coverage | The maintenance window coverage
Patch Baseline Id | Patch baseline currently attached to the instance
Patch Status | Overall patch compliance status. For example: COMPLIANT, NON_COMPLIANT. If there is at least one missing patch, the instance is considered noncompliant; otherwise, compliant
Compliant - Total | Count of compliant patches (all severities)
Noncompliant - Total | Count of noncompliant patches (all severities)
Compliant - Critical | Count of compliant patches with "critical" severity
Compliant - High | Count of compliant patches with "high" severity
Compliant - Medium | Count of compliant patches with "medium" severity
Compliant - Low | Count of compliant patches with "low" severity
Compliant - Informational | Count of compliant patches with "informational" severity
Compliant - Unspecified | Count of compliant patches with "unspecified" severity
Noncompliant - Critical | Count of noncompliant patches with "critical" severity
Noncompliant - High | Count of noncompliant patches with "high" severity
Noncompliant - Medium | Count of noncompliant patches with "medium" severity
Noncompliant - Low | Count of noncompliant patches with "low" severity
Noncompliant - Informational | Count of noncompliant patches with "informational" severity
Noncompliant - Unspecified | Count of noncompliant patches with "unspecified" severity
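The compliance counts in this report mirror what Patch Manager records per instance, which you can summarize yourself. A minimal sketch:

# Per-instance patch compliance summary (compliant vs. noncompliant counts)
aws ssm list-resource-compliance-summaries \
    --filters "Key=ComplianceType,Values=Patch" \
    --query "ResourceComplianceSummaryItems[].{Id:ResourceId,Status:Status,NonCompliant:NonCompliantSummary.NonCompliantCount}"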
Instances That Missed Patches report

The AWS Managed Services (AMS) Instances That Missed Patches report provides details on instances that missed patches during the last maintenance window execution, including:
1. Data on missing patches at the patch ID level.
2. Data on all the instances that have at least one patch missing, along with attributes such as patch severity, unpatched days, range, and release date of the patch.

Field Name | Definition
Report Datetime | The date and time the report was generated
Account ID | AWS account ID to which the instance ID belongs
Account Name | AWS account name
Production Account | Identifier of AMS prod and non-prod accounts, depending on whether the account name includes the value 'PROD' or 'NONPROD'
Account Status | AMS account status. For example: ACTIVE or INACTIVE
Account Service Tier | AMS account service tier. For example: PREMIUM or PLUS
Instance ID | ID of the EC2 instance
Instance Platform Type | Operating system (OS) type. For example: Windows
Instance State | State of the EC2 instance lifecycle. For example: TERMINATED, RUNNING, STOPPING, STOPPED, SHUTTING-DOWN, PENDING. For more information, see Instance lifecycle
Patch ID | ID of the released patch. For example: KB3172729
Patch Severity | Severity of the patch per the publisher. For example: CRITICAL, IMPORTANT, MODERATE, LOW, UNSPECIFIED
Patch Classification | Classification of the patch per the publisher. For example: CRITICALUPDATES, SECURITYUPDATES, UPDATEROLLUPS, UPDATES, FEATUREPACKS
Patch Release Datetime (UTC) | Release date of the patch per the publisher
Patch Install State | Install state of the patch on the instance per SSM. For example: INSTALLED, MISSING, NOT APPLICABLE
Days Unpatched | Number of days the instance has been unpatched since the last SSM scan
Days Unpatched Range | Bucketing of days unpatched. For example: <30 DAYS, 30-60 DAYS, 60-90 DAYS, 90+ DAYS
Patching SSM Coverage report

The AMS Patching SSM Coverage report informs you whether or not the EC2 instances in the account have the SSM Agent installed.

Field Name | Definition
Customer Name | Customer name, for situations where there are multiple sub-customers
Resource Region | AWS Region where the resource is located
Account Name | The name of the account
AWS Account ID | The ID of the AWS account
Resource Id | ID of the EC2 instance
Resource Name | Name of the EC2 instance
Compliant Flag | Indicates whether the resource has the SSM Agent installed ("Compliant") or not ("NON_COMPLIANT")
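A quick way to approximate this check yourself is to list the instances that are registered with SSM; an EC2 instance absent from this list has no reachable SSM Agent. A minimal sketch:

# Instances currently registered with SSM, with agent and ping status
aws ssm describe-instance-information \
    --query "InstanceInformationList[].{Id:InstanceId,Ping:PingStatus,Agent:AgentVersion}"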
AMS Backup reports

Available reports
• Backup Job Success / Failure report
• Backup Summary report
• Backup Summary/Coverage report

Backup Job Success / Failure report

The Backup Job Success/Failure report provides information about backups run in the last few weeks. To customize the report, specify the number of weeks for which you want to retrieve data. The default number of weeks is 12. The following table lists the data included in the report:

Field Name | Definition
AWS Account ID | AWS account ID to which the resource belongs
Account Name | AWS account name
Backup Job ID | The ID of the backup job
Resource ID | The ID of the backed-up resource
Resource Type | The type of resource that is being backed up
Resource Region | The AWS Region of the backed-up resource
Backup State | The state of the backup. For more information, see Backup job statuses
Recovery Point ID | The unique identifier of the recovery point
Status Message | Description of errors or warnings that occurred during the backup job
Backup Size | Size of the backup in GB
Recovery Point ARN | The ARN of the created backup
Recovery Point Age in Days | Number of days that have passed since the recovery point was created
Less Than 30 Days Old | Indicator of backups that are less than 30 days old
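The same job-level data comes from AWS Backup, so a spot check against this report is straightforward. A minimal sketch listing recent failed jobs (swap the state to COMPLETED for successes):

# Recent AWS Backup jobs in the FAILED state
aws backup list-backup-jobs \
    --by-state FAILED \
    --query "BackupJobs[].{Id:BackupJobId,Resource:ResourceArn,State:State,Message:StatusMessage}"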
Backup Summary report

Field Name | Definition
Customer Name | Customer name, for situations where there are multiple sub-customers
Backup Month | Month of the backup
Backup Year | Year of the backup
Resource Type | The type of resource that is being backed up
# of Resources | The number of resources that were backed up
# of Recovery Points | Number of distinct snapshots
Backups Less Than 30 Days Old | The count of backups that are less than 30 days old
Max Recovery Point Age | The oldest recovery point age in days
Min Recovery Point Age | The most recent recovery point age in days

Backup Summary/Coverage report

The Backup Summary/Coverage report lists how many resources are not currently protected by any AWS Backup plan. Discuss with your CSDM an appropriate plan to increase coverage, where possible, and to reduce the risk of data loss.

Field Name | Definition
Customer Name | Customer name, for situations where there are multiple sub-customers
Region | AWS Region where the resource is located
Account Name | The name of the account
AWS Account ID | The ID of the AWS account
Resource Type | Type of the resource. Resources supported by AWS Backup are Aurora, DocumentDB, DynamoDB, EBS, EC2, EFS, FSx, RDS, and S3
Resource ARN | ARN of the resource
Resource ID | ID of the resource
Coverage | Indicates whether the resource is covered or not ("COVERED" or "NOT_COVERED")
# of Resources | Number of supported resources in the account
perc_coverage | Percentage of supported resources with a backup executed in the last 30 days

Incidents Prevented and Monitoring Top Talkers reports

Available reports
• Incidents Prevented report
• Monitoring Top Talkers report

Incidents Prevented report

The Incidents Prevented report lists the Amazon CloudWatch alarms that were automatically remediated, preventing a possible incident. To learn more, see Auto remediation. The following table lists the information included in this report:

Field Name | Definition
execution_start_time_utc | Date on which the automation was executed
customer_name | Account customer name
account_name | The name of the account
AwsAccountId | The ID of the AWS account
document_name | The name of the SSM document or automation executed
duration_in_minutes | The length of the automation in minutes
Region | AWS Region where the resource is located
automation_execution_id | The ID of the execution
automation_execution_status | The status of the execution
an actual issue. The following table lists the information included in this report:

• Customer name: Name of the customer
• AccountId: The ID of the AWS account
• Alert category: The type of alert triggered
• Description: Description of the alert
• Resource ID: ID of the resource that triggered the alert
• Resource Name: Name of the resource that triggered the alert
• Region: AWS Region where the resource is located
• Incident status: Latest status of the incident generated by the alarm
• First occurrence: First time that the alert was triggered
• Recent occurrence: The most recent time that the alert was triggered
• Alert Count: Number of alerts generated between the first and recent occurrence

Billing Charges Details report

The AWS Managed Services (AMS) Billing Charges Details report provides details about AMS billing charges for linked accounts and the respective AWS services, including:

• AMS service-level charges, uplift percentages, account-level AMS service tiers, and AMS fees
• Linked accounts and AWS usage charges

• Billing Month: The month and year of the service billed
• Payer Account ID: The 12-digit ID identifying the account that is responsible for paying the AMS charges
• Linked Account ID: The 12-digit ID identifying the AMS account that consumes services that generate expenses
• AWS Service Name: The AWS service that was used
• AWS Charges: The AWS charges for the AWS service listed in AWS Service Name
• Pricing Plan: The name of the pricing plan associated with the linked account
• Uplift Proportion: The uplift percentage (as a decimal V.WXYZ) based on pricing_plan, SLA, and AWS service
• Adjusted AWS Charges: AWS usage adjusted for AMS
• Uplifted AWS Charges: The percentage of AWS charges to be charged for AMS; adjusted_aws_charges * uplift_percent
• Instances EC2 RDS Spend: Spend on EC2 and RDS instances
• AMS Charges: Total AMS charges for the product; uplifted_aws_charges + instance_ec2_rds_spend + uplifted_ris + uplifted_sp
• Prorated Minimum Fee: The amount we charge to meet the contractual minimum
• Minimum Fee: AMS Minimum Fees (if applicable)
• Linked Account Total AMS Charges: Sum of all charges for the linked account
• Payer Account Total AMS Charges: Sum of all charges for the payer account

Trusted Remediator reports

Available reports
• Trusted Remediator Remediation Summary report
• Trusted Remediator Configuration Summary report
• Trusted Advisor Check Summary report

Trusted Remediator Remediation Summary report

The Trusted Remediator Remediation Summary report provides information about the remediations that occurred during previous remediation cycles. To customize the report, specify the number of weeks based on your remediation schedule; the default is 1 week.

• Date: The date that the data was collected
• Account ID: The AWS account ID that the resource belongs to
• Account Name: The AWS account name
• Check Category: The AWS Trusted Advisor check category
• Check Name: The name of the remediated Trusted Advisor check
• Check ID: The ID of the remediated Trusted Advisor check
• Execution Mode: The execution mode that was configured for the specific Trusted Advisor check
• OpsItem ID: The ID of the OpsItem created by Trusted Advisor for remediation
• OpsItem Status: The status of the OpsItem created by Trusted Advisor at the time of reporting
• Resource ID: The ARN of the resource created for remediation

Trusted Remediator Configuration Summary report
The Trusted Remediator Configuration Summary report provides information about the current Trusted Remediator remediation configurations for each Trusted Advisor check.

• Date: The date that the data was collected
• Account ID: The AWS account ID that the configuration applies to
• Account Name: The AWS account name
• Check Category: The AWS Trusted Advisor check category
• Check Name: The name of the remediated Trusted Advisor check that the configuration applies to
• Check ID: The ID of the remediated Trusted Advisor check that the configuration applies to
• Execution Mode: The execution mode that was configured for the specific Trusted Advisor check
• Override to Automated: The tag pattern, if configured, to override the execution mode to Automated
• Override to Manual: The tag pattern, if configured, to override the execution mode to Manual

Trusted Advisor Check Summary report

The Trusted Advisor Check Summary report provides information about the current Trusted Advisor checks. This report collects data after each weekly remediation schedule. To customize the report, specify the number of weeks based on your remediation cycle; the default is 1 week.

• Date: The date that the data was collected
• Account ID: The AWS account ID that the configuration applies to
• Customer Name: The AWS account name
• Check Category: The AWS Trusted Advisor check category
• Check Name: The name of the remediated Trusted Advisor check that the configuration applies to
• Check ID: The ID of the remediated Trusted Advisor check that the configuration applies to
• Status: The alert status of the check. Possible statuses are ok (green), warning (yellow), error (red), or not_available
• Resources Flagged: The number of AWS resources that were flagged (listed) by the Trusted Advisor check
• Resources Ignored: The number of AWS resources that were ignored by Trusted Advisor because you marked them as suppressed
• Resources in critical state: The number of resources in critical state
• Resources in warning state: The number of resources in warning state

Self-service reports

AWS Managed Services (AMS) self-service reports (SSR) is a feature that collects data from various native AWS services and provides access to reports on major AMS offerings. SSR provides information that you can use to support operations, configuration management, asset management, security management, and compliance.

Use SSR to access the reports from the AMS console, and the report datasets through Amazon S3 buckets (one bucket per account). You can plug the data into your favorite business intelligence (BI) tool to customize the reports based on your unique needs. AMS creates this S3 bucket (S3 bucket name: ams-reporting-data-a<Account_ID>) in your primary AWS Region, and the data is shared from the AMS control plane hosted in the us-east-1 Region.
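As a rough illustration of pulling these datasets into a BI workflow, the following sketch lists and downloads report objects from the per-account SSR bucket with boto3, then loads one into pandas. The object key shown is hypothetical; list the bucket to discover the actual prefixes and file formats in your account:

import boto3
import pandas as pd

account_id = "111122223333"  # your AMS account ID
bucket = f"ams-reporting-data-a{account_id}"  # bucket name per the SSR documentation

s3 = boto3.client("s3")

# Discover the available report prefixes (the key layout is account-specific).
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        print(obj["Key"])

# Download one dataset object and load it for analysis (hypothetical key).
s3.download_file(bucket, "patch/patch_details.csv", "/tmp/patch_details.csv")
df = pd.read_csv("/tmp/patch_details.csv")
print(df.head())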
Important
To access this feature, you must have one of the following roles:
• Multi-Account Landing Zone: AWSManagedServicesReadOnlyRole
• Single-Account Landing Zone: Customer_ReadOnly_Role

Important
Using custom keys with AWS Glue: to encrypt your AWS Glue metadata with a customer-managed KMS key, you must perform the following additional steps to allow AMS to aggregate data from the account:
1. Open the AWS Key Management Service console at https://console.aws.amazon.com/kms, and then choose Customer Managed Keys.
2. Select the key ID that you plan to use to encrypt the AWS Glue metadata.
3. Choose the Aliases tab, and then choose Create alias.
4. In the text box, enter AmsReportingFlywheelCustomKey, and then choose Create alias.

Topics
• Patch report (daily)
• Backup report (daily)
• Incident report (weekly)
• Billing report (monthly)
• Aggregated reports
• AMS self-service reports dashboards
• Data retention policy
• Offboard from SSR

Patch report (daily)

Available reports
• Instance details summary
• Patch details
• Instances that missed patches

Instance details summary

This is an informational report that helps identify all the instances onboarded to Patch Orchestrator (PO), account status, instance details, maintenance window coverage, maintenance window execution time, stack details, and platform type. This dataset provides:

• Data on the Production and Non-Production instances of an account. The Production or Non-Production stage is derived from the account name, not from the instance tags.
• Data on the distribution of instances by platform type. The 'N/A' platform type occurs when AWS Systems Manager (SSM) can't get the platform information.
• Data on the distribution of the state of instances: the number of instances running, stopped, or terminating.

• Report Datetime (dataset_datetime): The date and time the report was generated
• Account Id (aws_account_id): AWS Account ID to which the instance ID belongs
• Admin Account Id (aws_admin_account_id): Trusted AWS Organizations account enabled by you
• Account Name (account_name): AWS account name
• Production Account (prod_account): Identifier of AMS prod or non-prod accounts, depending on whether the account name includes the value 'PROD' or 'NONPROD'
• Account Status (account_status): AMS account status
• account_sla: AMS account service commitment
• Landing Zone (malz_flag): Flag for MALZ-related account
• Account Type (malz_role): MALZ role
• Access Restrictions (access_restrictions): Regions to which access is restricted
• Instance Id (instance_id): ID of EC2 instance
• Instance Name (instance_name): Name of EC2 instance
• Instance Platform Type (instance_platform_type): Operating system (OS) type
• Instance Platform Name (instance_platform_name): Operating system (OS) name
• Stack Name (instance_stack_name): Name of the stack that contains the instance
• Stack Type (instance_stack_type): AMS stack (AMS infrastructure within the customer account) or Customer stack (AMS managed infrastructure that supports customer applications)
• Auto Scaling Group Name (instance_asg_name): Name of the Auto Scaling group (ASG) that contains the instance
• Instance Patch Group (instance_patch_group): Patch group name used to group instances together and apply the same maintenance window
• Instance Patch Group Type (instance_patch_group_type): Patch group type
• Instance State (instance_state): State within the EC2 instance lifecycle
• Maintenance Window Coverage (mw_covered_flag): If an instance has at least one enabled maintenance window with a future execution date, then it's considered covered; otherwise, not covered
• Maintenance Window Execution Datetime (earliest_window_execution_time): Next time the maintenance window is expected to execute

Patch details

This report provides patch details and maintenance window coverage of various instances. This report provides:

• Data on patch groups and their types.
• Data on maintenance windows: duration, cutoff, future dates of maintenance window executions (schedule), and the instances impacted in each window.
• Data on all the operating systems under the account and the number of instances on which each operating system is installed.

• Report Datetime (dataset_datetime): The date and time the report was generated
• Account Id (aws_account_id): AWS Account ID to which the instance ID belongs
• Account Name (account_name): AWS account name
• Instance Id (instance_id): ID of EC2 instance
• Instance Name (instance_name): Name of EC2 instance
• Production Account (prod_account): Identifier of AMS prod or non-prod accounts, depending on whether the account name includes the value 'PROD' or 'NONPROD'
• Account Status (account_status): AMS account status
• account_sla: AMS account service tier
• Instance Platform Type (instance_platform_type): Operating system (OS) type
• Instance Platform Name (instance_platform_name): Operating system (OS) name
• Stack Type (instance_stack_type): AMS stack (AMS infrastructure within the customer account) or Customer stack (AMS managed infrastructure that supports customer applications)
• Instance Patch Group Type (instance_patch_group_type): DEFAULT: default patch group with the default maintenance window, determined by the AMSDefaultPatchGroup:True tag on the instance; CUSTOMER: customer-created patch group; NOT_ASSIGNED: no patch group assigned
• Instance Patch Group (instance_patch_group): Patch group name used to group instances together and apply the same maintenance window
• Instance State (instance_state): State within the EC2 instance lifecycle
• Maintenance Window Id (window_id): Maintenance window ID
• Maintenance Window State (window_state): Maintenance window state
• Maintenance Window Type (window_type): Maintenance window type
• Maintenance Window Next Execution Datetime (window_next_execution_time): Next time the maintenance window is expected to execute
• Last Execution Maintenance Window (last_execution_window): The latest time the maintenance window was executed
• window_next_exec_yyyy: Year part of window_next_execution_time
• window_next_exec_mm: Month part of window_next_execution_time
• window_next_exec_D: Day part of window_next_execution_time
• window_next_exec_HHMI: Hour:Minute part of window_next_execution_time
• Maintenance Window Duration (hrs) (window_duration): The duration of the maintenance window in hours
• Maintenance Window Coverage (mw_covered_flag): If an instance has at least one enabled maintenance window with a future execution date, then it's considered covered; otherwise, not covered
• Patch Baseline Id (patch_baseline_id): Patch baseline currently attached to the instance
• Patch Status (patch_status): Overall patch compliance status. If there is at least one missing patch, the instance is considered noncompliant; otherwise, compliant
• Compliant - Critical (compliant_critical): Count of compliant patches with "critical" severity
• Compliant - High (compliant_high): Count of compliant patches with "high" severity
• Compliant - Medium (compliant_medium): Count of compliant patches with "medium" severity
• Compliant - Low (compliant_low): Count of compliant patches with "low" severity
• Compliant - Informational (compliant_informational): Count of compliant patches with "informational" severity
• Compliant - Unspecified (compliant_unspecified): Count of compliant patches with "unspecified" severity
• Compliant - Total (compliant_total): Count of compliant patches (all severities)
• Noncompliant - Critical (noncompliant_critical): Count of noncompliant patches with "critical" severity
• Noncompliant - High (noncompliant_high): Count of noncompliant patches with "high" severity
• Noncompliant - Medium (noncompliant_medium): Count of noncompliant patches with "medium" severity
• Noncompliant - Low (noncompliant_low): Count of noncompliant patches with "low" severity
• Noncompliant - Informational (noncompliant_informational): Count of noncompliant patches with "informational" severity
• Noncompliant - Unspecified (noncompliant_unspecified): Count of noncompliant patches with "unspecified" severity
• Noncompliant - Total (noncompliant_total): Count of noncompliant patches (all severities)

Instances that missed patches

This report provides details on instances that missed patches during the last maintenance window execution. This report provides:

• Data on missing patches at the patch ID level.
• Data on all the instances that have at least one missing patch, and attributes such as patch severity, unpatched days, range, and release date of the patch.

• Report Datetime (dataset_datetime): The date and time the report was generated
• Account Id (aws_account_id): AWS Account ID that the instance ID belongs to
• Account Name (account_name): AWS account name
• Customer Name Parent (customer_name_parent)
• Customer Name (customer_name)
• Production Account (prod_account): Identifier of AMS prod or non-prod accounts, depending on whether the account name includes the value 'PROD' or 'NONPROD'
• Account Status (account_status): AMS account status
• Account Type (account_type)
• account_sla: AMS account service tier
• Instance Id (instance_id): ID of your EC2 instance
• Instance Name (instance_name): Name of your EC2 instance
• Instance Platform Type (instance_platform_type): Operating system (OS) type
• Instance State (instance_state): State within the EC2 instance lifecycle
• Patch Id (patch_id): ID of the released patch
• Patch Severity (patch_sev): Severity of the patch, per the publisher
• Patch Classification (patch_class): Classification of the patch, per the patch publisher
• Patch Release Datetime (UTC) (release_dt_utc): Release date of the patch, per the publisher
• Patch Install State (install_state): Install state of the patch on the instance, per SSM
• Days Unpatched (days_unpatched): Number of days the instance has been unpatched since the last SSM scan
• Days Unpatched Range (days_unpatched_bucket): Bucketing of days unpatched

Backup report (daily)

The backup report covers primary and secondary (when applicable) Regions. It covers the status of backups (success/failure) and data on snapshots taken. This report provides:

• Backup status
• Number of snapshots taken
• Recovery point
• Backup plan and vault information

• Report Datetime (dataset_datetime): The date and time the report was generated
• Account Id (aws_account_id): AWS Account ID to which the instance ID belongs
• Admin Account Id (aws_admin_account_id): Trusted AWS Organizations account enabled by you
• Account Name (account_name): AWS account name
• Account SLA (account_sla): AMS account service commitment
• malz_flag: Flag for MALZ-related account
• malz_role: MALZ role
• access_restrictions: Regions to which access is restricted
• Resource ARN (resource_arn): The Amazon resource name
• Resource Id (resource_id): The unique resource identifier
• Resource Region (resource_region): The resource's primary (and secondary, when applicable) Regions
• Resource Type (resource_type): The type of resource
• Recovery Point ARN (recovery_point_arn): The ARN of the recovery point
• Recovery Point Id (recovery_point_id): The unique identifier of the recovery point
• Backup snapshot scheduled start datetime (start_by_dt_utc): Timestamp when the snapshot is scheduled to begin
• Backup snapshot actual start datetime (creation_dt_utc): Timestamp when the snapshot actually begins
• Backup snapshot completion datetime (completion_dt_utc): Timestamp when the snapshot is completed
• Backup snapshot expiration datetime (expiration_dt_utc): Timestamp when the snapshot expires
• Backup Job status (backup_job_status): State of the snapshot
• Backup Type (backup_type): Type of backup
• Backup Job Id (backup_job_id): The unique identifier of the backup job
• Backup Size In Bytes (backup_size_in_bytes): The backup size in bytes
• Backup Plan ARN (backup_plan_arn): The backup plan ARN
• Backup Plan Id (backup_plan_id): Backup plan unique identifier
• Backup Plan Name (backup_plan_name): The backup plan name
• Backup Plan Version (backup_plan_version): The backup plan version
• Backup Rule Id (backup_rule_id): The backup rule ID
• Backup Vault ARN (backup_vault_arn): Backup vault ARN
• Backup Vault Name (backup_vault_name): The backup vault name
• IAM Role ARN (iam_role_arn): The IAM role ARN
• Recovery Point Status (recovery_point_status): Recovery point status
• Recovery Point Delete After Days (recovery_point_delete_after_days): Recovery point delete-after days
• Recovery point move to cold storage after days (recovery_point_move_to_cold_storage_after_days): Number of days after the completion date when the backup snapshot is moved to cold storage
• Recovery Point Encryption Status (recovery_point_is_encrypted): Recovery point encryption status
• Recovery Point Encryption Key ARN (recovery_point_encryption_key_arn): Recovery point encryption key ARN
• Volume State (volume_state): Volume state
• Instance Id (instance_id): Unique instance ID
• Instance State (instance_state): Instance state
• Stack Id (stack_id): CloudFormation stack unique identifier
• Stack Name (stack_name): Stack name
• Tag: AMS Default Patch Group (tag_ams_default_patch_group): Tag value: AMS Default Patch Group
• Tag: App Id (tag_app_id): Tag value: App ID
• Tag: App Name (tag_app_name): Tag value: App Name
• Tag: Backup (tag_backup): Tag value: Backup
• Tag: Compliance Framework (tag_compliance_framework): Tag value: Compliance Framework
• Tag: Cost Center (tag_cost_center): Tag value: Cost Center
• Tag: Customer (tag_customer): Tag value: Customer
• Tag: Data Classification (tag_data_classification): Tag value: Data Classification
• Tag: Environment Type (tag_environment_type): Tag value: Environment Type
• Tag: Hours of Operation (tag_hours_of_operation): Tag value: Hours of Operation
• Tag: Owner Team (tag_owner_team): Tag value: Owner Team
• Tag: Owner Team Email (tag_owner_team_email): Tag value: Owner Team Email
• Tag: Patch Group (tag_patch_group): Tag value: Patch Group
• Tag: Support Priority (tag_support_priority): Tag value: Support Priority

Incident report (weekly)

This report provides the aggregated list of incidents, along with their priority, severity, and latest status, including:

• Data on support cases categorized as incidents on the managed account
• Incident information required to visualize the incident metrics for the managed account
• Data on incident categories and the remediation status of every incident

Both visualization and data are available for the weekly incident report:

• Visualization can be accessed through the AMS console, on the account's Reports page.
• The dataset, with the following schema, can be accessed through the S3 bucket in the managed account.
• Use the provided date fields to filter incidents based on the month, quarter, week, and/or day that the incident was created or resolved.

• Report Datetime (dataset_datetime): The date and time the report was generated
• Account Id (aws_account_id): AWS Account ID to which the incident belongs
• Admin Account Id (aws_admin_account_id): Trusted AWS Organizations account enabled by you
• Account Name (account_name): AWS account name
• Case Id (case_id): The ID of the incident
• Created Month (created_month): The month when the incident was created
• Priority (priority): The priority of the incident
• Severity (severity): The severity of the incident
• Status (status): The status of the incident
• Category (yuma_category): The category of the incident
• Created Day (created_day): The day when the incident was created, in YYYY-MM-DD format
• Created Week (created_wk): The week when the incident was created, in YYYY-WW format. Sunday to Saturday is counted as the beginning and end of a week. Week is from 01 to 52. Week 01 is always the week that contains the first day of the year. For example, 2023-12-31 and
2024-01-01 are in week 2024-01
• Created Quarter (created_qtr): The quarter when the incident was created, in YYYY-Q format. 01/01 to 03/31 is defined as Q1, and so on
• Resolved Day (resolved_day): The day when the incident was resolved, in YYYY-MM-DD format
• Resolved Week (resolved_wk): The week when the incident was resolved, in YYYY-WW format. Sunday to Saturday is counted as the beginning and end of a week. Week is from 01 to 52. Week 01 is always the week that contains the first day of the year. For example, 2023-12-31 and 2024-01-01 are in week 2024-01
• Resolved Month (resolved_month): The month when the incident was resolved, in YYYY-MM format
• Resolved Quarter (resolved_qtr): The quarter when the incident was resolved, in YYYY-Q format. 01/01 to 03/31 is defined as Q1, and so on
• Grouping rule (grouping_rule): The grouping rule that applies to the incident. Either "no_grouping" or "instance_grouping"
• Instance IDs (instance_ids): The instances associated with the incident
• Number of alerts (number_of_alerts): The number of alerts associated with that incident. If you have grouping enabled, then this number can be greater than 1. If you do not have grouping enabled, then it is always 1
• Created at (created_at): The timestamp when the incident was created
• Alarm ARNs (alarm_arns): The Amazon Resource Names ("arn") of the alarms associated with your incident
• Related alarms (related_alarms): The human-readable names of all the alarms associated with the incident

Billing report (monthly)

Billing charges details

This report provides details about AMS billing charges for linked accounts and the respective AWS services. This report provides:

• Data on AMS service-level charges, uplift percentages, account-level AMS service tiers, and AMS fees.
• Data on linked accounts and AWS usage charges.

Important
The Monthly Billing report is only available in your Management Payer Account (MPA) or your defined Charge Account. These are the accounts where your AMS monthly bill is sent. If you're unable to locate these accounts, contact your Cloud Service Delivery Manager (CSDM) for assistance.
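The billing dataset that follows includes several derived charge fields. As a plain illustration of how they combine, using the formulas given in the field definitions below (all input values here are hypothetical):

# Illustration of the derived billing fields, per the dataset definitions below.
# All input values are hypothetical.
adjusted_aws_usage = 950.00      # AWS usage adjusted for AMS
uplift_percent = 0.0500          # uplift proportion (decimal V.WXYZ)
instances_ec2_rds_spend = 120.00 # spend on EC2 and RDS instances
ris_charges = 200.00             # Reserved Instance charges
sp_charges = 80.00               # Savings Plans charges

uplifted_aws_charges = adjusted_aws_usage * uplift_percent  # 47.50
uplifted_ris = ris_charges * uplift_percent                 # 10.00
uplifted_sp = sp_charges * uplift_percent                   # 4.00

# Total AMS charges for the product:
ams_charges = (uplifted_aws_charges + instances_ec2_rds_spend
               + uplifted_ris + uplifted_sp)
print(f"{ams_charges:.2f}")  # 181.50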
• Billing Date (date): The month and year of the service billed
• Payer Account Id (payer_account_id): The 12-digit ID identifying the account responsible for paying the AMS charges
• Linked Account Id (linked_account_id): The 12-digit ID identifying the AMS account that consumes services that generate expenses
• AWS Service Name (product_name): The AWS service that was used
• AWS Charges (aws_charges): The AWS charges for the AWS service named in AWS Service Name
• Pricing Plan (pricing_plan): The pricing plan associated with the linked account
• AMS Service Group (tier_uplifting_groups): AMS service group code that determines the uplift percentage
• Uplift Proportion (uplift_percent): The uplift percentage (as a decimal V.WXYZ) based on pricing_plan, SLA, and AWS service
• Adjusted AWS Charges (adjusted_aws_usage): AWS usage adjusted for AMS
• Uplifted AWS Charges (uplifted_aws_charges): The percentage of AWS charges to be charged for AMS; adjusted_aws_charges * uplift_percent
• Instances EC2 RDS Spend (instances_ec2_rds_spend): Spend on EC2 and RDS instances
• Reserved Instance Charges (ris_charges): Reserved Instance charges
• Uplifted Reserved Instance Charges (uplifted_ris): The percentage of Reserved Instance charges to be charged for AMS; ris_charges * uplift_percent
• Savings Plan Charges (sp_charges): Savings Plans usage charges
• Uplifted Savings Plan Charges (uplifted_sp): The percentage of Savings Plans charges to be charged for AMS; sp_charges * uplift_percent
• AMS Charges (ams_charges): Total AMS charges for the product; uplifted_aws_charges + instance_ec2_rds_spend + uplifted_ris + uplifted_sp
• Prorated Minimum Fee (prorated_minimum): The amount we charge to meet the
contractual minimum
• Linked Account Total AMS Charges (linked_account_total_ams_charges): Sum of all charges for the linked account
• Payer Account Total AMS Charges (payer_account_total_ams_charges): Sum of all charges for the payer account
• Minimum Fee (minimum_fees): AMS Minimum Fees (if applicable)
• Reserved Instance and Savings Plan discount (adj_ri_sp_charges): RI/SP discount to be applied against RI/SP charges (applicable under certain circumstances)

Aggregated reports

Aggregated self-service reporting (SSR) provides a view of existing self-service reports aggregated at the organization level, across accounts. This gives you visibility into key operational metrics, such as patch compliance, backup coverage, and incidents, across all the accounts under AMS management within your AWS Organizations organization. Aggregated SSR is available in all commercial AWS Regions where AWS Managed Services is available. For a full list of available Regions, see the Region table.

Enable aggregated reports

You must manage aggregated SSR from an AWS Organizations management account. The management account is the AWS account that you used to create your organization.

To enable aggregated SSR for an AWS Organizations management account that's onboarded to AMS, access your AMS console and navigate to Reports. Select Organization Access in the top-right-hand corner to open the AWS Managed Services Console: Organization View pane. From this pane, you can manage the aggregated SSR functionality.

AWS Organizations management accounts that aren't onboarded to AMS don't have access to the AMS console. To enable aggregated SSR for an AWS Organizations management account that is not onboarded to AMS, first authenticate to your AWS account, then navigate to the AWS console and search for Managed Services. This opens the AMS Marketing page. On this page, select the Organization Access link in the navigation bar to open the AWS Managed Services Console: Organization View, where you can manage the aggregated SSR functionality.

The first time you access the AWS Managed Services Console: Organization View, complete the following steps:
1. If you have not already set up AWS Organizations, choose Enable AWS Organizations from your console. For additional information on setting up AWS Organizations, see the AWS Organizations User Guide. You can skip this step if you already use AWS Organizations.
2. To enable the aggregated self-service reporting service, select Enable trusted access on the console.
3. (Optional) Register a delegated administrator to have read access for the organizational view.

View aggregated reports as a delegated administrator

A delegated administrator is the account that you choose to have read access to the aggregated reports. The delegated administrator must be an account onboarded to AMS, and it is the only account that has read access to the aggregated reports.
To choose a delegated administrator, enter the account ID in Step 3 on the AWS Managed Services Console: Organization View. You can have only one delegated administrator account registered at a time. Note that the delegated administrator account must be an AMS-managed account.

To update a delegated administrator account, navigate to the AWS Managed Services Console: Organization View and select Remove the Delegated Administrator. The console then prompts you to enter a new account ID to register as the delegated administrator.

Read aggregated reports

If you don't register a delegated administrator, and your AWS Organizations management account is onboarded to AMS, then the AWS Organizations management account gets read access to the aggregated reports by default. If the AWS Organizations management account is not managed by AMS, then you must choose a delegated administrator account to have read access to the aggregated reports. At any time, only a single account onboarded to AMS has read access to the aggregated reports: either the AWS Organizations management account or the registered delegated
administrator. All other member accounts within your organization (and onboarded to AMS) still have access only to single-account reports for each individual account.

After you enable aggregated SSR, navigate to your Reports. All your existing self-service reports are listed in this section, and a blue tag indicates that they have been aggregated. Note that you must access the AMS console from the account that you chose to have read access to the aggregated reports: either the AWS Organizations management account or the delegated administrator account. After you enable aggregated SSR, aggregated reports are available from the next reporting cycle onward.

Disable aggregated reports

To disable aggregated SSR, open the AWS Managed Services Console: Organization View and select Disable trusted access. After you disable trusted access for aggregated SSR, your AMS self-service reports stop being aggregated at the organization level, across accounts. Deactivation takes effect from the next reporting cycle onward, so there is a delay before the reports in your AMS console appear as single-account reports again.

AMS self-service reports dashboards

AMS self-service reports offers two dashboards: the Resource Tagger dashboard and the Security Config Rules dashboard.

Resource Tagger dashboard

The AMS Resource Tagger dashboard provides detailed information about the resources supported by Resource Tagger, as well as the current status of the tags that Resource Tagger is configured to apply to those resources.

Resource Tagger coverage by resource type

This dataset consists of a list of resources that have tags managed by Resource Tagger. Resource coverage by resource type is visualized as four line charts that describe the following metrics:

• Resource Count: The total number of resources in the Region, by resource type.
• Resources Missing Managed Tags: The total number of resources in the Region, by resource type, that require managed tags but aren't tagged by Resource Tagger.
• Unmanaged Resources: The total number of resources in the Region, by resource type, that don't have managed tags applied to them by Resource Tagger. This usually means that these resources are not matched by any Resource Tagger configurations, or are explicitly excluded from configurations.
• Managed Resources: Counterpart to the Unmanaged Resources metric (Resource Count - Unmanaged Resources).

The following table lists the data provided by this report:

• Report Datetime (dataset_datetime): The date and time the report was generated (UTC time)
• AWS account ID (aws_account_id): AWS account ID
• Admin Account Id (aws_admin_account_id): Trusted AWS Organizations account enabled by you
• Region (region): AWS Region
• Resource Type (resource_type): This field identifies the type of resource. Only resource types supported by Resource Tagger are included
• Resource Count (resource_count): Number of resources (of the specified resource type) deployed in this Region
• ResourcesMissingManagedTags (resource_missing_managed_tags_count): Number of resources (of the specified resource type) that require managed tags, according to the configuration profiles, but have not yet been tagged by Resource Tagger
• UnmanagedResources (unmanaged_resource_count): Number of resources (of the specified resource type) with no managed tags applied by Resource Tagger. Typically, these resources didn't match any Resource Tagger configuration block, or are explicitly excluded from configuration blocks

Resource Tagger configuration rule compliance

This dataset consists of a list of resources in an AWS Region, by resource type, that have a certain configuration profile applied to them. It's visualized as a line chart. The following table lists the data provided by this report:

• Report Datetime (dataset_datetime): The date and time the report was generated (UTC time)
• AWS account ID (aws_account_id): AWS account ID
• Admin Account Id (aws_admin_account_id): Trusted AWS Organizations account enabled by you
• Region (region): AWS Region
• Resource Type (resource_type): This field identifies the type of resource. Only resource types supported by Resource Tagger are included
• Configuration Profile ID (configuration_profile_id): The ID of the Resource Tagger configuration profile. A configuration profile is used to define the policies and rules used to tag your resources
• MatchingResourceCount (resource_count): Number of resources (of the specified resource type) that match the Resource Tagger configuration profile ID. For a resource to match the configuration profile, the profile must be enabled and the resource must match the profile's rule

Resource Tagger non-compliant resources

This dataset consists of a list of resources that are non-compliant for a single Resource Tagger configuration. This data is a daily snapshot of resource compliance, showing the
state of customer resources at the time these reports are delivered to customer accounts (there isn't a historical view). It's visualized as a pivot table consisting of resources that are non-compliant for a given configuration. The following table lists the data provided by this report:

• Report Datetime (dataset_datetime): The date and time the report was generated (UTC time)
• AWS account ID (aws_account_id): AWS account ID
• Admin Account Id (aws_admin_account_id): Trusted AWS Organizations account enabled by you
• Region (region): AWS Region
• Resource Type (resource_type): This field identifies the type of resource. Only resource types supported by Resource Tagger are included
• Resource ID (resource_id): The unique identifier for resources supported by Resource Tagger
• Coverage State (coverage_state): This field indicates if the resource is tagged as configured by the Resource Tagger configuration ID
• Configuration Profile ID (configuration_profile_id): The ID of the Resource Tagger configuration profile. A configuration profile is used to define the policies and rules used to tag your resources

Security Config Rules dashboard

The Security Config Rules dashboard provides an in-depth look at resource and AWS Config rule compliance of AMS accounts. You can filter the report by rule severity to prioritize the most critical findings. The following table lists the data provided by this report:

• AWS Account ID (aws_account_id): The account ID tied to the related resources
• Admin Account Id (aws_admin_account_id): Trusted AWS Organizations account enabled by you
• Report Date (report_datetime): The date and time the report was generated
• Customer Name (customer_name): The customer name
• Account Name (account_name): The name associated with the account ID
• Resource ID (resource_id): An identifier for a resource
• Resource Region (resource_region): The AWS Region where the resource is located
• Resource Type (resource_type): The AWS service or resource type
• Resource Name (resource_name): The name for the resource
• Resource AMS Flag (resource_ams_flag): If the resource is AMS-owned, then this flag is set to TRUE. If the resource is customer-owned, then this flag is set to FALSE. If ownership is not known, then this flag is set to UNKNOWN
• Config Rule (config_rule): The non-customizable name for the config rule
• Config Rule Description (config_rule_description): A description of the config rule
• Source Identifier (source_identifier): A unique identifier for a managed config rule; custom config rules have no identifier
• Compliance Flag (compliance_flag): Shows if the resources are compliant or non-compliant with the config rules
• Rule Type (rule_type): Indicates if the rule is predefined or custom built
• Exception Flag (exception_flag): The resource exception flag shows the risk acceptance against a noncompliant resource. If the resource exception flag is TRUE for a resource, then the resource is exempted. If the exception flag is NULL, then the resource is not exempted
• Date (cal_dt): The evaluation date of the rule
• Remediation Description (remediation_description): A description of how to remediate rule compliance
• Severity (severity): Config rule severity indicates the impact of non-compliance
• Customer Action (customer_action): Action needed by you to remediate this rule
• Recommendation (recommendation): A description of what the config rule checks for
• Remediation Category (remediation_category): The default actions that AMS takes
when this rule becomes non-compliant

Data retention policy

AMS SSR has a data retention policy for each report. After the retention period, the data is cleared out and is no longer available.

• Instance Details Summary (Patch Orchestrator): 2 months in the SSR console; 2 years in the SSR S3 bucket
• Patch Details: 2 months in the SSR console; 2 years in the SSR S3 bucket
• Instances that missed patches during maintenance window execution: 2 months in the SSR console; 2 years in the SSR S3 bucket
• AMS Billing Charges Details: 2 years in the SSR console; 2 years in the SSR S3 bucket
• Daily Backup Report: 1 month in the SSR console; 2 years in the SSR S3 bucket
• Weekly Incident Report: 2 months in the SSR console; 2 years in the SSR S3 bucket
• Security Config Rules Dashboard: 3 months in the SSR console; 2 years in the SSR S3 bucket
• Resource Tagger dashboard: 1 year in the SSR console; 2 years in the SSR S3 bucket

Offboard from SSR

To offboard from the SSR service, create a service request (SR) through the AMS console. After you submit the SR, an AMS operations engineer helps you offboard from SSR. In the SR, provide the reason that you want to offboard.

To offboard an account and perform a resource cleanup, create an SR through the AMS console. After you submit the SR, an AMS operations engineer helps you delete the SSR Amazon S3 bucket.

If you offboard from AMS, you are automatically offboarded from the AMS SSR console, AMS automatically stops sending data to your account, and AMS deletes your SSR S3 bucket as part of the offboarding process.

Incident reports, service requests, and billing questions in AMS

Topics
• Incident management
• Service request management
• Billing questions

With AWS Managed Services (AMS), you can request help with operational issues and requests at any time through the AMS console. AMS operations engineers are available to respond to your incidents and service requests 24x7, with response time Service Level Agreements (SLAs) and Service Level Objectives (SLOs) dependent on your selected account Service Tier (Plus, Premium). AMS operations engineers proactively notify you of important alerts and questions using the same mechanisms.

Incident management

Topics
• What is incident management?
• Incident management service commitments
• Incident management examples

Incidents are AWS service performance issues that impact your managed environment, as determined by AWS Managed Services (AMS) or by you. Incidents identified by the AMS team are first received as "events": a change in system state captured by monitoring. If a configured threshold is breached, the event triggers an alarm, also called an alert. The AMS operations team determines whether the event is non-impacting, an incident (a service interruption or degradation), or a problem (the underlying root cause of one or more resolved incidents). The AMS team also receives incidents identified by you through the Support center, or programmatically using the AWS Support API with the service code sentinel-report-incident.

After your incident is received by the AMS operations team, it's reviewed to ensure that the incident is not better classified as a service request. If it should be classified as a service request, it's immediately reclassified, the AMS service request team takes over, and you are notified. If the incident can be resolved by the receiving operator, steps are taken immediately to resolve the incident.
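As noted above, incidents can also be submitted programmatically through the AWS Support API, using the service code sentinel-report-incident. A minimal boto3 sketch; the subject, severity, category, and email address shown are illustrative only:

import boto3

# The AWS Support API is served from us-east-1.
support = boto3.client("support", region_name="us-east-1")

# Submit an incident to AMS using the service code called out above.
response = support.create_case(
    subject="Connectivity loss in vpc-12345678",
    serviceCode="sentinel-report-incident",
    severityCode="high",
    categoryCode="other",  # hypothetical; list valid categories with describe_services
    communicationBody="Instances in the app subnet are unreachable since 09:40 UTC.",
    ccEmailAddresses=["ops-team@example.com"],
    issueType="technical",
)
print(response["caseId"])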
AMS operators consult internal documentation for a resolution and, if needed, escalate the incident to other support resources until the incident is resolved. To be kept informed at each step of the incident resolution process, be sure to fill in the CC Emails option, and, if you'll connect by federation, log in before following the link in the email that AMS sends. After it is resolved, the AMS operations team documents the incident and resolution for future use. If an incident resolution requires infrastructure changes, a security review might be needed. Infrastructure changes that might require a security review include those related to IAM, or resource-based policy, or risk approvals. Those types of incidents require an AMS Operations engineer to create an RFC before making the change, and your approval to that RFC is required. For example, should the incident resolution require the update of an IAM policy, there would be an AMS security review and then an AMS Operations engineer would create an RFC with the Management | Advanced stack components | Identity and Access Management (IAM) | Update entity or policy change type (ct-27tuth19k52b4) and wait for you to approve the RFC before proceeding. Note AMS now allows incident resolution that requires infrastructure changes to be made without the additional step of RFC approval. If the changes needed to resolve the incident do NOT require a security review (the change is not related to IAM, or resource-based policy, or risk approvals), AMS can make the changes
based on your approval received in the incident, without needing separate approval in an RFC.

For definitions of incident management terms, see AMS Key Terms. To understand the escalation path of incidents, see Getting help. For a description of the AMS response to incidents, see AMS incident response.

What is incident management?

Incident management is the process AMS uses to record, act on, communicate the progress of, and provide notification of active incidents.

The goal of the incident management process is to ensure that normal operation of your managed service is restored as quickly as possible, the business impact is minimized, and all concerned parties are kept informed.

Examples of incidents include (but are not restricted to) loss or degradation of network connectivity, a non-responsive process or API, or a scheduled task not being performed (for example, a failed backup).

The following graphic depicts the workflow of an incident reported by you to AMS.

The next graphic depicts the workflow of an incident reported by AMS to you.

Incident priority

Incidents created in the AWS Support center, console, or Support API (SAPI) have different classifications than incidents created in the AMS console.

• Low: Non-critical functions of your business service or application related to AWS or AMS resources are impacted.
• Medium: A business service or application related to AWS and/or AMS resources is moderately impacted and is functioning in a degraded state.
• High: Your business is significantly impacted. Critical functions of your application related to AWS and/or AMS resources are unavailable. Reserved for the most critical outages affecting production systems.

Note
The AWS Support Console offers five levels of incident priority that we translate to the three AMS levels.

Problem vs incident

When AMS believes that an incident reveals a larger defect or misconfiguration and could recur, it is considered a problem rather than just an incident. In such cases, AMS analyzes the problem and offers suggestions to resolve it.

Incident management service commitments

Event or action: Case 1: An event with known impact is generated; AMS opens an incident and informs you. Case 2: AMS contacts you to confirm the impact of the event, and you confirm that the event is an incident. Case 3: You notice an issue and submit an incident report.
Service commitment measurement: The clock for incident response and incident resolution starts when: Case 1: AMS creates an incident. Case 2: You confirm that the alert is an incident. Case 3: You submit an incident. Service commitments depend on the priority of the incident created.
Event or action: If you submit the incident, AMS sends a response to acknowledge it. If AMS creates the incident on your behalf, a separate incident response is not sent.
Service commitment measurement: Clock for incident response time stops when AMS sends the incident acknowledgement. Clock for incident resolution continues ticking. For incidents that AMS creates, the initial response time is the time of the creation of the initial incident notification to you.

Note
Time spent waiting for inputs from you is excluded from incident resolution time calculations.
Event or action: For the resources/services in question, AMS checks the health to verify that: • the AMS-detected event or customer-submitted incident qualifies as an incident, and • the incident is correctly prioritized. If an incident you submit is not correctly prioritized, AMS re-prioritizes it. If AMS changes an incident priority, a notification is sent to you along with the reasoning behind the priority change. In certain cases, an issue you submitted may not qualify as an incident, depending on the cause. In those cases, AMS closes the incident and sends you a notification explaining the reason why. Irrespective of the event categorization, AMS works with you to assist as needed. To understand the rules for incident categorization, see Incident priority.
Service commitment measurement: In case incident priority changes, the service commitment for the new priority is applicable; clock continues ticking. In cases when an incident is closed because it does not meet the definition of an incident, service commitments are not applicable; clock stops.

Event or action: AMS works on the incident to resolve it within the service commitment. In certain cases, if AMS determines that unavailable stack(s) or resource(s) cannot be resolved in a timely manner, AMS will offer Infrastructure Restore as an option for resolution. Infrastructure Restore involves re-deploying existing stack(s), based on the templates of the impacted stack(s), and initiating a data restore based on the last known restore point (EBS/RDS snapshot), unless otherwise specified by you. Ephemeral data on individual EC2 instances will be lost. If you do not authorize an Infrastructure Restore as recommended by AWS, you will not be eligible for a service credit for the associated Incident Resolution Time Service Commitment.
Service commitment measurement: Clock stops when: • AMS has restored all unavailable services or resources pertaining to that incident to an available state, or • an infrastructure restore is started.

Event or action: Occasionally, AMS needs clarification from, or activity by, you to keep incident resolution efforts moving forward, unless you have a pre-defined, approved action. As a result, there is communication between AMS and you in order to resolve incidents.
Service commitment measurement: Clock stops when: AMS is waiting for a response or action from you. Clock restarts when: AMS receives the response from you or the action AMS requires of you is completed.

Note
For a complete list of service commitments, download the AMS Service Level Agreement.

Incident management examples

Topics
• Incident testing
• Reporting incidents
• Monitoring and updating incidents
• Managing incidents with the AWS Support API
• Responding to AMS-generated incidents

The following examples describe using the AMS console to submit an incident. Once submitted, the AMS team works with you to resolve the incident per your Service Level Agreement (SLA).

Incident testing
When testing AMS incident submissions, we ask that you include in the subject text this flag: AMSTestNoOpsActionRequired. This flag lets AMS know that the incident submission is only for testing. When AMS operations engineers see that flag, they will not respond in any way to the incident submission.

Reporting incidents
Use the AMS console to report an incident. It's important to create a new incident for each new issue or question. When opening cases related to old inquiries, it's helpful to include the related case number so we can refer to previous correspondence.
Note
If case correspondence strays from the original issue, an AMS operator might ask you to report a new incident.

To report an incident using the AMS console:

1. From the left navigation, choose Incidents. The Incidents list opens. If your incident list is empty, the Clear filter option resets the filter to Any status. If you know you want to use phone or chat, click Create incident in Support Center to open the incident Create page in the Support Center Console, auto-populated with the AMS service type.

Important
• Phone calls initiated with Support are recorded, to improve response. If the call drops, you must call back through the Support Center case; AWS has no mechanism for calling you back.
• Phone and chat support is designed to help with support cases, incidents,
and service requests, not RFC or security issues.
• For RFC issues, use the correspondence option on the relevant RFC details page to reach an AMS engineer.
• For security issues, create a high-priority (P1 or P2) support case. The live chat feature is not for security events.

2. If you want to find an existing incident, select an incident status filter in the drop-down list:
• All incidents that are not yet resolved.
• A new incident that is not yet assigned.
• An incident that has been assigned.
• An incident that you reopened.
• An assigned, complicated incident.
• Incidents that require your feedback before the next step.
• Incidents to which you have recently submitted information.
• An incident that has concluded.
• All incidents in the account.

3. Choose Create. The Create an incident page opens.

4. Select a Priority:
• Low: Non-critical functions of your business service or application related to AWS/AMS resources are impacted.
• Medium: A business service or application related to AWS/AMS resources is moderately impacted and functioning in a degraded state.
• High: Your business is significantly impacted. Critical functions of your application related to AWS/AMS resources are unavailable. Reserved for the most critical outages affecting production systems.

5. Select a Category.

Note
If you are going to test incident functionality, then add the no-action flag (AMSTestNoOpsActionRequired) to your incident title.

6. Enter information for:
• Subject: A descriptive title for the incident report.
• CC emails: A list of email addresses for people you want informed about the incident report and resolution.
• Details: A comprehensive description of the incident, the systems impacted, and the expected outcome of the resolution. Answer the pre-set questions, or delete them and enter any relevant information.
To add an attachment, choose Add Attachment, browse to the attachment you want, and click Open. To delete the attachment, click the Delete icon.

7. Choose Submit. A details page opens with information on the incident, such as Type, Subject, Created, ID, and Status, and a Correspondence area that includes the description of the request you created. Click Reply to open a correspondence area and provide additional details or updates in status. Click Close Case when the incident has been resolved. Click Load More if there is more correspondence than will fit on one page. Don't forget to rate the communication!

Your incident displays on the Incidents list page.

YouTube Video: How do I raise an incident from the AWS Managed Services console?
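You can also open the same incident programmatically through the AWS Support API (described in Managing incidents with the AWS Support API, later in this section). The following is a minimal, illustrative sketch only; it assumes Python with boto3, credentials allowed to call the Support API, and a hypothetical categoryCode and CC address. The sentinel-report-incident service code and the AMSTestNoOpsActionRequired test flag are the ones described in this guide.

    import boto3

    # The AWS Support API is served from the us-east-1 endpoint.
    support = boto3.client("support", region_name="us-east-1")

    # "ams-test" is a placeholder categoryCode; list the valid values with
    # support.describe_services(serviceCodeList=["sentinel-report-incident"]).
    response = support.create_case(
        subject="AMSTestNoOpsActionRequired: connectivity loss in test VPC",  # test flag in the subject
        serviceCode="sentinel-report-incident",   # AMS Advanced incident service code
        severityCode="low",                       # corresponds to the AMS Low priority
        categoryCode="ams-test",                  # placeholder; look up valid values first
        communicationBody="Description of the impact, affected resources, and expected outcome.",
        ccEmailAddresses=["ops-team@example.com"],  # hypothetical; keeps your team on the correspondence
    )
    print("Created case:", response["caseId"])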
Monitoring and updating incidents
You can update, monitor, and review incident reports and service requests, both called cases, by using the AMS console, or programmatically using the Support API. For information on using the Support API, see the DescribeCases operation. To monitor a case, incident or service request, using the AMS console, follow these steps.

1. In the AMS console Incident reports or Service requests dashboard, browse to a case and choose the Subject to open a details page with current status and correspondences. When a reported incident or service request case is updated by the AMS operations team, you receive an email and a link to the incident in the AMS console so you can respond. You can't respond to incident correspondence by replying to the email.

Important
You must have entered an email address to receive notifications of state change for a service request or incident case. Notifications only go to the email address added to the case when it's created. The link in the notification email will not work unless you are using an email server on your AMS federated network. However, you can respond to the correspondence by going to your AMS console and using the case
details page.

2. If there are many cases in the list, you can use the Filter option:
• All open (default): Use this filter to see all cases that have not been resolved.
• Unassigned: Use if you've just submitted the case and have not received any notice that the case state has changed. Note, incidents and service request cases are addressed with different promptness depending on the submitted priority (incidents) or your service level agreement (service requests).
• Open: Use if you have received notice that the case is "Pending Amazon" action; this means that the case has been assigned but work has not yet begun.
• Reopened: Use if you have received notice that the case was reopened after having been resolved.
• Work in progress: Use if you have received notice that an operator has begun to work on the case.
• Pending customer action: Use if you have received an operator request for action on your part.
• Customer action completed: Use if you have received notice that your action on the case has been processed.
• Resolved: Use to view cases that you know have been resolved. Resolved cases are maintained in history for twelve months.
• Any status: Use this filter to see all cases, regardless of status.

3. To check the latest status, refresh the page.

4. If there are so many correspondences that they do not all appear on the page, choose Load More.

5. To provide an update to the case status, choose Reply, enter the new correspondence, and then choose Submit.

6. To close out the case after it has been resolved to your satisfaction, choose Close case. Be sure to rate the service through the 1-5 star rating to let AMS know how we're doing!

Managing incidents with the AWS Support API
The AWS Support API enables you to create incidents and add correspondence to them throughout investigations of your issues and interactions with AWS Support staff. The AWS Support API models much of the behavior of the AWS Support Center. For more details about how you can use this AWS support service, see Programming an AWS Support Case.

Note
When using the AWS Support API, or SAPI, for AMS Advanced incidents, use this service code: sentinel-report-incident.
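If you prefer to monitor cases programmatically, a minimal sketch of the DescribeCases call referenced above might look like the following. It assumes Python with boto3 and credentials permitted to call the Support API; the us-east-1 endpoint is an assumption based on where the Support API is typically served.

    import boto3

    support = boto3.client("support", region_name="us-east-1")

    # Page through all unresolved cases (incidents and service requests alike).
    paginator = support.get_paginator("describe_cases")
    for page in paginator.paginate(includeResolvedCases=False):
        for case in page["cases"]:
            print(case["displayId"], case["status"], case["subject"])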
Responding to AMS-generated incidents
AMS proactively monitors your resources; for more information, see Monitoring and event management. Sometimes AMS identifies and creates an incident case, most often to notify you of an incident. In the event that action is required on your part to resolve an incident, AMS sends a notification to the contact information you have provided for the account. You respond to this incident in the same way as you would any other incident. You would usually respond to incidents via the AMS console; in some cases, contact by email or phone is required.

Note
AMS sends communications to your primary email address on your AWS account; we recommend adding an alternate Operations contact email alias to facilitate the incident management process. This is covered during the AMS onboarding process and related onboarding documentation. If you have provided AMS with non-resource based contacts (that you informed your CSDM of) during onboarding, those contacts are used. For example, you could provide a list of contacts named "SecurityContacts" to your CSDMs/CAs to use for security-related incidents or notifications. Contact tags on your instances/resources are used for AMS-generated incidents, if you have provided your consent to CSDM for using tag information.

To learn more about this notification service, see Notifications.

Service request management

Topics
• When to use a service request
• How service request management works
• Testing a service request in AMS
• Creating a service request in AMS
• Monitoring and updating service requests in AMS
• Responding to an AMS-generated service requests

Service requests are communications to AMS created by you to ask for information or advice. A good example of a standard service request is for guidance or help in configuring an AMS service,
like Alarm Manager, Patch Orchestrator, and so forth. You can also receive service requests from AMS; these are called outbound service requests or service notifications. To see a list of your service requests, and outbound service requests (service notifications) sent to you by AMS, look on the Service requests page of the AMS console. To learn more about outbound service requests, see Responding to an AMS-generated service requests.

You create an AWS Managed Services (AMS) service request by using the AMS console or, programmatically, by using the Support API. For details on using the API, see Support API. For AMS, choose the sentinel-service-request service code. After your service request is received by the AMS operations team, it is prioritized according to your service level agreement. To be kept informed at each step of the service request resolution process, be sure to fill in the CC Emails option, and, if you will connect by federation, log in before following the link in the email AMS sends.

Use the AMS console Create Service Request page to perform the following tasks:
• Create and update a service request
• Get a list of, and detailed information about, all of your current service requests
• Narrow your search for service requests by dates and incident identifiers, including requests that have been resolved
• Add communications and file attachments to your requests, and add email recipients for case correspondence
• Resolve service requests
• Rate service request communications

When to use a service request
The following examples describe a service request:
• AMS or AWS general guidance
• Patch MW related questions
• Backup schedule related questions
• Questions about the functionality of AWS services

The following are examples of what shouldn't be raised in a service request:
• Access issues
• Patch failure
• Backup failure
• RFC failure or RFC that causes business interruption (use an incident for business interruption)
• RFC questions or additional input or change of RFC scope (use RFC bidirectional correspondence)

How service request management works
Service requests are handled by the on-call AMS operations team. After your service request is received by the AMS operations team, it's reviewed to ensure that the request is not more properly classified as an incident. If it should be classified as an incident, it's immediately reclassified, the AMS incident management team takes over, and you're notified. If the service request can be resolved with the submission of an RFC, the reviewing operator sends you an email requesting that you submit the appropriate RFC (details are provided). If the AMS operator can resolve the service request, steps to do so are taken immediately. For example, if the service request is for architecture advice or other information, then the operator refers you to the appropriate resources or answers the question directly. If the analysis of your service request identifies a bug or a feature request, then AMS sends you a notification through the service request. Since there is no ETA for feature requests or bug fixes, the original service request is closed. Contact your CSDM for follow-up questions related to the original service request.
If the service request is out of scope for AMS operations, the operator either sends the request to your cloud service delivery manager so they can communicate with you, or to the appropriate AWS operations team, along with an email to you, as to what steps are being taken. The service request is not resolved until you have indicated that you're satisfied with the outcome.

Note
We recommend providing a contact email, name, and phone number in all cases to facilitate communications.

Testing a service request in AMS
When testing AMS service requests, we ask that you include in the subject text this flag: AMSTestNoOpsActionRequired, to let AMS know that the service request is only for testing. When AMS operations engineers see that flag, they do not respond to the service request.
Creating a service request in AMS
To create a service request using the AWS Managed Services (AMS) console:

1. From the left navigation, choose Service requests. The Service requests list opens. If your service request list is empty, the Clear filter option resets the filter to Any status. If you know you want to use phone or chat, click Create service request in Support Center to open the service request Create page in the Support Center Console, auto-populated with the AMS service type.

Note
Phone calls initiated with Support Center are recorded, to improve response. If the call drops, you must call back through the Support Center case; AWS has no mechanism for calling you back.

Important
Phone and chat support is designed to help with support cases, incidents and service requests. For RFC issues, use the correspondence option on the relevant RFC details page, to reach an AMS engineer.

2. If you want to find an existing service request, select a service request status filter in the drop-down list:
• All service requests that are not yet resolved.
• A new service request that is not yet assigned.
• A service request that has been assigned.
• A service request that you reopened.
• An assigned, complicated, service request.
• Service requests that require your feedback before the next step.
• Service requests to which you have recently submitted information.
• A service request that has concluded.
• All service requests in the account.

3. Choose Create. The Create a service request page opens.

4. Select a Category.

Note
If you are going to test service request functionality, add the no-action flag, AMSTestNoOpsActionRequired, to your service request title.

5. Enter information for:
• Subject: This creates a link to the service request details on the list page.
• CC emails: These emails receive correspondence in addition to your default email contacts.
• Details: Provide as much information here as possible.
To add an attachment, choose Add Attachment, browse to the attachment you want, and click Open. To delete the attachment, click the Delete icon.

6. Choose Submit. A details page opens with information on the service request, such as Type, Subject, Created, ID, and Status, and a Correspondence area that includes the description of the request you created. Additionally, your service request displays on the Service Request list page. Use this when you have an alert but have not yet heard from AMS. Click Reply to open a correspondence area and provide additional details or status updates. Click Resolve Case when the service request has been resolved. Click Load More to view additional correspondences that do not fit on the initial page. Don't forget to rate the communication!

For billing-related queries, use the Other Category in the AMS console; the ct-1e1xtak34nx76 change type in the AMS CM API, or the IssueType=AMS in the Support API.

YouTube Video: How and when to raise service requests from AWS Console and what are its Service Level Objectives?
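A service request can also be created programmatically with the Support API, using the sentinel-service-request service code mentioned earlier in this section. The sketch below is illustrative only; it assumes Python with boto3, credentials permitted to call the Support API, and a hypothetical categoryCode and CC address.

    import boto3

    support = boto3.client("support", region_name="us-east-1")

    # "other" is a placeholder categoryCode; list the valid values with
    # support.describe_services(serviceCodeList=["sentinel-service-request"]).
    response = support.create_case(
        subject="Question about Patch Orchestrator maintenance windows",
        serviceCode="sentinel-service-request",   # AMS Advanced service request code
        severityCode="low",
        categoryCode="other",                     # placeholder; look up valid values first
        communicationBody="Please advise on the recommended backup schedule configuration.",
        ccEmailAddresses=["ops-team@example.com"],  # hypothetical CC address
    )
    print("Created service request:", response["caseId"])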
Monitoring and updating service requests in AMS
You can update, monitor, and review incident reports and service requests, both called cases, by using the AMS console, or programmatically using the Support API. For information on using the Support API, see the DescribeCases operation. To monitor a case, incident or service request, using the AMS console, follow these steps.

1. In the AMS console Incident reports or Service requests dashboard, browse to a case and choose the Subject to open a details page with current status and correspondences. When a reported incident or service request case is updated by the AMS operations team, you receive an email and a link to the incident in the AMS console so you can respond. You can't respond to incident correspondence
by replying to the email.

Important
You must have entered an email address to receive notifications of state change for a service request or incident case. Notifications only go to the email address added to the case when it's created. The link in the notification email will not work unless you are using an email server on your AMS federated network. However, you can respond to the correspondence by going to your AMS console and using the case details page.

2. If there are many cases in the list, you can use the Filter option:
• All open (default): Use this filter to see all cases that have not been resolved.
• Unassigned: Use if you've just submitted the case and have not received any notice that the case state has changed. Note, incidents and service request cases are addressed with different promptness depending on the submitted priority (incidents) or your service level agreement (service requests).
• Open: Use if you have received notice that the case is "Pending Amazon" action; this means that the case has been assigned but work has not yet begun.
• Reopened: Use if you have received notice that the case was reopened after having been resolved.
• Work in progress: Use if you have received notice that an operator has begun to work on the case.
• Pending customer action: Use if you have received an operator request for action on your part.
• Customer action completed: Use if you have received notice that your action on the case has been processed.
• Resolved: Use to view cases that you know have been resolved. Resolved cases are maintained in history for twelve months.
• Any status: Use this filter to see all cases, regardless of status.

3. To check the latest status, refresh the page.

4. If there are so many correspondences that they do not all appear on the page, choose Load More.

5. To provide an update to the case status, choose Reply, enter the new correspondence, and then choose Submit.

6. To close out the case after it has been resolved to your satisfaction, choose Close case. Be sure to rate the service through the 1-5 star rating to let AMS know how we're doing!
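The reply and close steps have Support API equivalents as well. The following sketch is illustrative only and assumes Python with boto3, credentials permitted to call the Support API, and a hypothetical case ID obtained from an earlier create_case or describe_cases call.

    import boto3

    support = boto3.client("support", region_name="us-east-1")
    case_id = "case-123456789012-example"  # hypothetical; use a real caseId from your account

    # Add a correspondence to the case, equivalent to choosing Reply in the console.
    support.add_communication_to_case(
        caseId=case_id,
        communicationBody="The requested action on our side is complete.",
    )

    # Close the case once you are satisfied, equivalent to Close case in the console.
    support.resolve_case(caseId=case_id)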
Responding to an AMS-generated service requests
AMS patch management sends service requests (also called service notifications) to you prior to the time of your set maintenance window; for more information, see AMS maintenance window. AMS also sends service notifications to you when there is a chance that your infrastructure will be impacted by an AWS service or when an EC2 instance in your account may need to be rebooted; for more information, see Service notifications.

Note
AMS sends communications to the primary email address on your AWS account that you have given; we recommend adding an alternate Operations contact email alias to facilitate the service request or service notification management process. Adding these emails is covered during the AMS onboarding process and related onboarding documentation.

Billing questions
To submit a billing-related question, complete the following steps:
1. Open the AWS Support Center at https://console.aws.amazon.com/support/home#/.
2. Choose Account & billing.
3. Choose Create case.
4. Choose Account and billing, and then follow the prompts to submit your case.

Operations On Demand
Operations on Demand (OOD) is an AWS Managed Services (AMS) feature that extends the standard scope of your AMS operations plan by providing operational services that are not currently offered natively by the AMS operations plans or AWS. Once selected, the catalog offering is delivered by a combination of automation and highly skilled AMS resources. There are no long term commitments or additional contracts, allowing you to extend your existing AMS and AWS operations and capabilities as needed. You agree to purchase blocks
of hours (OOD blocks), 20 hours per block, on a monthly basis. You can select from the catalog of standardized offerings and initiate a new OOD engagement through a service request. Examples of OOD offerings include assisting with the maintenance of Amazon EKS, operations of AWS Control Tower, and management of SAP clusters. New catalog offerings are added regularly based on demand and the operational use cases we see most often.

OOD is available for both AMS Advanced and AMS Accelerate operations plans and is available in all AWS Regions where AMS is available. AMS performs Customer Security Risk Management (CSRM) while implementing your requested changes. To learn more about the CSRM process, see Change request security reviews.

Operations on Demand catalog of offerings
Operations on Demand (OOD) offers you the services described in the following table.

Note
For definitions of key terms, refer to the AWS Managed Services documentation Key Terms.

AMS Accelerate: Amazon EKS cluster maintenance
Description: AMS frees your container developers by handling the ongoing maintenance of your Amazon Elastic Kubernetes Service (Amazon EKS) deployments. AMS performs the end-to-end procedures necessary to update a cluster, addressing the components of control plane, add-ons, and nodes. AMS performs the updating to managed node types as well as a curated set of Amazon EKS and Kubernetes add-ons.
Expected outcomes: Customer teams assisted with the underlying operations work of updating Amazon EKS clusters.

AMS Accelerate: AMI Building and Vending
Description: AMS provides ongoing management of AMI building and vending for customers. Our engineers perform a monthly release of subscribed AMIs, release on-demand AMIs for emergent patching activities, manage changes using runbooks, and monitor AMI builds using CloudWatch Monitoring. We also provide troubleshooting assistance and detailed reporting for all AMIs used in designated accounts. This offering requires AMI build pipelines to be deployed via EC2 Image Builder. AMS does not support any other automation or service that interacts with EC2 Image Builder.
Expected outcomes: Customer security posture improved and customer time spent on building and vending AMIs reduced.

AMS Accelerate: Curated change execution
Description: Work with our skilled operations engineers to translate your business requirements into validated change requests that can be executed safely within your AWS environment. Take advantage of our unique approach to automation and knowledge of operational best practices (for example, impact assessment, roll backs, two-person rule), whether it is a simple change at scale or a complex action with downstream impacts. Not intended for changes to application code, application installation/deployment, data migration, or OS configuration changes.
Expected outcomes: Customers assisted with defining, creating, and executing custom change requests. Changes can be manual or automated (CloudFormation, SSM). Includes consultation with Support for configuration guidance when necessary.

AMS Accelerate: AWS Network Firewall Operations
Description: AMS collaborates with you to onboard your firewall and implement and manage the policies and rules for ongoing firewall operations. Our engineers do this by leveraging our operational best practices and automation to configure standardized policies and rules, and by enabling monitoring to detect changes made outside of the automation process. AMS quickly notifies you of unwanted changes and provides options to include them, if requested, or restore the account to a previous configuration to ensure the overall stability of your systems.
Expected outcomes: Customer teams assisted with reducing management overhead by quickly detecting unintentional network firewall changes, resulting in improved incident resolution and reduced root cause analysis time for both expected and unexpected issues.
AMS Accelerate: AWS Control Tower
Description: Ongoing operations and management of your AWS Control Tower landing zone, including AWS Transit Gateway and AWS Organizations, providing a comprehensive landing zone solution. We handle account vending, SCP and OU management, drift remediation, SSO user management, and AWS Control Tower upgrades with our library of custom controls and guardrails.
Expected outcomes: Customer teams assisted with some of the underlying operations work of managing AWS Control Tower, AWS Transit Gateway, and AWS Organizations.
AMS Accelerate: AWS Landing Zone Accelerator operations
Description: AMS provides ongoing operations of AWS landing zones deployed through AWS Landing Zone Accelerator (LZA). Our engineers handle configuration file changes, AWS Control Tower (CT) environment management (account vending, OU creation, CT guardrails), service control policy (SCP) management, CT drift detection and remediation, network configuration management, and updates to CT and the LZA framework. AWS LZA provides a means to set up and govern a secure, multi-account AWS environment using operational best practices and services such as AWS Control Tower.
Expected outcomes: Customer teams assisted with ongoing operations and management of the AWS Landing Zone Accelerator solution.

AMS Accelerate: SAP Cluster Assist
Description: Dedicated alarming, monitoring, cluster patching, backup, and incident remediation for your SAP clusters. This catalog item allows you to offload some of the ongoing operational work from your SAP operations team so that they can focus on capacity management and performance tuning.
Expected outcomes: Customer or partner SAP teams assisted with some of the underlying operations work. Still requires the customer to provide other SAP capabilities such as capacity management, performance tuning, DBA, and SAP basis administration.

AMS Accelerate: SQL Server on EC2 Operations
Description: AMS collaborates with you to onboard, implement, and manage the ongoing operations of your SQL Server databases deployed on EC2 instances. Our engineers leverage our operational best practices and automation to free up your database teams by performing tasks such as backup and patching, extending AMS operational support to SQL Server patching to include cluster-aware rolling updates, backup and restore services aligned with our ransomware defense strategy, and monitoring adherence to customer-provided backup and patching controls.
Expected outcomes: SQL Server customers assisted with offloading patching and backup database operations to improve resilience and the security posture of their workloads, in addition to optimizing license costs by bringing their own licenses (BYOL) to EC2.

AMS Advanced: Amazon EKS Cluster Maintenance
Description: AMS frees your container developers by handling the ongoing maintenance and health of your Amazon Elastic Kubernetes Service (Amazon EKS) deployments. AMS performs the end-to-end procedures necessary to update a cluster, addressing the components of control plane, add-ons, and nodes. AMS performs the updating to managed node types as well as a curated set of Amazon EKS and Kubernetes add-ons.
Expected outcomes: Customer teams assisted with the underlying operations work of updating Amazon EKS clusters.

AMS Advanced: Priority RFC Execution
Description: Designated AMS operations engineer capacity to prioritize the execution of your requests for change (RFC). All submissions receive a higher level of response, and priority order can be adjusted by interacting directly with engineers through an Amazon Chime meeting room.
Expected outcomes: Customers receive a response SLO of 8 hours for RFCs.
AMS Advanced and AMS Accelerate: Legacy OS Upgrade
Description: Avoid an instance migration by upgrading instances to a supported operating system version. We can perform an in-place upgrade on your selected instances leveraging automation and the upgrade capabilities of the software vendors (for example, Microsoft Windows 2008 R2 to Microsoft Windows 2012 R2). This approach is ideal for legacy applications that cannot be easily re-installed on a new instance and provides additional protection from known and unmitigated security threats on older OS versions. The following operating systems are supported for in-place upgrades:
• Microsoft Windows 2012 R2 to Microsoft Windows 2016 and above
• Microsoft Windows 2016 to Microsoft Windows 2022 and above
• Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8
• Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9
• Oracle Linux 7 to Oracle Linux 8
Expected outcomes: This solution is provided for applications that can no longer be re-installed on a new instance (for example, lost source code, ISV out of business, and so on). You can roll failed upgrades back to their original state. From an operational perspective, rolling back is preferred because it puts the instance in a more supportable state with the latest security patches.

Topics
• Requesting AMS Operations On Demand
• Making changes to Operations on Demand offerings
Requesting AMS Operations On Demand
AWS Managed Services (AMS) Operations on Demand (OOD) is available for all AWS accounts that have been onboarded to AMS. To take advantage of Operations on Demand, request additional information from your cloud service delivery manager (CSDM), Solutions Architect (SA), account manager, or Cloud Architect (CA). Available OOD offerings are listed in the preceding Operations on Demand catalog of offerings table. After the engagement scoping is completed, submit a service request to AMS Operations to initiate an engagement for OOD.

Each OOD service request must contain the following detailed information pertaining to the engagement:
• The specific OOD offerings requested, and for each specific OOD offering:
• The number of blocks (one block is equal to 20 hours of operational resource time in a given calendar month, to be charged at AWS's then-current standard rate for the applicable Operations on Demand offering) to allocate to the specific OOD offering.
• The account ID for each AWS Managed Services account for which the specific OOD offering is being requested.

OOD service requests must be submitted by you through either:
• The AWS Managed Services account that receives the applicable Operations on Demand offerings, or
• An AWS Managed Services account that is an AWS Organizations Management account in all features mode, on behalf of any of its member accounts that are AWS Managed Services accounts.

After the OOD service request is received, AMS Operations reviews and updates the accounts with their approval, partial approval, or denial. Once the OOD offerings service request is approved, AMS and you coordinate to begin the engagement. No OOD offerings are initiated until the service request is approved and an engagement start date is agreed on.

AMS uses a monthly subscription allocation of OOD blocks. We allocate the approved number of blocks monthly, starting from the engagement start date, until you request to opt out through a new service request. OOD blocks are valid for a calendar month. Unused blocks, or block portions, are not rolled over or carried forward to future months. You are billed a minimum of one OOD block each month, regardless of the number of hours actually used. Any additional, allocated, OOD block in which no hours were used, is not billed.
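As an illustration of how these billing rules combine, the following sketch encodes them in Python. It is illustrative only and assumes hours fill allocated blocks sequentially; confirm actual billing behavior with your CSDM.

    import math

    def billed_ood_blocks(allocated_blocks: int, hours_used: float) -> int:
        # Minimum of one block is billed each month; beyond that, only
        # allocated blocks in which hours were actually used are billed.
        touched = math.ceil(hours_used / 20) if hours_used > 0 else 0
        return max(1, min(allocated_blocks, touched))

    # With three allocated blocks (60 hours) in a month:
    print(billed_ood_blocks(3, 0))   # 1 -> minimum of one block even with no usage
    print(billed_ood_blocks(3, 15))  # 1 -> only the first block was touched
    print(billed_ood_blocks(3, 25))  # 2 -> hours spilled into the second block
    print(billed_ood_blocks(3, 60))  # 3 -> all allocated blocks used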
Making changes to Operations on Demand offerings
To request changes to ongoing engagements for Operations on Demand (OOD) offerings, submit a service request containing the following information:
• The modification(s) being requested, and
• The requested date for the modifications to become effective.

After receiving the OOD service request, AMS Operations reviews the request and either updates with their approval or requests that the assigned CSDM work with you to determine the scope and implications of the modification. If the modification is determined to require a scoping effort with the CSDM, you are required to submit a second OOD service request to initiate the modified engagement following the completion of the scoping exercise. Once approved, the most recently modified block allocation becomes and continues to stay active, superseding any prior block allocations, unless agreed otherwise by AWS and you.

Document history
The following list describes the important changes in each release of the AMS Advanced User Guide. For notification about updates to this documentation, you can subscribe to an RSS feed.
• API version: 2019-05-21
• New or Updated CTs and Walkthroughs: AMS Advanced Change Type Reference Document History.
• New AMS AMIs: AMS Amazon Machine Images (AMIs).

• May 8, 2025: AMS Advanced Trusted Remediator FAQ and updates. Several updates to supported Trusted Advisor checks, the Trusted Remediator FAQ (added "What resources does Trusted Remediator deploy to your accounts?"), and more. See also Trusted Advisor checks supported by Trusted Remediator.
• May 8, 2025: AMS Advanced Standard security controls update. Added "Security group sharing" controls.
• April 24, 2025: AMS Advanced protected namespaces. AMS protected namespaces EPSMarketplaceSubscriptionRole and EPS added.
• April 24, 2025: AMS Advanced log locations. Additional log locations added.
• April 24, 2025: AMS Advanced Self-Service Provisioning mode for AppStream 2.0 prerequisites update. A prerequisite for AppStream 2.0 has been added: You must include an Amazon S3 bucket name when submitting the provisioning RFC for the service.
• March 27, 2025: AMS Advanced New Amazon RDS auto-remediation alert. Alert ID 0224 triggers when the requested allocated storage reaches or exceeds the configured maximum storage threshold.
• March 27, 2025: AMS Advanced: AWS has closed new customer access to Amazon CloudSearch, effective July 25, 2024. Existing customers can still use the service but there will be no new features.
• March 27, 2025: AMS Advanced: AWS has closed new customer access to AWS CodeCommit, effective July 25, 2024. Existing customers can still use the service but there will be no new features.
• March 19, 2025: Trusted Remediator is now available. Trusted Remediator, an AWS Managed Services solution that automates the remediation of AWS Trusted Advisor checks, is now available.
• March 17, 2025: AMS Advanced New auto-remediations RDS alert. RDS-EVENT-0224 added.
• March 13, 2025: AMS Advanced New feature: Incident notifications. You can use AppRegistry to create applications and customize the incident notifications for those applications.
• February 20, 2025: AMS Update to RDS alarm monitoring threshold. The RDS Average CPU Utilization alarm threshold has been changed from 75% to 90%.
• January 28, 2025: Updated Self-service reports with new data options for aggregated report viewing. Added data options to include new Field Name: Admin Account ID, Dataset Field Name: aws_admin_account_id, and Definition: Trusted AWS Organization account enabled by the customer, for the following Self-service reports: Patch report (daily), Backup report (daily), Incident report (weekly).
• January 28, 2025: Update to the AWS Batch SSP. You can use the following RFC to provision AWS Batch in your AMS account: Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct).
• January 21, 2025: New AMS feature: Aggregated Self Service Reports. Aggregated self-service reporting (SSR) provides you a view of existing self-service reports aggregated at the organization level, cross-account.
• January 10, 2025: Update to Forecast SSP section. Added note: AWS has closed new customer access to Amazon Forecast, effective July 29, 2024. Amazon Forecast existing customers can continue to use the service as normal.
• January 9, 2025: Update to AMS protected namespaces section. Added a missing protected namespace (*mc, *MC, and *Mc) to the list of AMS protected namespaces.
• January 8, 2025: Update to How monitoring works section. Added information on a new feature, configuring alert notifications by resource, or instance ID, rather than by incident.
• January 6, 2025: Updated: Tag-based update content. Fixed typo in keyname and corrected bad config file path.
• November 21, 2024: Updated: AMS AMI Notes. Zip file includes notes on the latest AMS Amazon machine images (AMIs) and a CSV file of the latest AMIs.
• November 11, 2024: Updated Operations On Demand offerings table. The following operating systems are supported for in-place upgrades: Microsoft Windows 2016 to Microsoft Windows 2022 and above.
• November 1, 2024: Updated Operations On Demand offerings table. The following operating systems are supported for in-place upgrades: Microsoft Windows 2012 R2 to Microsoft Windows 2016 and above; Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8; Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9; Oracle Linux 7 to Oracle Linux 8.
• October 24, 2024: Updated Supported configurations. Updated supported Oracle Linux operating systems to 9.0-9.3, 8.0-8.9, 7.5-7.9.
• October 24, 2024: Updated AMS Amazon Machine Images (AMIs). Updated Windows-based AMIs to remove Windows 2012 and 2012 R2. Updated Linux-based AMIs to remove several AMIs that are no longer supported and to add the following: Amazon Linux 2 (ARM64), RHEL 9, SUSE Linux Enterprise Server 15 SP5.
• September 20, 2024: You can now include multiple email addresses in tag-based alerts. Multiple email addresses are now supported in tag-based alerts.
• September 17, 2024: Change request security reviews section added. A new section has been added that provides details on the change request security review process.
• September 12, 2024: New section added. A new section describing how change request security reviews occur in AMS Advanced is now available.
• August 30, 2024: New service supported by AMS Advanced. AWS Resilience Hub is now supported by AMS Advanced.
• August 21, 2024: New services supported by AMS Advanced. Five new services are now supported by AMS Advanced: Amazon Bedrock, Amazon Kendra, Amazon Quantum Ledger Database (Amazon QLDB), AWS Service Catalog AppRegistry, Amazon Managed Service for Prometheus.
• August 21, 2024: Update source is now included in EPS default network settings. A new endpoint security network default setting is now available.
• July 30, 2024: Updated: AMS AMI Notes. Zip file includes notes on the latest AMS Amazon machine images (AMIs) and a CSV file of the latest AMIs.
• July 30, 2024: AMS now supports Amazon Route 53 Resolver DNS Firewall.
• July 30, 2024: AWS DataSync SSPS update. AWS DataSync no longer requires the "datasync-" prefix on Amazon S3 bucket names.
• July 24, 2024: Security Config Rules Dashboard. The Security Config Rules Dashboard is now available in Self-Service reporting.
• July 5, 2024: AMS now supports Oracle Linux 8.9, RHEL 8.10, and RHEL 9.4.
• June 27, 2024: Amazon Bedrock now available in Self-service provisioning mode. You can now request Amazon Bedrock in AMS SSP mode.
• June 21, 2024: Amazon Route 53 Resolver DNS firewall events in Security Incident Response. AMS now monitors Amazon Route 53 Resolver DNS firewall events in Security Incident Response.
• June 5, 2024: Added additional information on how to enable the AMS bring your own EPS (BYOEPS) feature.
• May 23, 2024: Updated: AMS AMI Notes. Zip file includes notes on the latest AMS Amazon machine images (AMIs) and a CSV file of the latest AMIs.
• May 23, 2024: Information added on using a custom role with AWS Amplify in self-service provisioning mode (MALZ environments only). Instructions added on how MALZ environments can use a custom role with AWS Amplify in self-service provisioning mode.
• May 23, 2024: Amazon Kendra is now available in Self-Service Provisioning mode.
• April 25, 2024: AMS Advanced supports additional operating systems. AMS Advanced supports Red Hat Enterprise Linux (RHEL) 9.x and Ubuntu 20.04 and 22.04.
• April 25, 2024: AMS Advanced supports ARM64 architecture for Amazon Linux 2.
• April 11, 2024: Updated Offboard from multi-account landing zone (MALZ) section. Added detailed information on how to offboard Application and Core accounts from multi-account landing zone.
• March 21, 2024: Updated: Service request management description. Updated Service request management description in Service description topic.
• March 21, 2024: Updated: Incident management service commitments section. Added a link to the AMS Service Level Agreement.
• March 21, 2024: Updated: How service request management works section. Added clarification on how AMS handles service requests that contain a feature request or a bug.
• March 21, 2024: Updated: Get support section. Updated Get support section to include a new Billing questions section.
• March 21, 2024: Updated: AMS Automated IAM Provisioning. Updated AMS Automated IAM Provisioning with custom deny list information.

Earlier updates
The following list describes the important changes to the documentation of the AMS Advanced guide prior to March
• Added a new section for Amazon EventBridge rule service-linked role for AMS Advanced. Added a new section for the Amazon EventBridge rule service-linked role for AMS Advanced in the Infrastructure Security section. See Amazon EventBridge rule service-linked role for AMS Advanced.
• Updated Self-Service Provisioning mode section. Added a new section for the new Amazon Inspector in Self-Service Provisioning mode. See Amazon Inspector Classic (AMS SSPS).

January 2024
• Updated Planned event management (PEM) section. Added additional details and an FAQ to Planned event management (PEM). See Planned event management in AWS Managed Services.
• Added a new section for SSM Agent auto installation. Added a new section for SSM Agent auto installation in Automated EC2 instance configuration. See SSM Agent automatic installation.
• Added AWS Resilience Hub (AMS SSPS). Added a new SSPS service. See Use AMS SSP to provision AWS Resilience Hub in your AMS account.
• Updated: AMS AMI Notes. Zip file includes notes on the latest AMS Amazon machine images (AMIs) and a CSV file of the latest AMIs. See AMIs.csv-and-notes.01.2024.

December 2023
• Updated Direct Change mode in AMS. Added a new subsection, Direct Change Mode use cases, to Direct Change mode in AMS. See Direct Change mode in AMS.
• Updated AWS Amplify (AMS SSPS). Updated the FAQ to clarify that a Risk Acceptance is required to request Amplify. See Use AMS SSP to provision AWS Amplify in your AMS account.
• New AWS Elastic Disaster Recovery (AMS SSPS). Added a new SSPS service. See Use AMS SSP to provision AWS Elastic Disaster Recovery in your AMS account.
• New Amazon Managed Service for Prometheus (AMS SSPS). Added a new SSPS service. See Use AMS SSP to provision Amazon Managed Service for Prometheus in your AMS account.
• Updated How continuity management works section. Added a new subsection, AMS backup monitoring and reporting. See How continuity management works.
• New Amazon DevOps Guru (AMS SSPS). Added a new SSPS service. See Use AMS SSP to provision Amazon DevOps Guru in your AMS account.
• Updated: AMS AMI Notes. Zip file includes notes on the latest AMS Amazon machine images (AMIs) and a CSV file of the latest AMIs. See AMIs.csv-and-notes.12.2023.

November 2023
• Updated Amazon CloudWatch Synthetics (AMS SSPS). Updated FAQs to use the correct role names. See Use AMS SSP to provision Amazon CloudWatch Synthetics in your AMS account.
• Updated Amazon API Gateway Self-service Provisioning mode. Added an additional role, customer_apigateway_cloudwatch_role, to the API Gateway section. See Use AMS SSP to provision Amazon API Gateway in your AMS account.
• Added a new service to Self-service Provisioning mode. Added AWS Service Catalog AppRegistry to the Self-Service Provisioning mode section. See Use AMS SSP to provision AWS Service Catalog AppRegistry in your AMS account.
• Updated: AMS AMI Notes. Zip file includes notes on the latest AMS Amazon machine images (AMIs) and a CSV file of the latest AMIs. See AMIs.csv-and-notes.11.2023.

September 2023
• Added a note to Using Patch Orchestrator. Added the following note to the Using Patch Orchestrator section: "Patch failure alerts aren't created for instances that have unsupported operating systems, or that are stopped during the maintenance window". See Patch management in AMS.
• Updated data encryption with additional services. Added services to Data encryption in AMS. See Data protection in AMS.
• Added new paragraph to RFC error messages. Added a new paragraph to add a Create a service request link. See Troubleshooting RFC errors in AMS.
• Corrected IAM role names. Corrected the IAM role name customer_emr_cluster_autoscaling_role. See Self-Service Provisioning mode in AMS.
• Updated baseline monitoring information. Removed reference to two deprecated alarms, RDSReadLatencyAlarm and RDSWriteLatencyAlarm. See Alerts from baseline monitoring in AMS.
August 2023
• Added: AMS Security Incident Response. Added documentation for using AMS Security Incident Response. See Security Incident Response in AMS.

July 2023
• Added: Automated IAM Provisioning. Added documentation for using Automated IAM Provisioning. See AMS Automated IAM Provisioning.
• Updated: Access roles table. Added missing roles for AMS Access. See AMS customer account access IAM roles.

June 2023
• Updated: List of monitored RDS alerts. Updated the list of RDS alerts for AMS baseline monitoring. 9 new RDS alert types were added and 3 existing RDS alert types were removed. See Alerts from baseline monitoring in AMS.
• Updated: Access roles table. New roles for AMS Security. See AMS customer account access IAM roles.

May 2023
• Updated: Service Billing Start Date policy. Updated definitions of Billing Start Date. See AMS key terms.

April 2023
• Updated: Monthly Billing Self-Service Report. Added note: The Monthly Billing reports are only available in a Management Payer account (AMS Advanced multi-account landing zone), but are available for all linked AMS Accelerate-managed accounts. See Billing report (monthly).
• Updated: Removed "Standard Patching" content. AMS uses Patch Orchestrator. See Patch management in AMS.
• Updated: What is AMS? Moved some topics previously under What is AMS? to be part of the AMS Service Description. Made various clarifications. See Service description.
• Updated: Offboarding multi-account landing zone. See Offboard from AMS multi-account landing zone accounts.
• Updated: AWS Transfer Family (AMS SSPS). Added link to the transfer setup tutorial. See Use AMS SSP to provision AWS Transfer Family in your AMS account.
• Updated Content: Self-service provisioning. Replaced "CodeSuite" with "Code services" per AWS legal. See Use AMS SSP to provision AMS Code services in your AMS account.
• Updated Content: CloudWatch metrics and alarms. Added link to Example: Count occurrences of a term. See Creating custom CloudWatch metrics and alarms in AMS.
• Updated: AMS AMI Notes. Zip file includes notes on the latest AMS Amazon machine images (AMIs) and a CSV file of the latest AMIs. See AMIs.csv-and-notes.04.2023.

March 2023
• Updated Content: Offboarding from AMS. Clarified what resources are deleted when offboarding multi-account landing zone accounts. See Offboard from AMS multi-account landing zone accounts.
• Updated: AMS AMIs. Added link to the AMI ZIP file for each month in the Doc History section. See Document history.
• Updated: Auto remediation. Removed LVM support for EC2 volume automation. See AMS automatic remediation of alerts.
• Updated: Patch RACI. Several updates and clarifications to the RACI for patching. See AMS responsibility matrix (RACI).
• Updated Content: Self-service provisioning. Added an FAQ bullet: to launch a new AWS DataSync agent, WIGS ingestion is not required. See Self-Service Provisioning mode in AMS.
• Updated: AMS AMI Notes. Zip file includes notes on the latest AMS Amazon machine images (AMIs) and a CSV file of the latest AMIs. See AMIs.csv-and-notes.03.2023.

February 2023
• Updated Content: Offboarding from AMS. Clarified how to offboard multi-account landing zone environments, VPCs, and Application accounts. See Offboard AMS accounts.
• Updated Content: Finding ARNs. Added the DynamoDB describe-table CLI for finding a DynamoDB table ARN; a sample command follows this group of entries. See Find Amazon Resource Names (ARNs) in AMS.
• Updated Content: Self-Service Provisioning. Removed the AMS "CodeSuite" option as it is not an actual SSPS. You can still use the Management | AWS service | Self-provisioned service | Add (review required) (ct-3qe6io8t6jtny) change type and request the three services: CodeBuild, CodeDeploy, and CodePipeline. AMS will then provision the following IAM roles to your account: customer_codebuild_service_role, customer_codedeploy_service_role, and aws_code_pipeline_service_role. After they are provisioned in your account, you must onboard the roles in your federation solution. See Self-Service Provisioning mode in AMS.
• Updated Content: Secrets Manager update. Corrected roles needed for multi-account landing zone (MALZ) vs single-account landing zone (SALZ). See Sharing Keys using Secrets Manager FAQs.
• Updated Content: AMS automatic remediation of alerts. Added support for Logical Volume Manager (LVM) volumes. See EC2 volume usage remediation automation.
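For reference, the describe-table lookup mentioned in the Finding ARNs entry above can be run as follows. This is a minimal sketch; the table name my-table is a hypothetical placeholder for your own table:

aws dynamodb describe-table --table-name my-table --query "Table.TableArn" --output text

The --query filter returns only the ARN string rather than the full table description.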
• Updated Content: AMS Amazon Machine Images (AMIs). Added the section Offboarding AMS AMIs with sample code to remove AMIs from your account. See AMS Amazon Machine Images (AMIs).
• Updated Content: IAM User Role. Updated the IAM policy: AMSBillingPolicy. See IAM user role in AMS.
• New Content: Unsupported OSes. Added information on what services AMS provides for unsupported operating systems (OSes). See Capabilities for unsupported operating systems in AMS.
• Updated Content: On-demand reports. Certain on-demand reports are not available in AMS Advanced and were mistakenly shown as available. See On-request reports.
• Updated Content: Offboarding AMS Accounts. Clarified instructions for offboarding MALZ application accounts. See Offboard AMS Application accounts.
• Updated Content: Secrets Manager. Corrected the names of IAM roles required to use Secrets Manager. See Secrets Manager in AWS Managed Services FAQs.
• Updated: AMS AMI Notes. Zip file includes notes on the latest AMS Amazon machine images (AMIs) and a CSV file of the latest AMIs. See AMIs.csv-and-notes.02.2023.

January 2023
• New Content: AWS Device Farm (AMS SSPS). Added a new SSPS service: AWS Device Farm. See Use AMS SSP to provision AWS Device Farm in your AMS account.
• Updated: supported Windows versions. Added support for Windows Server 2022. See AMS Amazon Machine Images (AMIs), Supported configurations, and AMS AMI notifications with SNS.
• Updated: Continuity management. Updated the rules in the Default AMS backup plan. See Default backup plans, multi-account landing zone.
• Updated: AMS AMI Notes. Zip file includes notes on the latest AMS Amazon machine images (AMIs) and a CSV file of the latest AMIs. See AMIs.csv-and-notes.01.2023.

December 2022
• Updated: Using bastions. Fixed bad link. See Accessing instances using bastions.
• Updated: Resource Scheduler. Made several improvements and added links to AWS Instance Scheduler for more context. See AWS Managed Services Resource Scheduler.
• Updated: Windows AMIs and Supported Configurations (for new Windows AMIs). Updated AMS AMI content from EC2Launch (Windows Server 2016 and later) to EC2Launch (Windows Server 2016 and Windows Server 2019), and added EC2LaunchV2 (Windows Server 2022 and later). Updated Windows-based AMIs from Microsoft Windows Server (2012, 2012 R2, 2016, and 2019) to Microsoft Windows Server (2012, 2012 R2, 2016, 2019, and 2022). See AMS Amazon Machine Images (AMIs) and Service description.
• Updated: Resource Scheduler section. Improved methods for deploying and customizing AMS Resource Scheduler. See AWS Managed Services Resource Scheduler.
• Updated: Setting up AMS: private and public DNS. Updated the DNS architecture diagram. See Setting up private and public DNS.
• Updated: MALZ network architecture. Updated the diagram and added guidance for Accelerate application accounts. See MALZ network architecture.
• Updated: Setting up: Using tags. New note: custom tagging is only supported for MALZ application accounts, not core accounts. See AMS infrastructure automatic tagging.
• Updated: Access management: using bastions. Updated introduction to include RDP bastions. See Saving costs on Single-account landing zone (SALZ) bastions.
• Updated: AMS default settings: alerts. Added EC2 instance: Non-Root Volume Usage to the Alerts from baseline monitoring in AMS table of alerts.
• Updated: Continuity Management. Added guidance about continuous backups. See How continuity management works.
• Updated: Automated EC2 instance configuration. Added support for PowerBroker Identity Service (PBIS) and On Instance Code (OIC). See Automatically update PBIS on Linux instances and Automatically update code on Linux instances.
• Updated: Self-Service Provisioning for Secrets Manager. Updated the CT for adding Secrets Manager to your account (under FAQs). See Use AMS SSP to provision AWS Secrets Manager in your AMS account.
• Updated: Log management. Updated the list of EC2 system-level logs. See Amazon Elastic Compute Cloud (Amazon EC2) - system level logs.
• Updated: AMS AMI Notes. Zip file includes notes on the latest AMS Amazon machine images (AMIs) and a CSV file of the latest AMIs. See AMIs.csv-and-notes.12.2022.

November 2022
• Updated: AMS Amazon Machine Images (AMIs). Updated supported SUSE Linux versions. See AMS Amazon Machine Images (AMIs).
• Updated: MALZ accounts. Added guidance for deleting a Customer Managed application account. See Customer Managed application accounts.
• Updated: Setting up AMS. Added customer-ams-amazon2-security-enhanced. See AMS AMI notifications with SNS.
• Updated: How monitoring works. Updated explanation of service notifications and incident reports. See How monitoring works.
• Updated: MALZ Application account types. Improved the explanation of account types. See Application account types.
• Updated: Developer mode. Added a warning about Developer mode. See Before you begin with AMS Developer mode.
• Updated: Planned event management. Added the section: Types of PEM. See Planned event management in AWS Managed Services.
• Updated: Amazon Machine Images (AMIs). Updated supported SUSE Linux versions. See AMS Amazon Machine Images (AMIs).
• Updated: AMS AMI Notes. Zip file includes notes on the latest AMS Amazon machine images (AMIs) and a CSV file of the latest AMIs. See AMIs.csv-and-notes.11.2022.

October 2022
• New: Automated Instance Configuration. New section describes the Automated Instance Configuration process. See Automated instance configuration in AMS Advanced.
• New: Only manual CT is acceptable for some SSPS. Updated over 50 self-service provisioning service FAQs to use the manual CT and not the automated CT for adding SSPS. See Self-Service Provisioning mode in AMS.
• Update: Setting up AMS. Added two policies to the Amazon EC2 IAM instance profiles for MALZ. See EC2 IAM instance profile.
• New: Library of custom detective and preventive rules. Added a set of example service control policies (SCPs) and preventive Config rule controls based on our learnings from multiple customers. See Curated SCPs and Config Rules.
• Update: AWS Backup warning. Added a warning: "Do not edit AMS backup plans as your changes may be lost. Instead, create new backup plans using ct-2hyozbpa0sx0m for your custom configurations." See How continuity management works.
• Update: AWS Backup caution. Added a note about adding new IAM roles to your federation. See Deploying IAM resources in AMS Advanced.
• Update: Monitoring management. Alerts generate incident reports, not service requests. See How monitoring works.
• Update: Bring your own EPS. Applies to SALZ as well as MALZ. See AMS bring your own EPS.
• Update: Accelerate Application account. Clarified that your Accelerate account is an Application account. See Application account types.
• Updated: AMS AMI Notes. Zip file includes notes on the latest AMS Amazon machine images (AMIs) and a CSV file of the latest AMIs. See AMIs.csv-and-notes.10.2022.
September 2022
• Updated: CLI command examples for finding resources. Added a new example and noted that the --region option may be needed. See Finding the data you need (SKMS), AMS.
• Updated: Provisioning IAM roles. IAM roles can now be created and managed with the AWSManagedServicesCloudFormationAdminRole. See Creating stacks using Direct Change mode.
• Updated: AMS Technical Standards. AMS-STD-007 Logging (#20): clarified forwarding requirements. See Security and compliance.
• Updated: How continuity management works. Revised Start Backup Job wording to "on-demand" rather than "existing". See How continuity management works.
• Updated: Security and compliance. Updated description and guidance for standard AMS-STD-007 number 20: forwarding logs between accounts. See Security and compliance.
• Updated: Change management use cases. Removed a broken link to the legacy Change Management User Guide. See Change management use cases.
• Updated: AMS AMI Notes. Zip file includes notes on the latest AMS Amazon machine images (AMIs) and a CSV file of the latest AMIs. See AMIs.csv-and-notes.09.2022.

August 11, 2022
• Updated: Chapter headings for consistency and readability; moved some topic sub-sections into more appropriate sections. See What is AWS Managed Services?
"MALZ network architecture" and "SALZ network architecture" are now both subsections of the top-level "Network architecture" section, formerly the "AMS network architecture" section.
"Modes for change management" is the new heading for "Change management".
"Default settings" is now a subsection of "Setting up AMS".
"AD FS claim rule and SAML settings" (formerly "ActiveDirectory Federation Services (ADFS) claim rule and SAML settings") is now a subsection of "Setting up AMS".
"Access management" is the new heading for "Access in AMS" and is moved up in the TOC.
"Finding the data you need" is the new heading for "Service knowledge management".
"Reports and options" is the new heading for "AMS Reporting" and is lower down in the TOC.
"Operations on Demand" is now the last topic in the TOC.
• Updated: Finding an ARN; New: Finding a resource with an ARN. Both procedures completely rewritten for usefulness. See Find Amazon Resource Names (ARNs) in AMS and Find resources by ARN in AMS.
• Updated: Connecting your CMA with Transit Gateway. The automation does not support adding routes to core route domains, and the procedure needed updating. See Connecting your CMA with Transit Gateway.
• Updated: MALZ basic components pricing. All prices are in US Dollars, formatted with dollar signs. See AMS environment basic components.
• Updated: AMS AMI Notes. Zip file includes notes on the latest AMS Amazon machine images (AMIs) and a CSV file of the latest AMIs. See AMIs.csv-and-notes.08.2022.

July 14, 2022
• Updated: Self-Service Reporting. Added instructions for encrypting AWS Glue metadata with KMS keys. See Self-service reports.
• Updated: AMS baseline monitoring. Added DeleteRecoveryPoint backup alert. See Alerts from baseline monitoring in AMS.
• Updated: Supported operating systems. Added End of Support date for Amazon Linux 2. See Supported configurations.
• Updated: Self-Service Provisioning. Added a prerequisite for the AWS Transfer SSPS. See Use AMS SSP to provision AWS Transfer Family in your AMS account.
• Updated: AMS Reporting. Added note about Opt-in Regions. See Reports and options.
• Updated: RFC correspondence and attachment. Clarified allowed text file types; in particular, YAML files must end in .yaml (not .yml). See Add RFC correspondence and attachments (console).

June 21, 2022
• Updated content: Modes overview. The AMS mode previously known as "Change Management mode" or "Standard CM mode" is now known as "RFC mode." The modes section has been expanded.

June 16, 2022
• New alarm. Added an AWS Backup alarm. See Alerts from baseline monitoring in AMS.
• New content: Incident management. Incidents that are not a security risk can now be resolved by AMS with your approval in the incident report, and do not need a separate RFC and approval. See Incident management.
• MALZ: Updated network architecture diagram. The VPC peering for the master account VPC to the shared services VPC should be removed as it doesn't exist. See Networking account architecture.
• SageMaker self-service provisioned service (SSPS). Updated with a new IAM role added at onboarding for SageMaker's use. See Use AMS SSP to provision Amazon SageMaker AI in your AMS account.
• Updated content: AMS AMI notifications with SNS. To the list of AMIs supported for SNS notifications, added customer-ams-sles12, customer-ams-sles15, customer-ams-amazon1-security-enhanced, customer-ams-rhel8, customer-ams-rhel8-security-enhanced, customer-ams-ubuntu18, customer-ams-windows2012, customer-ams-windows2019, and customer-ams-windows2019-security-enhanced. Removed the customer-ams-rhel6 and customer-ams-rhel6-security-enhanced AMIs. See AMS AMI notifications with SNS. A sample subscription command follows this group of entries.
• Removed escalation emails. See Getting help in AWS Managed Services.
• Moved topic list to below opening paragraphs. See What is AWS Managed Services?
• Updated service logs with better links for load balancing logs; also re-formatted. See AMS aggregated service logs.

June 09, 2022
• EKS self-service provisioning service (SSPS). Added information on enabling envelope secrets encryption in your cluster. See Use AMS SSP to provision Amazon EKS on AWS Fargate in your AMS account.

May 12, 2022
• Updated content: Getting help. Removed escalation path emails. AMS provides communication methods through incident reports, service requests, and RFCs. See Getting help in AWS Managed Services.

April 14, 2022
• New content: Operations on Demand (OOD) subscription model. AMS has changed Operations on Demand onboarding from the previous signup and renew model to a subscription allocation and default opt-in model. When you onboard an AMS account, you are now automatically enrolled in Operations on Demand. See Operations On Demand.
• New content: Cost Optimization. AMS provides recommendations for cost optimization. See Cost optimization in AWS Managed Services.
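As a minimal sketch of subscribing to one of the AMI notification topics listed above: the topic ARN and email address shown here are hypothetical placeholders; substitute the AMS AMI topic ARN for your Region and account.

aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:customer-ams-amazon2 --protocol email --notification-endpoint operator@example.com

SNS then sends a confirmation message to the endpoint, which must be confirmed before notifications are delivered.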
• Updated content: Accelerate account in MALZ. An incorrect role name (CustomerDefaultAdminRole) was updated to the correct role name (AccelerateDefaultAdminRole). See AMS Accelerate accounts, "Accessing your Accelerate account" section.
• Updated content: AMS access IAM roles. Added other AMS IAM roles used to access your accounts. See Why and when AMS accesses your account, "AMS customer account access IAM roles" section.
• Updated content: AMS backup plans. Added AMS-managed backup plans. See AMS backup plans and AMS backup vaults.
• Updated content: AWS Secrets Manager. Updated the FAQ. See Use AMS SSP to provision AWS Secrets Manager in your AMS account.
• Updated content: Direct Change Mode (DCM) onboarding. AMS does not support onboarding Service Catalog customers to DCM. See Getting Started with Direct Change mode.
• Updated content: Service Description. Clarified the Supported Services section: Amazon EKS on AWS Fargate -> Amazon Elastic Kubernetes Service on Fargate; Amazon ECS for Fargate -> Amazon Elastic Container Service on AWS Fargate; Amazon Kinesis -> Amazon Kinesis Data Streams. See Supported AWS services.
• Updated content: Offboarding MALZ accounts. Updated to reference new change types for offboarding application accounts. See Offboard AMS Application accounts.
• Updated content: Developer mode incident management. Updated the incident SLA description to: the AMS SLA does not apply for resources created or updated outside of AMS Change Management (Developer Mode included); therefore, resources updated or created in Developer mode are automatically degraded to a P3 and support is best effort. See Incident management in AMS Developer mode.
• Updated content: DCM onboarding. The RFC template for new DCM now includes a field for your SAML Provider ARN. See Getting Started with Direct Change mode.
• Updated content: DCM for AWS CloudFormation. Instructions for creating and updating AWS CloudFormation stacks now include YAML examples. See AMS Transform.
• Updated content: MALZ Tools account. There is a new IAM role for migrations: AWSManagedServicesMigrationRole. See AWS Application Migration Service (AWS MGN) and Enable access to the new AMS Tools account.
• New content: multi-account landing zone accounts. You can create an Accelerate account in your multi-account landing zone AMS Management account. See AMS Accelerate accounts.
• Updated content: API/CLI SDK installation. The installation instructions listed the wrong file name for Mac/Linux installs, and an incorrect command. This has been fixed. See Using the AMS API and CLI.
• Updated content: Accelerate account in MALZ. There was an incorrect role name (CustomerDefaultAdminRole); it has been updated to the correct one (AccelerateDefaultAdminRole). See AMS Accelerate accounts, "Accessing your Accelerate account" section.
• Updated content: monitoring. Root usage monitoring was revised from 85% to 95%. See Alerts from baseline monitoring in AMS.
• Updated content: AMI notifications. You can create many types of SNS notifications for new AMS AMIs; we've added information on creating various types. See AMS AMI notifications with SNS.
• Updated content: AMS default settings. Removed references to Macie Classic, replaced by Macie. See Alerts from baseline monitoring in AMS.
• Updated content: AMS reserved prefixes. Alphabetized the list of reserved prefixes. See AMS reserved prefixes.
• Updated content: Service Description. The features sections on change management and self-service provisioning were updated with more information on AMS modes. See AWS Managed Services (AMS) AMS Advanced operation plan features.

February 10, 2022
• Updated content: AWS Secrets Manager. Sharing Keys using Secrets Manager.
• New content: Self-service provisioning, Amazon Connect. Added an FAQ for how to request to add a list of countries for outbound or inbound calls.
• New content: Self-service provisioning, Amazon EKS on Fargate. Added an FAQ restriction that deploying EKS clusters through the AWS Cloud Development Kit (CDK) or CloudFormation Ingest is not supported in AMS.
• Changed content: Developer mode. Correction: you do not use an RFC or service request to assign users to your federation solution; you do that yourself, depending on your solution. Note that IAM is not supported in DCM.
• Changed content: Direct Change mode (DCM). DCM: note validations that we do.
• Changed content: Direct Change mode (DCM). DCM: clarify restrictions of different roles.
• Changed content: Monitoring baseline alerts. Redshift cluster resource alerts changed.
• Changed content: Self-service reporting. Added the exact S3 bucket name, ams-reporting-data-a<Account_ID>, for customers to use to fetch the reports. A sample fetch command follows this list.
• Changed content: Updated multi-account landing zone (MALZ) application account content to reference automated change types instead of manual Management | Other | Other (MOO) change types (three: "Associating the TGW attachment to a route table", "Create routes in the TGW route tables to connect to this VPC", and "Configuring your VPC Route tables to point at the AMS Multi-Account Landing Zone transit gateway").
• Receiving alerts generated by AMS. See Tag-based alert notifications.

January 27, 2022
• Changed content: AMS AMIs. Added new information about security-enhanced AMIs. See Supported configurations, AMS Amazon Machine Images (AMIs), and Security enhanced AMIs.
• New content: Self-service provisioning. Added Amazon FSx for OpenZFS. See Use AMS SSP to provision Amazon FSx for OpenZFS in your AMS account.
• Changed content: CodeDeploy self-service provisioning service (SSPS). Additional role name, and additional restriction note. See Use AMS SSP to provision AWS CodeDeploy in your AMS account.

January 13, 2022
• Changed content: Updated links. Fixed broken links: AMS-AMIs, Finding your settings, Finding a Stack ID, Finding a VPC ID, ListVpcSummaries, ListStackSummaries, and GetStack APIs. For example, see Find stack IDs in AMS.
• Changed content: EKS Support for Fargate. Added a limitation to the FAQs: creating or managing EC2 nodegroups with EKS is not supported. See Use AMS SSP to provision Amazon EKS on AWS Fargate in your AMS account.
• Changed content: CloudFormation, Direct Change Mode (DCM). Added instructions for creating or updating CloudFormation stacks using AmsStackTransform. See Creating stacks using Direct Change mode.
• Changed content: Uniformity in AWS Service Names. AMS references to AWS services exactly match the official AWS titles or metadata. Previously, there were minor variations that complicated pattern matching. For example, see Use AMS SSP to provision Alexa for Business in your AMS account.
• Changed content: Self-service provisioning of Elastic Container Registry (ECR). Added an FAQ item for using ECR to manage user permissions. See Use AMS SSP to provision Amazon Elastic Container Registry in your AMS account.
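As a minimal sketch of fetching the self-service reports from the bucket named in the Self-service reporting entry above, assuming a hypothetical account ID of 123456789012 in place of <Account_ID>:

aws s3 ls s3://ams-reporting-data-a123456789012/
aws s3 cp s3://ams-reporting-data-a123456789012/ ./ams-reports --recursive

The first command lists the available report prefixes; the second copies all report objects to a local ams-reports directory.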
AWS Glossary

For the latest AWS terminology, see the AWS glossary in the AWS Glossary Reference.
499 Upload the application code JAR file ............................................................................................. 500 Create and configure the Managed Service for Apache Flink application ............................... 501 Next step ...................................................................................................................................................
  Clean up resources
    Delete your Managed Service for Apache Flink application
    Delete your Amazon S3 objects and bucket
    Delete your IAM resources
    Delete your CloudWatch resources
    Next step
  Explore additional resources
Tutorial: Get started using Python in Managed Service for Apache Flink
  Review application components
  Fulfill the prerequisites
  Create an application
    Create dependent resources
    Set up your local development environment
    Download and examine the Apache Flink streaming Python code
    Manage JAR dependencies
    Write sample records to the input stream
    Run your application locally
    Observe input and output data in Kinesis streams
    Stop your application running locally
    Package your application code
    Upload the application package to an Amazon S3 bucket
    Create and configure the Managed Service for Apache Flink application
    Next step
  Clean up resources
    Delete your Managed Service for Apache Flink application
    Delete your Kinesis data streams
    Delete your Amazon S3 objects and bucket
    Delete your IAM resources
    Delete your CloudWatch resources
Tutorial: Get started using Scala in Managed Service for Apache Flink
  Create dependent resources
  Write sample records to the input stream
  Download and examine the application code
  Compile and upload the application code
  Create and run the application (console)
    Create the Application
    Configure the application
    Edit the IAM policy
    Run the application
    Stop the application
  Create and run the application (CLI)
    Create a permissions policy
    Create an IAM policy
    Create the application
    Start the application
    Stop the application
    Add a CloudWatch logging option
    Update environment properties
    Update the application code
  Clean up AWS resources
    Delete your Managed Service for Apache Flink application
    Delete your Kinesis data streams
    Delete your Amazon S3 object and bucket
    Delete your IAM resources
    Delete your CloudWatch resources
Use Apache Beam with Managed Service for Apache Flink applications
  Limitations of Apache Flink runner with Managed Service for Apache Flink
  Apache Beam capabilities with Managed Service for Apache Flink
  Creating an application using Apache Beam
    Create dependent resources
    Write sample records to the input stream
    Download and examine the application code
    Compile the application code
    Upload the Apache Flink streaming Java code
    Create and run the Managed Service for Apache Flink application
    Clean Up
    Next steps
Training workshops, labs, and solution implementations
  Managed Service for Apache Flink workshop
  Develop Apache Flink applications locally before deploying to Managed Service for Apache Flink
  Event detection with Managed Service for Apache Flink Studio
  AWS Streaming Data Solution
  Practice using a Clickstream lab with Apache Flink and Apache Kafka
  Set up custom scaling using Application Auto Scaling
  View a sample Amazon CloudWatch dashboard
  Use templates for AWS Streaming data solution for Amazon MSK
  Explore more Managed Service for Apache Flink solutions on GitHub
Use practical utilities for Managed Service for Apache Flink
  Snapshot manager
  Benchmarking
Examples for creating and working with Managed Service for Apache Flink applications
  Java examples for Managed Service for Apache Flink
  Python examples for Managed Service for Apache Flink
  Scala examples for Managed Service for Apache Flink
Security in Managed Service for Apache Flink
  Data protection
    Data encryption
  Identity and Access Management for Managed Service for Apache Flink
    Audience
    Authenticating with identities
    Managing access using policies
    How Amazon Managed Service for Apache Flink works with IAM
    Identity-based policy examples
    Troubleshooting
    Cross-service confused deputy prevention
  Compliance validation for Managed Service for Apache Flink
    FedRAMP
  Resilience and disaster recovery in Managed Service for Apache Flink
    Disaster recovery
    Versioning
  Infrastructure security in Managed Service for Apache Flink
  Security best practices for Managed Service for Apache Flink
    Implement least privilege access
    Use IAM roles to access other Amazon services
    Implement server-side encryption in dependent resources
    Use CloudTrail to monitor API calls
Logging and monitoring in Amazon Managed Service for Apache Flink
  Logging in Managed Service for Apache Flink
    Querying Logs with CloudWatch Logs Insights
  Monitoring in Managed Service for Apache Flink
  Set up application logging in Managed Service for Apache Flink
    Set up CloudWatch logging using the console
    Set up CloudWatch logging using the CLI
    Control application monitoring levels
    Apply logging best practices
    Perform logging troubleshooting
    Use CloudWatch Logs Insights
  Analyze logs with CloudWatch Logs Insights
    Run a sample query
    Review example queries
  Metrics and dimensions in Managed Service for Apache Flink
    Application metrics
    Kinesis Data Streams connector metrics
    Amazon MSK connector metrics
    Apache Zeppelin metrics
    View CloudWatch metrics
    Set CloudWatch metrics reporting levels
    Use custom metrics with Amazon Managed Service for Apache Flink
    Use CloudWatch Alarms with Amazon Managed Service for Apache Flink
  Write custom messages to CloudWatch Logs
    Write to CloudWatch logs using Log4J
    Write to CloudWatch logs using SLF4J
  Log Managed Service for Apache Flink API calls with AWS CloudTrail
    Managed Service for Apache Flink information in CloudTrail
    Understand Managed Service for Apache Flink log file entries
Tune performance
  Troubleshoot performance issues
    Understand the data path
    Performance troubleshooting solutions
  Use performance best practices
    Manage scaling properly
    Monitor external dependency resource usage
    Run your Apache Flink application locally
  Monitor performance
    Monitor performance using CloudWatch metrics
    Monitor performance using CloudWatch logs and alarms
    Managed Service for Apache Flink and Studio notebook quota
Manage maintenance tasks for Managed Service for Apache Flink
  Choose a maintenance window
  Identify maintenance instances
Achieve production readiness for your Managed Service for Apache Flink applications
  Load-test your applications
  Define Max parallelism
  Set a UUID for all operators
Best practices
  Minimize the size of the uber JAR
  Fault tolerance: checkpoints and savepoints
  Unsupported connector versions
  Performance and parallelism
  Setting per-operator parallelism
  Logging
  Coding
  Managing credentials
  Reading from sources with few shards/partitions
  Studio notebook refresh interval
  Studio notebook optimum performance
  How watermark strategies and idle shards affect time windows
    Summary
    Example
  Set a UUID for all operators
  Add ServiceResourceTransformer to the Maven shade plugin
Apache Flink stateful functions
  Apache Flink application template
  Location of the module configuration
Learn about Apache Flink settings
  Apache Flink configuration
  State backend
  Checkpointing
  Savepointing
  Heap sizes
  Buffer debloating
  Modifiable Flink configuration properties
    Restart strategy
    Checkpoints and state backends
    Checkpointing
    RocksDB native metrics
    RocksDB options
    Advanced state backends options
    Full TaskManager options
    Memory configuration
    RPC / Akka
    Client
    Advanced cluster options
    Filesystem configurations
    Advanced fault tolerance options
    Metrics
    Advanced options for the REST endpoint and client
    Advanced SSL security options
    Advanced scheduling options
    Advanced options for Flink web UI
  View configured Flink properties
Configure MSF to access resources in an Amazon VPC
  Amazon VPC concepts
  VPC application permissions
    Add a permissions policy for accessing an Amazon VPC
  Establish internet and service access for a VPC-connected Managed Service for Apache Flink application
    Related information
  Use the Managed Service for Apache Flink VPC API
    Create application
    AddApplicationVpcConfiguration
    DeleteApplicationVpcConfiguration
    Update application
  Example: Use a VPC
Troubleshoot Managed Service for Apache Flink
  Development troubleshooting
    System rollback best practices
    Hudi configuration best practices
    Apache Flink Flame Graphs
    Credential provider issue with EFO connector 1.15.2
    Applications with unsupported Kinesis connectors
    Compile error: "Could not resolve dependencies for project"
    Invalid choice: "kinesisanalyticsv2"
    UpdateApplication action isn't reloading application code
    S3 StreamingFileSink FileNotFoundExceptions
    FlinkKafkaConsumer issue with stop with savepoint
    Flink 1.15 Async Sink Deadlock
    Amazon Kinesis data streams source processing out of order during re-sharding
    Real-time vector embedding blueprints FAQ and troubleshooting
  Runtime troubleshooting
    Troubleshooting tools
    Application issues
    Application is restarting
    Throughput is too slow
    Unbounded state growth
    I/O bound operators
    Upstream or source throttling from a Kinesis data stream
    Checkpoints
    Checkpointing is timing out
    Checkpoint failure for Apache Beam
    Backpressure
    Data skew
    State skew
    Integrate with resources in different Regions
Document history
API example code
  AddApplicationCloudWatchLoggingOption
  AddApplicationInput
  AddApplicationInputProcessingConfiguration
  AddApplicationOutput
  AddApplicationReferenceDataSource
  AddApplicationVpcConfiguration
  CreateApplication
  CreateApplicationSnapshot
  DeleteApplication
  DeleteApplicationCloudWatchLoggingOption
  DeleteApplicationInputProcessingConfiguration
  DeleteApplicationOutput
  DeleteApplicationReferenceDataSource
  DeleteApplicationSnapshot
  DeleteApplicationVpcConfiguration
  DescribeApplication
  DescribeApplicationSnapshot
  DiscoverInputSchema
  ListApplications
  ListApplicationSnapshots
  StartApplication
  StopApplication
  UpdateApplication
API Reference

Amazon Managed Service for Apache Flink was previously known as Amazon Kinesis Data Analytics for Apache Flink.

What is Amazon Managed Service for Apache Flink?

With Amazon Managed Service for Apache Flink, you can use Java, Scala, Python, or SQL to process and analyze streaming data. The service enables you to author and run code against streaming sources and static sources to perform time-series analytics, feed real-time dashboards, and create metrics.

You can build applications with the language of your choice in Managed Service for Apache Flink using open-source libraries based on Apache Flink. Apache Flink is a popular framework and engine for processing data streams.

Managed Service for Apache Flink provides the underlying infrastructure for your Apache Flink applications. It handles core capabilities like provisioning compute resources, AZ failover resilience, parallel computation, automatic scaling, and application backups (implemented as checkpoints and snapshots). You can use the high-level Flink programming features (such as operators, functions, sources, and sinks) in the same way that you use them when hosting the Flink infrastructure yourself.

Decide between using Managed Service for Apache Flink or Managed Service for Apache Flink Studio
You have two options for running your Flink jobs with Amazon Managed Service for Apache Flink. With Managed Service for Apache Flink, you build Flink applications in Java, Scala, or Python (and embedded SQL) using an IDE of your choice and the Apache Flink DataStream or Table APIs. With Managed Service for Apache Flink Studio, you can interactively query data streams in real time and easily build and run stream processing applications using standard SQL, Python, and Scala. You can select the method that best suits your use case. If you are unsure, this section offers high-level guidance to help you.

Before deciding whether to use Amazon Managed Service for Apache Flink or Amazon Managed Service for Apache Flink Studio, consider your use case.

If you plan to operate a long-running application that undertakes workloads such as streaming ETL or continuous applications, consider using Managed Service for Apache Flink. This is because you can create your Flink application using the Flink APIs directly in the IDE of your choice. Developing locally with your IDE also ensures that you can leverage common software development lifecycle (SDLC) processes and tooling, such as code versioning in Git, CI/CD automation, and unit testing.

If you are interested in ad hoc data exploration, want to query streaming data interactively, or want to create private real-time dashboards, Managed Service for Apache Flink Studio helps you meet these goals in just a few clicks. Users familiar with SQL can consider deploying a long-running application from Studio directly.

Note
You can promote your Studio notebook to a long-running application. However, if you want to integrate with SDLC tools such as code versioning in Git and CI/CD automation, or techniques such as unit testing, we recommend Managed Service for Apache Flink using the IDE of your choice.

Choose which Apache Flink APIs to use in Managed Service for Apache Flink

You can build applications using Java, Python, and Scala in Managed Service for Apache Flink using Apache Flink APIs in an IDE of your choice. You can find guidance on how to build applications using the Flink DataStream and Table API in the documentation. You can select the language you create your Flink application in and the APIs you use to best meet the needs of your application and operations. If you are unsure, this section provides high-level guidance to help you.
Choose a Flink API

The Apache Flink APIs have differing levels of abstraction that may affect how you decide to build your application. They are expressive and flexible, and can be used together to build your application. You do not have to use only one Flink API. You can learn more about the Flink APIs in the Apache Flink documentation.

Flink offers four levels of API abstraction: Flink SQL, Table API, DataStream API, and Process Function, which is used in conjunction with the DataStream API. These are all supported in Amazon Managed Service for Apache Flink. It is advisable to start with a higher level of abstraction where possible; however, some Flink features are only available with the DataStream API, where you can create your application in Java, Python, or Scala. You should consider using the DataStream API if:

• You require fine-grained control over state
• You want to leverage the ability to call an external database or endpoint asynchronously (for example, for inference)
• You want to use custom timers (for example, to implement custom windowing or late event handling; see the sketch after this list)
• You want to be able to modify the flow of your application without resetting the state
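To make the custom-timer case concrete, the following is a minimal sketch of a KeyedProcessFunction, the lowest-level DataStream API abstraction, that registers a per-key processing-time timer. The class name, the 60-second timeout, and the message text are hypothetical; the timer service calls shown are standard Apache Flink APIs.

```java
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Emits a notification if no event arrives for a key within 60 seconds.
// Hypothetical example: events are Strings keyed by a String ID.
public class InactivityTimer extends KeyedProcessFunction<String, String, String> {

    @Override
    public void processElement(String event, Context ctx, Collector<String> out) {
        // Register a processing-time timer 60 seconds from now for this key.
        long timeoutAt = ctx.timerService().currentProcessingTime() + 60_000L;
        ctx.timerService().registerProcessingTimeTimer(timeoutAt);
        out.collect(event); // pass the event through unchanged
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) {
        // Fires when a timer registered in processElement expires.
        out.collect("No events for key " + ctx.getCurrentKey() + " in the last 60 seconds");
    }
}
```

You would attach this function to a keyed stream with stream.keyBy(...).process(new InactivityTimer()). A production version would typically also track the registered timer in keyed state so that stale timers can be deleted when new events arrive; this sketch omits that for brevity.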
• If you are planning to use the DataStream API, be aware that not all connectors are supported in Python.
• If you need low latency and high throughput, consider Java or Scala, regardless of the API.
• If you plan to use Async I/O in the Process Function API, you must use Java.

The choice of API can also impact your ability to evolve the application logic without having to reset the state. This depends on a specific feature, the ability to set a UID on operators, which is only available in the DataStream API for both Java and Python. For more information, see Set UUIDs For All Operators in the Apache Flink Documentation.
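As a minimal sketch of what setting an operator UID looks like in Java (the stream contents, operator logic, and names here are illustrative, not part of this guide):

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UidExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // A stable UID lets Flink map saved state back to this operator even
        // if other parts of the job graph change between deployments.
        env.fromElements("msft", "amzn")
            .map((MapFunction<String, String>) String::toUpperCase)
            .uid("uppercase-tickers")
            .name("Uppercase tickers")
            .print();

        env.execute("UID example");
    }
}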
Get started with streaming data applications

You can start by creating a Managed Service for Apache Flink application that continuously reads and processes streaming data. Then, author your code using your IDE of choice, and test it with live streaming data. You can also configure destinations where you want Managed Service for Apache Flink to send the results. To get started, we recommend that you read the following sections:

• Managed Service for Apache Flink: How it works
• Get started with Amazon Managed Service for Apache Flink (DataStream API)

Alternatively, you can start by creating a Managed Service for Apache Flink Studio notebook that allows you to interactively query data streams in real time, and easily build and run stream processing applications using standard SQL, Python, and Scala. With a few clicks in the AWS Management Console, you can launch a serverless notebook to query data streams and get results in seconds. To get started, we recommend that you read the following sections:

• Use a Studio notebook with Managed Service for Apache Flink
• Create a Studio notebook

Managed Service for Apache Flink: How it works

Managed Service for Apache Flink is a fully managed Amazon service that lets you use an Apache Flink application to process streaming data. First, you program your Apache Flink application, and then you create your Managed Service for Apache Flink application.

Program your Apache Flink application

An Apache Flink application is a Java or Scala application that is created with the Apache Flink framework. You author and build your Apache Flink application locally. Applications primarily use either the DataStream API or the Table API. The other Apache Flink APIs are also available for you to use, but they are less commonly used in building streaming applications. The features of the two APIs are as follows:

DataStream API

The Apache Flink DataStream API programming model is based on two components:

• Data stream: The structured representation of a continuous flow of data records.
• Transformation operator: Takes one or more data streams as input, and produces one or more data streams as output.

Applications created with the DataStream API do the following:

• Read data from a data source (such as a Kinesis stream or Amazon MSK topic).
• Apply transformations to the data, such as filtering, aggregation, or enrichment.
• Write the transformed data to a data sink.

Applications that use the DataStream API can be written in Java or Scala, and can read from a Kinesis data stream, an Amazon MSK topic, or a custom source. Your application processes data by using a connector. Apache Flink uses the following types of connectors:

• Source: A connector used to read external data.
• Sink: A connector used to write to external locations.
• Operator: A connector used to process data within the application.

A typical application consists of at least one data stream with a source, a data stream with one or more operators, and at least one data sink. For more information about using the DataStream API, see Review DataStream API components.
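As a minimal sketch of that source, operator, and sink shape (the stream ARN and filter logic are placeholder assumptions, and print() stands in for a real sink such as Kinesis or Firehose):

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kinesis.source.KinesisStreamsSource;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SkeletonJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Source: read records from a Kinesis data stream (the ARN is a placeholder).
        KinesisStreamsSource<String> source = KinesisStreamsSource.<String>builder()
            .setStreamArn("arn:aws:kinesis:us-east-1:123456789012:stream/ExampleInputStream")
            .setDeserializationSchema(new SimpleStringSchema())
            .build();

        DataStream<String> input =
            env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kinesis source");

        // Operator: a simple transformation (drop empty records).
        DataStream<String> transformed = input.filter(record -> !record.isEmpty());

        // Sink: print() is a stand-in for a real data sink.
        transformed.print();

        env.execute("DataStream skeleton");
    }
}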
Table API

The Apache Flink Table API programming model is based on the following components:

• Table Environment: An interface to underlying data that you use to create and host one or more tables.
• Table: An object providing access to a SQL table or view.
• Table Source: Used to read data from an external source, such as an Amazon MSK topic.
• Table Function: A SQL query or API call used to transform data.
• Table Sink: Used to write data to an external location, such as an Amazon S3 bucket.

Applications created with the Table API do the following:

• Create a TableEnvironment by connecting to a Table Source.
• Create a table in the TableEnvironment using either SQL queries or Table API functions.
• Run a query on the table using either the Table API or SQL.
• Apply transformations on the results of the query using Table Functions or SQL queries.
• Write the query or function results to a Table Sink.

Applications that use the Table API can be written in Java or Scala, and can query data using either API calls or SQL queries. For more information about using the Table API, see Review Table API components.
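As a minimal, self-contained sketch of this flow (the table schema is hypothetical, and the datagen connector is a stand-in for a real Table Source such as an Amazon MSK topic):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class TableApiSketch {
    public static void main(String[] args) {
        // Create a TableEnvironment in streaming mode.
        TableEnvironment tableEnv =
            TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Register a source table. The connector options are placeholders;
        // a real application would point at Kinesis, MSK, or another source.
        tableEnv.executeSql(
            "CREATE TABLE orders (ticker STRING, price DOUBLE) " +
            "WITH ('connector' = 'datagen')");

        // Query the table with SQL; Table API calls could be used instead.
        Table expensive = tableEnv.sqlQuery(
            "SELECT ticker, price FROM orders WHERE price > 100");

        // With the unbounded datagen source, this prints until canceled.
        expensive.execute().print();
    }
}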
Create your Managed Service for Apache Flink application

Managed Service for Apache Flink is an AWS service that creates an environment for hosting your Apache Flink application and provides it with the following settings:

• Use runtime properties: Parameters that you can provide to your application. You can change these parameters without recompiling your application code.
• Implement fault tolerance: How your application recovers from interrupts and restarts.
• Logging and monitoring in Amazon Managed Service for Apache Flink: How your application logs events to CloudWatch Logs.
• Implement application scaling: How your application provisions computing resources.

You create your Managed Service for Apache Flink application using either the console or the AWS CLI. To get started creating a Managed Service for Apache Flink application, see Tutorial: Get started using the DataStream API in Managed Service for Apache Flink.

Create a Managed Service for Apache Flink application

This topic contains information about creating a Managed Service for Apache Flink application. It contains the following sections:

• Build your Managed Service for Apache Flink application code
• Create your Managed Service for Apache Flink application
• Start your Managed Service for Apache Flink application
• Verify your Managed Service for Apache Flink application
• Enable system rollbacks for your Managed Service for Apache Flink application

Build your Managed Service for Apache Flink application code

This section describes the components that you use to build the application code for your Managed Service for Apache Flink application.

We recommend that you use the latest supported version of Apache Flink for your application code. For information about upgrading Managed Service for Apache Flink applications, see Use in-place version upgrades for Apache Flink.

You build your application code using Apache Maven. An Apache Maven project uses a pom.xml file to specify the versions of components that it uses.

Note
Managed Service for Apache Flink supports JAR files up to 512 MB in size. If you use a JAR file larger than this, your application will fail to start.

Applications can now use the Java API from any Scala version. You must bundle the Scala standard library of your choice into your Scala applications.

For information about creating a Managed Service for Apache Flink application that uses Apache Beam, see Use Apache Beam with Managed Service for Apache Flink applications.

Specify your application's Apache Flink version

When using Managed Service for Apache Flink Runtime version 1.1.0 and later, you specify the version of Apache Flink that your application uses when you compile your application. You provide the version of Apache Flink with the -Dflink.version parameter. For example, if you are using Apache Flink 1.19.1, provide the following:

mvn package -Dflink.version=1.19.1

For building applications with earlier versions of Apache Flink, see Earlier versions.
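As a minimal sketch of how such a pom.xml can be wired up so the -Dflink.version flag overrides the default (the artifact choice and default version are illustrative, not prescribed by this guide):

<properties>
    <!-- Overridden at build time with: mvn package -Dflink.version=1.19.1 -->
    <flink.version>1.19.1</flink.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
</dependencies>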
Create your Managed Service for Apache Flink application

After you have built your application code, do the following to create your Managed Service for Apache Flink application:

• Upload your application code: Upload your application code to an Amazon S3 bucket. You specify the S3 bucket name and object name of your application code when you create your application. For a tutorial that shows how to upload your application code, see Tutorial: Get started using the DataStream API in Managed Service for Apache Flink.

• Create your Managed Service for Apache Flink application: Use one of the following methods to create your Managed Service for Apache Flink application:

  • Create your Managed Service for Apache Flink application using the AWS console: You can create and configure your application using the AWS console. When you create your application using the console, your application's dependent resources (such as CloudWatch Logs streams, IAM roles, and IAM policies) are created for you. You specify what version of Apache Flink your application uses by selecting it from the pull-down on the Managed Service for Apache Flink - Create application page. For a tutorial about how to use the console to create an application, see Tutorial: Get started using the DataStream API in Managed Service for Apache Flink.

  • Create your Managed Service for Apache Flink application using the AWS CLI: You can create and configure your application using the AWS CLI. When you create your application using the CLI, you must also create your application's dependent resources (such as CloudWatch Logs streams, IAM roles, and IAM policies) manually. You specify what version of Apache Flink your application uses with the RuntimeEnvironment parameter of the CreateApplication action.

Note
You can change the RuntimeEnvironment of an existing application. To learn how, see Use in-place version upgrades for Apache Flink.
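The guide doesn't show the CLI call itself here, so the following is only a hedged sketch of what a minimal CreateApplication call might look like; the application name, role ARN, bucket, JAR key, and runtime value are all placeholder assumptions:

aws kinesisanalyticsv2 create-application \
    --application-name MyApplication \
    --runtime-environment FLINK-1_19 \
    --service-execution-role arn:aws:iam::123456789012:role/MyFlinkExecutionRole \
    --application-configuration '{
        "ApplicationCodeConfiguration": {
            "CodeContent": {
                "S3ContentLocation": {
                    "BucketARN": "arn:aws:s3:::amzn-s3-demo-bucket",
                    "FileKey": "my-flink-app-1.0.jar"
                }
            },
            "CodeContentType": "ZIPFILE"
        }
    }'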
Start your Managed Service for Apache Flink application

After you have built your application code, uploaded it to S3, and created your Managed Service for Apache Flink application, you then start your application. Starting a Managed Service for Apache Flink application typically takes several minutes. Use one of the following methods to start your application:

• Start your Managed Service for Apache Flink application using the AWS console: You can run your application by choosing Run on your application's page in the AWS console.
• Start your Managed Service for Apache Flink application using the AWS API: You can run your application using the StartApplication action.
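As a hedged sketch of the API route (the application name is a placeholder, and depending on your application you may pass a different ApplicationRestoreType or omit the run configuration entirely):

aws kinesisanalyticsv2 start-application \
    --application-name MyApplication \
    --run-configuration '{
        "ApplicationRestoreConfiguration": {
            "ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT"
        }
    }'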
Verify your Managed Service for Apache Flink application

You can verify that your application is working in the following ways:

• Using CloudWatch Logs: You can use CloudWatch Logs and CloudWatch Logs Insights to verify that your application is running properly. For information about using CloudWatch Logs with your Managed Service for Apache Flink application, see Logging and monitoring in Amazon Managed Service for Apache Flink.
• Using CloudWatch Metrics: You can use CloudWatch Metrics to monitor your application's activity, or activity in the resources your application uses for input or output (such as Kinesis streams, Firehose streams, or Amazon S3 buckets).
• Monitoring output locations: If your application writes output to a location (such as an Amazon S3 bucket or database), you can monitor that location for written data.

Enable system rollbacks for your Managed Service for Apache Flink application

With the system-rollback capability, you can achieve higher availability for your running Apache Flink application on Amazon Managed Service for Apache Flink. Opting into this configuration enables the service to automatically revert the application to the previously running version when an action such as UpdateApplication or autoscaling runs into code or configuration bugs.

Note
To use the system rollback feature, you must opt in by updating your application. Existing applications will not use system rollback by default.

How it works

When you initiate an application operation, such as an update or scaling action, Amazon Managed Service for Apache Flink first attempts to run that operation. If it detects issues that prevent the operation from succeeding, such as code bugs or insufficient permissions, the service automatically initiates a RollbackApplication operation.

The rollback attempts to restore the application to the previous version that ran successfully, along with the associated application state. If the rollback is successful, your application continues processing data with minimal downtime using the previous version. If the automatic rollback also fails, Amazon Managed Service for Apache Flink transitions the application to the READY status so that you can take further actions, including fixing the error and retrying the operation.

You must opt in to use automatic system rollbacks. You can enable it using the console or API for all subsequent operations on your application. The following example request for the UpdateApplication action enables system rollbacks for an application:

{
    "ApplicationName": "MyApplication",
    "CurrentApplicationVersionId": 1,
    "ApplicationConfigurationUpdate": {
        "ApplicationSystemRollbackConfigurationUpdate": {
            "RollbackEnabledUpdate": true
        }
    }
}

Review common scenarios for automatic system rollback

The following scenarios illustrate where automatic system rollbacks are beneficial:

• Application updates: If you update your application with new code that has bugs when initializing the Flink job through the main method, the automatic rollback allows the previous working version to be restored. Other update scenarios where system rollbacks are helpful include:
  • If your application is updated to run with a parallelism higher than maxParallelism.
  • If your application is updated to run with incorrect subnets for a VPC application, resulting in a failure during the Flink job startup.
• Flink version upgrades: When you upgrade to a new Apache Flink version and the upgraded application encounters a snapshot compatibility issue, system rollback lets you revert to the prior Flink version automatically.
• Autoscaling: When the application scales up but runs into issues restoring from a savepoint, due to an operator mismatch between the snapshot and the Flink job graph.

Use operation APIs for system rollbacks

To provide better visibility, Amazon Managed Service for Apache Flink has two APIs related to application operations that can help you track failures and related system rollbacks.

ListApplicationOperations

This API lists all operations performed on the application, including UpdateApplication, Maintenance, RollbackApplication, and others, in reverse chronological order.
The following example request for the ListApplicationOperations action lists the first 10 application operations for the application:

{
    "ApplicationName": "MyApplication",
    "Limit": 10
}

The following example request for ListApplicationOperations filters the list to previous updates on the application:

{
    "ApplicationName": "MyApplication",
    "Operation": "UpdateApplication"
}

DescribeApplicationOperation

This API provides detailed information about a specific operation listed by ListApplicationOperations, including the reason for failure, if applicable. The following example request for the DescribeApplicationOperation action lists details for a specific application operation:

{
    "ApplicationName": "MyApplication",
    "OperationId": "xyzoperation"
}

For troubleshooting information, see System rollback best practices.

Run a Managed Service for Apache Flink application

This topic contains information about running a Managed Service for Apache Flink application.

When you run your Managed Service for Apache Flink application, the service creates an Apache Flink job. An Apache Flink job is the execution lifecycle of your Managed Service for Apache Flink application. The execution of the job, and the resources it uses, are managed by the Job Manager. The Job Manager separates the execution of the application into tasks. Each task is managed by a Task Manager. When you monitor your application's performance, you can examine the performance of each Task Manager, or of the Job Manager as a whole.

For information about Apache Flink jobs, see Jobs and Scheduling in the Apache Flink Documentation.
Identify application and job status

Both your application and the application's job have a current execution status:

• Application status: Your application has a current status that describes its phase of execution. Application statuses include the following:

  • Steady application statuses: Your application typically stays in these statuses until you make a status change:
    • READY: A new or stopped application is in the READY status until you run it.
    • RUNNING: An application that has successfully started is in the RUNNING status.

  • Transient application statuses: An application in these statuses is typically in the process of transitioning to another status. If an application stays in a transient status for an extended period of time, you can stop the application using the StopApplication action with the Force parameter set to true. These statuses include the following:
    • STARTING: Occurs after the StartApplication action. The application is transitioning from the READY to the RUNNING status.
    • STOPPING: Occurs after the StopApplication action. The application is transitioning from the RUNNING to the READY status.
    • DELETING: Occurs after the DeleteApplication action. The application is in the process of being deleted.
    • UPDATING: Occurs after the UpdateApplication action. The application is updating, and will transition back to the RUNNING or READY status.
    • AUTOSCALING: The application has the AutoScalingEnabled property of the ParallelismConfiguration set to true, and the service is increasing the parallelism of the application. When the application is in this status, the only valid API action you can use is the StopApplication action with the Force parameter set to true. For information about automatic scaling, see Use automatic scaling in Managed Service for Apache Flink.
    • FORCE_STOPPING: Occurs after the StopApplication action is called with the Force parameter set to true. The application is in the process of being force stopped. The application transitions from the STARTING, UPDATING, STOPPING, or AUTOSCALING status to the READY status.
    • ROLLING_BACK: Occurs after the RollbackApplication action is called. The application is in the process of being rolled back to a previous version. The application transitions from the UPDATING or AUTOSCALING status to the RUNNING status.
    • MAINTENANCE: Occurs while Managed Service for Apache Flink applies patches to your application. For more information, see Manage maintenance tasks for Managed Service for Apache Flink.

You can check your application's status using the console, or by using the DescribeApplication action.

• Job status: When your application is in the RUNNING status, your job has a status that describes its current execution phase. A job starts in the CREATED status, and then proceeds to the RUNNING status when it has started. If error conditions occur, your application enters the following status:

  • For applications using Apache Flink 1.11 and later, your application enters the RESTARTING status.
  • For applications using Apache Flink 1.8 and earlier, your application enters the FAILING status.

The application then proceeds to either the RESTARTING or FAILED status, depending on whether the job can be restarted.
You can check the job's status by examining your application's CloudWatch log for status changes.

Run batch workloads

Managed Service for Apache Flink supports running Apache Flink batch workloads. In a batch job, when an Apache Flink job reaches the FINISHED status, the Managed Service for Apache Flink application status is set to READY. For more information about Flink job statuses, see Jobs and Scheduling.

Review Managed Service for Apache Flink application resources

This section describes the system resources that your application uses. Understanding how Managed Service for Apache Flink provisions and uses resources will help you design, create, and maintain a performant and stable Managed Service for Apache Flink application.

Managed Service for Apache Flink application resources

Managed Service for Apache Flink is an AWS service that creates an environment for hosting your Apache Flink application. The Managed Service for Apache Flink service provides resources using units called Kinesis Processing Units (KPUs). One KPU represents the following system resources:

• One CPU core
• 4 GB of memory, of which one GB is native memory and three GB are heap memory
• 50 GB of disk space

KPUs run applications in distinct execution units called tasks and subtasks. You can think of a subtask as the equivalent of a thread.

The number of KPUs available to an application is equal to the application's Parallelism setting, divided by the application's ParallelismPerKPU setting. For more information about application parallelism, see Implement application scaling.
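To make the arithmetic concrete (both values below are illustrative, not defaults): an application with a Parallelism setting of 8 and a ParallelismPerKPU setting of 2 is allocated

KPUs = Parallelism / ParallelismPerKPU = 8 / 2 = 4

Raising ParallelismPerKPU packs more subtasks onto each KPU, which reduces the number of KPUs (and cost) for the same parallelism, but gives each subtask a smaller share of a KPU's resources.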
Apache Flink application resources

The Apache Flink environment allocates resources for your application using units called task slots. When Managed Service for Apache Flink allocates resources for your application, it assigns one or more Apache Flink task slots to a single KPU. The number of slots assigned to a single KPU is equal to your application's ParallelismPerKPU setting. For more information about task slots, see Job Scheduling in the Apache Flink Documentation.

Operator parallelism

You can set the maximum number of subtasks that an operator can use. This value is called operator parallelism. By default, the parallelism of each operator in your application is equal to the application's parallelism. This means that, by default, each operator in your application can use all of the available subtasks in the application if needed.

You can set the parallelism of the operators in your application using the setParallelism method. Using this method, you can control the number of subtasks each operator can use at one time.

For more information about operators, see Operators in the Apache Flink Documentation.
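As a minimal sketch of capping a single operator below the application parallelism (the stream and filter logic are placeholders, not taken from this guide):

// Assumes "input" is an existing DataStream<String> from an earlier source.
// This filter runs with at most 2 subtasks, regardless of the
// application's overall parallelism.
DataStream<String> filtered = input
    .filter(record -> !record.isEmpty())
    .setParallelism(2);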
Operator chaining

Normally, each operator uses a separate subtask to execute, but if several operators always execute in sequence, the runtime can assign them all to the same task. This process is called operator chaining.

Several sequential operators can be chained into a single task if they all operate on the same data. The following are some of the criteria needed for this to be true:

• The operators do simple 1-to-1 forwarding.
• The operators all have the same operator parallelism.

When your application chains operators into a single subtask, it conserves system resources, because the service doesn't need to perform network operations and allocate subtasks for each operator. To determine if your application is using operator chaining, look at the job graph in the Managed Service for Apache Flink console. Each vertex in the application represents one or more operators. The graph shows operators that have been chained as a single vertex.

Per-second billing in Managed Service for Apache Flink

Managed Service for Apache Flink is billed in one-second increments, with a ten-minute minimum charge per application. Per-second billing applies to applications that are newly launched or already running. This section describes how Managed Service for Apache Flink meters and bills you for your usage. To learn more about Managed Service for Apache Flink pricing, see Amazon Managed Service for Apache Flink Pricing.

How it works

Managed Service for Apache Flink charges you for the duration and number of Kinesis Processing Units (KPUs) that you use, billed in one-second increments in the supported AWS Regions. A single KPU comprises 1 vCPU of compute and 4 GB of memory. You are charged an hourly rate based on the number of KPUs used to run your applications.

For example, an application running for 20 minutes and 10 seconds will be charged for 20 minutes and 10 seconds, multiplied by the resources it used. An application that runs for 5 minutes will be charged the ten-minute minimum, multiplied by the resources it used. Managed Service for Apache Flink states usage in hours; for example, 15 minutes corresponds to 0.25 hours.

For Apache Flink applications, you are charged a single additional KPU per application, used for orchestration. Applications are also charged for running application storage and durable application backups. Running application storage is used for stateful processing capabilities in Managed Service for Apache Flink and is charged per GB-month. Durable application backups are optional, provide point-in-time recovery for applications, and are charged per GB-month.

In streaming mode, Managed Service for Apache Flink automatically scales the number of KPUs required by your stream processing application as the demands of memory and compute fluctuate. You can instead choose to provision your application with the required number of KPUs.

AWS Region availability

Note
At this time, per-second billing is not available in the following Regions: AWS GovCloud (US-East), AWS GovCloud (US-West), China (Beijing), and China (Ningxia).

Per-second billing is available in the following AWS Regions:
• US East (N. Virginia) - us-east-1
• US East (Ohio) - us-east-2
• US West (N. California) - us-west-1
• US West (Oregon) - us-west-2
• Africa (Cape Town) - af-south-1
• Asia Pacific (Hong Kong) - ap-east-1
• Asia Pacific (Hyderabad) - ap-south-2
• Asia Pacific (Jakarta) - ap-southeast-3
• Asia Pacific (Melbourne) - ap-southeast-4
• Asia Pacific (Mumbai) - ap-south-1
• Asia Pacific (Osaka) - ap-northeast-3
• Asia Pacific (Seoul) - ap-northeast-2
• Asia Pacific (Singapore) - ap-southeast-1
• Asia Pacific (Sydney) - ap-southeast-2
• Asia Pacific (Tokyo) - ap-northeast-1
• Canada (Central) - ca-central-1
• Canada West (Calgary) - ca-west-1
• Europe (Frankfurt) - eu-central-1
• Europe (Ireland) - eu-west-1
• Europe (London) - eu-west-2
• Europe (Milan) - eu-south-1
• Europe (Paris) - eu-west-3
• Europe (Spain) - eu-south-2
• Europe (Stockholm) - eu-north-1
• Europe (Zurich) - eu-central-2
• Israel (Tel Aviv) - il-central-1
• Middle East (Bahrain) - me-south-1
• Middle East (UAE) - me-central-1
• South America (São Paulo) - sa-east-1

Pricing examples

You can find pricing examples on the Managed Service for Apache Flink pricing page. For more information, see Amazon Managed Service for Apache Flink Pricing. Following are further examples, with Cost and Usage Report illustrations for each.

A long-running, heavy workload

You are a large video streaming service and you would like to build real-time video recommendations based on your users' interactions. You use an Apache Flink application in Managed Service for Apache Flink to continuously ingest user interaction events from multiple Kinesis data streams and to process events in real time before outputting to a downstream system. User interaction events are transformed using several operators. This includes partitioning data by event type, enriching data with additional metadata, sorting data by timestamp, and buffering data for 5 minutes before delivery. The application has many transformation steps that are compute-intensive and parallelizable. Your Flink application is configured to run with 20 KPUs to accommodate the workload. Your application uses 1 GB of durable application backup every day.

The monthly Managed Service for Apache Flink charges will be computed as follows:

Monthly charges

The price in the US East (N. Virginia) Region is $0.11 per KPU-hour. Managed Service for Apache Flink allocates 50 GB of running application storage per KPU and charges $0.10 per GB-month.
• Monthly KPU charges: 24 hours * 30 days * (20 KPUs + 1 additional KPU for the streaming application) * $0.11/KPU-hour = $1,663.20
• Monthly running application storage charges: 20 KPUs * 50 GB/KPU * $0.10/GB-month = $100.00
• Monthly durable application backup charges: 1 GB * $0.023/GB-month = $0.03 (rounded up to the nearest penny)
• Total charges: $1,663.20 + $100.00 + $0.03 = $1,763.23

Cost and Usage Report for Managed Service for Apache Flink on the Billing and Cost Management console for the month:

Kinesis Analytics
• USD 1,763.23 - US East (N. Virginia)
  • Amazon Kinesis Analytics CreateSnapshot
    • $0.023 per GB-month of durable application backups
      • 1 GB-month - USD 0.03
  • Amazon Kinesis Analytics StartApplication
    • $0.10 per GB-month of running application storage
      • 1,000 GB-month - USD 100.00
    • $0.11 per Kinesis Processing Unit-hour for Apache Flink applications
      • 15,120 KPU-hour - USD 1,663.20

A batch workload that runs for ~15 minutes every day

You use an Apache Flink application in Managed Service for Apache Flink to transform log data in Amazon Simple Storage Service (Amazon S3) in batch mode. The log data is transformed using several operators. This includes applying a schema to the different log events, partitioning data by event type, and sorting data by timestamp. The application has many transformation steps, but none are computationally intensive. This application ingests data at 2,000 records/second for 15 minutes every day in a 30-day month. You do not create any durable application backups.

The monthly Managed Service for Apache Flink charges will be computed as follows:

Monthly charges

The price in the US East (N. Virginia) Region is $0.11 per KPU-hour.
Managed Service for Apache Flink allocates 50 GB of running application storage per KPU and charges $0.10 per GB-month.

• Batch workload: During the 15 minutes per day that it runs, the Managed Service for Apache Flink application processes 2,000 records/second, which requires 2 KPUs. 30 days/month * 15 minutes/day = 450 minutes/month (7.5 hours/month).
• Monthly KPU charges: 7.5 hours/month * (2 KPUs + 1 additional KPU for the streaming application) * $0.11/KPU-hour = $2.48 (rounded to the nearest penny)
• Monthly running application storage charges: 7.5 hours/month * 2 KPUs * 50 GB/KPU * $0.10/GB-month = $0.11 (rounded up to the nearest penny)
• Total charges: $2.48 + $0.11 = $2.59

Cost and Usage Report for Managed Service for Apache Flink on the Billing and Cost Management console for the month:

Kinesis Analytics
• USD 2.59 - US East (N. Virginia)
  • Amazon Kinesis Analytics StartApplication
    • $0.10 per GB-month of running application storage
      • 1.042 GB-month - USD 0.11
    • $0.11 per Kinesis Processing Unit-hour for Apache Flink applications
      • 22.5 KPU-hour - USD 2.48

A test application that stops and starts continuously in the same hour, attracting multiple minimum charges

You're a large ecommerce platform that processes millions of transactions every day, and you want to develop real-time fraud detection. You use an Apache Flink application in Managed Service for Apache Flink to ingest transaction events from Kinesis Data Streams and process events in real time with different transformation steps. This includes using a sliding window to aggregate events, partitioning events by event type, and applying specific detection rules for different event types. During development, you start and stop your application multiple times to test and debug behavior. There are occasions when your application only runs for a few minutes.

There is an hour when you're testing your application with 4 KPUs, and your application does not use any durable application backups:

• At 10:05 AM, you start your application, which runs for 30 minutes before it's stopped at 10:35 AM.
• At 10:40 AM, you start your application again, which runs for 5 minutes before it's stopped at 10:45 AM.
• At 10:50 AM, you start the application again, which runs for 2 minutes before it's stopped at 10:52 AM.

Managed Service for Apache Flink charges a minimum of 10 minutes of usage each time an application starts running.
The monthly Managed Service for Apache Flink usage for your application will be computed as follows:

• First time your application starts and stops: 30 minutes of usage.
• Second time your application starts and stops: 10 minutes of usage (your application runs for 5 minutes, rounded up to the 10-minute minimum charge).
• Third time your application starts and stops: 10 minutes of usage (your application runs for 2 minutes, rounded up to the 10-minute minimum charge).

In total, your application would be charged for 50 minutes of usage. If there are no other times in the month when your application is running, the monthly Managed Service for Apache Flink charges will be computed as follows:

Monthly charges

The price in the US East (N. Virginia) Region is $0.11 per KPU-hour. Managed Service for Apache Flink allocates 50 GB of running application storage per KPU and charges $0.10 per GB-month.

• Monthly KPU charges: 50 minutes * (4 KPUs + 1 additional KPU for the streaming application) * $0.11/KPU-hour = $0.46 (rounded to the nearest penny)
• Monthly running application storage charges: 50 minutes * 4 KPUs * 50 GB/KPU * $0.10/GB-month = $0.03 (rounded up to the nearest penny)
• Total charges: $0.46 + $0.03 = $0.49

Cost and Usage Report for Managed Service for Apache Flink on the Billing and Cost Management console for the month:

Kinesis Analytics
• USD 0.49 - US East (N. Virginia)
  • Amazon Kinesis Analytics StartApplication
    • $0.10 per GB-month of running application storage
      • 0.232 GB-month - USD 0.03
    • $0.11 per Kinesis Processing Unit-hour for Apache Flink applications
      • 4.167 KPU-hour - USD 0.46

Review DataStream API components

Your Apache Flink application uses the Apache Flink DataStream API to transform data in a data stream.
This section describes the different components that move, transform, and track data:

• Use connectors to move data in Managed Service for Apache Flink with the DataStream API: These components move data between your application and external data sources and destinations.
• Transform data using operators in Managed Service for Apache Flink with the DataStream API: These components transform or group data elements within your application.
• Track events in Managed Service for Apache Flink using the DataStream API: This topic describes how Managed Service for Apache Flink tracks events when using the DataStream API.

Use connectors to move data in Managed Service for Apache Flink with the DataStream API

In the Amazon Managed Service for Apache Flink DataStream API, connectors are software components that move data into and out of a Managed Service for Apache Flink application. Connectors are flexible integrations that let you read from files and directories. Connectors consist of complete modules for interacting with Amazon services and third-party systems. Types of connectors include the following:

• Add streaming data sources: Provide data to your application from a Kinesis data stream, file, or other data source.
• Write data using sinks: Send data from your application to a Kinesis data stream, Firehose stream, or other data destination.
• Use Asynchronous I/O: Provides asynchronous access to a data source (such as a database) to enrich stream events.

Available connectors

The Apache Flink framework contains connectors for accessing data from a variety of sources. For information about connectors available in the Apache Flink framework, see Connectors in the Apache Flink documentation.

Warning
If you have applications running on Flink 1.6, 1.8, 1.11, or 1.13 and would like to run in the Middle East (UAE), Asia Pacific (Hyderabad), Israel (Tel Aviv), Europe (Zurich), Asia Pacific (Melbourne), or Asia Pacific (Jakarta) Regions, you might have to rebuild your application archive with an updated connector or upgrade to Flink 1.18.

Apache Flink connectors are stored in their own open source repositories. If you're upgrading to version 1.18 or later, you must update your dependencies. To access the repository for Apache Flink AWS connectors, see flink-connector-aws. The former Kinesis source, org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer, is discontinued and might be removed in a future release of Flink. Use the KinesisStreamsSource instead. Note that there is no state compatibility between the FlinkKinesisConsumer and the KinesisStreamsSource. For details, see Migrating existing jobs to new Kinesis Streams Source in the Apache Flink documentation.
Following are the recommended connector upgrade guidelines when moving to Managed Service for Apache Flink versions 1.19 and 1.20:

• Kinesis Source: Make sure that you are using the most recent Kinesis Data Streams source connector. That must be version 5.0.0 or later. For more information, see Amazon Kinesis Data Streams Connector.
• Kinesis Sink: Make sure that you are using the most recent Kinesis Data Streams sink connector. That must be version 5.0.0 or later. For more information, see Kinesis Streams Sink.
• DynamoDB Streams Source: Make sure that you are using the most recent DynamoDB Streams source connector. That must be version 5.0.0 or later. For more information, see Amazon DynamoDB Connector.
• DynamoDB Sink: Make sure that you are using the most recent DynamoDB sink connector. That must be version 5.0.0 or later. For more information, see Amazon DynamoDB Connector.
• Amazon SQS Sink: Make sure that you are using the most recent Amazon SQS sink connector. That must be version 5.0.0 or later. For more information, see Amazon SQS Sink.
• Amazon Managed Service for Prometheus Sink: Make sure that you are using the most recent Amazon Managed Service for Prometheus sink connector. That must be version 1.0.0 or later. For more information, see Prometheus Sink.
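As an illustrative sketch only (confirm the exact artifact ID and the version pairing for your Flink version in the flink-connector-aws release notes; both are assumptions here), declaring the newer Kinesis connector in a pom.xml might look like this:

<!-- Illustrative: version suffix pairs the connector release with a Flink version. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-aws-kinesis-streams</artifactId>
    <version>5.0.0-1.20</version>
</dependency>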
Add streaming data sources to Managed Service for Apache Flink

Apache Flink provides connectors for reading from files, sockets, collections, and custom sources. In your application code, you use an Apache Flink source to receive data from a stream. This section describes the sources that are available for Amazon services.

Use Kinesis data streams

The KinesisStreamsSource provides streaming data to your application from an Amazon Kinesis data stream.

Create a KinesisStreamsSource

The following code example demonstrates creating a KinesisStreamsSource:

// Configure the KinesisStreamsSource
Configuration sourceConfig = new Configuration();
sourceConfig.set(KinesisSourceConfigOptions.STREAM_INITIAL_POSITION,
    KinesisSourceConfigOptions.InitialPosition.TRIM_HORIZON); // Optional; by default the connector reads from LATEST

// Create a new KinesisStreamsSource to read from the specified Kinesis stream.
KinesisStreamsSource<String> kdsSource =
    KinesisStreamsSource.<String>builder()
        .setStreamArn("arn:aws:kinesis:us-east-1:123456789012:stream/test-stream")
        .setSourceConfig(sourceConfig)
        .setDeserializationSchema(new SimpleStringSchema())
        .setKinesisShardAssigner(ShardAssignerFactory.uniformShardAssigner()) // Optional; uniformShardAssigner is the default
        .build();

For more information about using a KinesisStreamsSource, see Amazon Kinesis Data Streams Connector in the Apache Flink documentation and our public Kinesis Connectors example on GitHub.

Create a KinesisStreamsSource that uses an EFO consumer

The KinesisStreamsSource supports Enhanced Fan-Out (EFO). If a Kinesis consumer uses EFO, the Kinesis Data Streams service gives it its own dedicated bandwidth, rather than having the consumer share the fixed bandwidth of the stream with the other consumers reading from the stream. For more information about using EFO with the Kinesis consumer, see FLIP-128: Enhanced Fan Out for AWS Kinesis Consumers.

You enable the EFO consumer by setting the following parameters on the Kinesis consumer:

• READER_TYPE: Set this parameter to EFO for your application to use an EFO consumer to access the Kinesis data stream.
• EFO_CONSUMER_NAME: Set this parameter to a string value that is unique among the consumers of this stream. Re-using a consumer name in the same Kinesis data stream causes the previous consumer using that name to be terminated.
To configure a KinesisStreamsSource to use EFO, add the following parameters to the source configuration:

sourceConfig.set(KinesisSourceConfigOptions.READER_TYPE,
    KinesisSourceConfigOptions.ReaderType.EFO);
sourceConfig.set(KinesisSourceConfigOptions.EFO_CONSUMER_NAME, "my-flink-efo-consumer");

For an example of a Managed Service for Apache Flink application that uses an EFO consumer, see our public Kinesis Connectors example on GitHub.

Use Amazon MSK

The KafkaSource provides streaming data to your application from an Amazon MSK topic.

Create a KafkaSource

The following code example demonstrates creating a KafkaSource:

KafkaSource<String> source = KafkaSource.<String>builder()
    .setBootstrapServers(brokers)
    .setTopics("input-topic")
    .setGroupId("my-group")
    .setStartingOffsets(OffsetsInitializer.earliest())
    .setValueOnlyDeserializer(new SimpleStringSchema())
    .build();

env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source");

For more information about using a KafkaSource, see MSK Replication.

Write data using sinks in Managed Service for Apache Flink

In your application code, you can use any Apache Flink sink connector to write into external systems, including AWS services such as Kinesis Data Streams and DynamoDB. Apache Flink also provides sinks for files and sockets, and you can implement custom sinks. Among the several supported sinks, the following are frequently used:

Use Kinesis data streams

Apache Flink provides information about the Kinesis Data Streams Connector in the Apache Flink documentation.

For an example of an application that uses a Kinesis data stream for input and output, see Tutorial: Get started using the DataStream API in Managed Service for Apache Flink.

Use Apache Kafka and Amazon Managed Streaming for Apache Kafka (MSK)

The Apache Flink Kafka connector provides extensive support for publishing data to Apache Kafka and Amazon MSK, including exactly-once guarantees. To learn how to write to Kafka, see Kafka Connectors examples in the Apache Flink documentation.

Use Amazon S3

You can use the Apache Flink StreamingFileSink to write objects to an Amazon S3 bucket. For an example of how to write objects to S3, see the section called "S3 Sink".
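As a minimal sketch of such a sink (the bucket name and prefix are placeholders, and "input" is assumed to be an existing DataStream<String>):

import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

// Write each record as a line of UTF-8 text to the S3 location.
StreamingFileSink<String> s3Sink = StreamingFileSink
    .forRowFormat(new Path("s3a://amzn-s3-demo-bucket/flink-output"),
                  new SimpleStringEncoder<String>("UTF-8"))
    .build();

input.addSink(s3Sink);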
Use Firehose

The FlinkKinesisFirehoseProducer is a reliable, scalable Apache Flink sink for storing application output using the Firehose service. This section describes how to set up a Maven project to create and use a FlinkKinesisFirehoseProducer.

Topics
• Create a FlinkKinesisFirehoseProducer
• FlinkKinesisFirehoseProducer Code Example

Create a FlinkKinesisFirehoseProducer

The following code example demonstrates creating a FlinkKinesisFirehoseProducer:

Properties outputProperties = new Properties();
outputProperties.setProperty(ConsumerConfigConstants.AWS_REGION, region);

FlinkKinesisFirehoseProducer<String> sink =
    new FlinkKinesisFirehoseProducer<>(outputStreamName, new SimpleStringSchema(), outputProperties);

FlinkKinesisFirehoseProducer Code Example

The following code example demonstrates how to create and configure a FlinkKinesisFirehoseProducer and send data from an Apache Flink data stream to the Firehose service.

package com.amazonaws.services.kinesisanalytics;

import com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants;
import com.amazonaws.services.kinesisanalytics.flink.connectors.producer.FlinkKinesisFirehoseProducer;
import com.amazonaws.services.kinesisanalytics.runtime.KinesisAnalyticsRuntime;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;

import java.io.IOException;
import java.util.Map;
import java.util.Properties;

public class StreamingJob {

    private static final String region = "us-east-1";
    private static final String inputStreamName = "ExampleInputStream";
    private static final String outputStreamName = "ExampleOutputStream";

    // Build a Kinesis source from constants compiled into the application.
    private static DataStream<String> createSourceFromStaticConfig(StreamExecutionEnvironment env) {
        Properties inputProperties = new Properties();
        inputProperties.setProperty(ConsumerConfigConstants.AWS_REGION, region);
        inputProperties.setProperty(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");

        return env.addSource(new FlinkKinesisConsumer<>(inputStreamName,
            new SimpleStringSchema(), inputProperties));
    }

    // Build a Kinesis source from the application's runtime properties.
    private static DataStream<String> createSourceFromApplicationProperties(StreamExecutionEnvironment env)
            throws IOException {
        Map<String, Properties> applicationProperties =
            KinesisAnalyticsRuntime.getApplicationProperties();
        return env.addSource(new FlinkKinesisConsumer<>(inputStreamName,
            new SimpleStringSchema(), applicationProperties.get("ConsumerConfigProperties")));
    }
    // Build a Firehose sink from constants compiled into the application.
    private static FlinkKinesisFirehoseProducer<String> createFirehoseSinkFromStaticConfig() {
        /*
         * com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants
         * lists all of the properties that the Firehose sink can be configured with.
         */
        Properties outputProperties = new Properties();
        outputProperties.setProperty(ConsumerConfigConstants.AWS_REGION, region);

        return new FlinkKinesisFirehoseProducer<>(outputStreamName,
            new SimpleStringSchema(), outputProperties);
    }

    // Build a Firehose sink from the application's runtime properties.
    private static FlinkKinesisFirehoseProducer<String> createFirehoseSinkFromApplicationProperties()
            throws IOException {
        /*
         * com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants
         * lists all of the properties that the Firehose sink can be configured with.
         */
        Map<String, Properties> applicationProperties =
            KinesisAnalyticsRuntime.getApplicationProperties();
        return new FlinkKinesisFirehoseProducer<>(outputStreamName,
            new SimpleStringSchema(), applicationProperties.get("ProducerConfigProperties"));
    }

    public static void main(String[] args) throws Exception {
        // Set up the streaming execution environment.
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        /*
         * If you would like to use runtime configuration properties, uncomment the
         * line below:
         * DataStream<String> input = createSourceFromApplicationProperties(env);
         */
        DataStream<String> input = createSourceFromStaticConfig(env);

        // Kinesis Firehose sink
        input.addSink(createFirehoseSinkFromStaticConfig());

        // If you would like to use runtime configuration properties, uncomment the
        // line below:
        // input.addSink(createFirehoseSinkFromApplicationProperties());

        env.execute("Flink Streaming Java API Skeleton");
    }
}

For a complete tutorial about how to use the Firehose sink, see the section called "Firehose sink".

Use Asynchronous I/O in Managed Service for Apache Flink

An Asynchronous I/O operator enriches stream data using an external data source such as a database. Managed Service for Apache Flink enriches the stream events asynchronously so that requests can be batched for greater efficiency. For more information, see Asynchronous I/O in the Apache Flink Documentation.
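The following is a hedged sketch of that pattern; the enrichment logic is a stand-in for a real asynchronous client call, and "input" is assumed to be an existing DataStream<String>:

import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

public class AsyncEnrichmentFunction extends RichAsyncFunction<String, String> {
    @Override
    public void asyncInvoke(String key, ResultFuture<String> resultFuture) {
        // supplyAsync stands in for a non-blocking call to an external
        // database or endpoint; complete() emits the enriched record.
        CompletableFuture
            .supplyAsync(() -> key + ":enriched")
            .thenAccept(enriched ->
                resultFuture.complete(Collections.singleton(enriched)));
    }
}

// Apply the function with a 1-second timeout and up to 100 in-flight requests.
DataStream<String> enriched = AsyncDataStream.unorderedWait(
    input, new AsyncEnrichmentFunction(), 1, TimeUnit.SECONDS, 100);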
DataStream<ObjectNode> output = input.map(
    new MapFunction<ObjectNode, ObjectNode>() {
        @Override
        public ObjectNode map(ObjectNode value) throws Exception {
            return value.put("TICKER", value.get("TICKER").asText() + " Company");
        }
    }
);
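The same pattern applies to other transform operators. The following is a minimal sketch, not from the tutorial code, that uses a filter operator on the same ObjectNode stream; the PRICE field is assumed from the aggregation example that follows:

// Keep only records whose (assumed) PRICE field is positive.
DataStream<ObjectNode> positivePrices = input
    .filter(node -> node.get("PRICE").asDouble() > 0);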
Use aggregation operators

The following is an example of an aggregation operator. The code creates an aggregated data stream. The operator creates a 5-second tumbling window and returns the sum of the PRICE values for the records in the window with the same TICKER value.

DataStream<ObjectNode> output = input.keyBy(node -> node.get("TICKER").asText())
    .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
    .reduce((node1, node2) -> {
        double priceTotal = node1.get("PRICE").asDouble() + node2.get("PRICE").asDouble();
        node1.replace("PRICE", JsonNodeFactory.instance.numberNode(priceTotal));
        return node1;
    });

For more code examples, see Examples for creating and working with Managed Service for Apache Flink applications.

Track events in Managed Service for Apache Flink using the DataStream API

Managed Service for Apache Flink tracks events using the following timestamps:

• Processing Time: Refers to the system time of the machine that is executing the respective operation.
• Event Time: Refers to the time that each individual event occurred on its producing device.
• Ingestion Time: Refers to the time that events enter the Managed Service for Apache Flink service.

You set the time used by the streaming environment using setStreamTimeCharacteristic:

env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

For more information about timestamps, see Generating Watermarks in the Apache Flink documentation.
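Note that setStreamTimeCharacteristic is deprecated in recent Apache Flink versions; event time is instead enabled by attaching a WatermarkStrategy to a stream. The following is a minimal sketch, assuming the ObjectNode records from the preceding examples carry a hypothetical EVENT_TIME field in epoch milliseconds:

import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;

DataStream<ObjectNode> withEventTime = input.assignTimestampsAndWatermarks(
    WatermarkStrategy
        // Tolerate events that arrive up to 5 seconds out of order.
        .<ObjectNode>forBoundedOutOfOrderness(Duration.ofSeconds(5))
        // EVENT_TIME is a hypothetical epoch-millisecond field on each record.
        .withTimestampAssigner((node, recordTimestamp) -> node.get("EVENT_TIME").asLong()));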
Review Table API components

Your Apache Flink application uses the Apache Flink Table API to interact with data in a stream using a relational model. You use the Table API to access data using Table sources, and then use Table functions to transform and filter table data. You can transform and filter tabular data using either API functions or SQL commands.

This section contains the following topics:

• Table API connectors: These components move data between your application and external data sources and destinations.
• Table API time attributes: This topic describes how Managed Service for Apache Flink tracks events when using the Table API.

Table API connectors

In the Apache Flink programming model, connectors are components that your application uses to read or write data from external sources, such as other AWS services. With the Apache Flink Table API, you can use the following types of connectors:

• Table API sources: You use Table API source connectors to create tables within your TableEnvironment using either API calls or SQL queries.
• Table API sinks: You use SQL commands to write table data to external sources such as an Amazon MSK topic or an Amazon S3 bucket.

Table API sources

You create a table source from a data stream. The following code creates a table from an Amazon MSK topic:

// Create the consumer for the Kafka topic.
final FlinkKafkaConsumer<StockRecord> consumer =
    new FlinkKafkaConsumer<StockRecord>(kafkaTopic, new KafkaEventDeserializationSchema(), kafkaProperties);
consumer.setStartFromEarliest();

// Obtain the stream and convert it to a table.
DataStream<StockRecord> events = env.addSource(consumer);
Table table = streamTableEnvironment.fromDataStream(events);

For more information about table sources, see Table & SQL Connectors in the Apache Flink Documentation.

Table API sinks

To write table data to a sink, you create the sink in SQL, and then run the SQL-based sink on the StreamTableEnvironment object. The following code example demonstrates how to write table data to an Amazon S3 sink:

final String s3Sink = "CREATE TABLE sink_table (" +
    "event_time TIMESTAMP," +
    "ticker STRING," +
    "price DOUBLE," +
    "dt STRING," +
    "hr STRING" +
    ")" +
    " PARTITIONED BY (ticker,dt,hr)" +
    " WITH" +
    "(" +
    " 'connector' = 'filesystem'," +
    " 'path' = '" + s3Path + "'," +
    " 'format' = 'json'" +
    ") ";

// Send to S3.
streamTableEnvironment.executeSql(s3Sink);
filteredTable.executeInsert("sink_table");

You can use the format parameter to control what format Managed Service for Apache Flink uses to write the output to the sink. For information about formats, see Supported Connectors in the Apache Flink Documentation.

User-defined sources and sinks

You can use existing Apache Kafka connectors for sending data to and from other AWS services, such as Amazon MSK and Amazon S3. For interacting with other data sources and destinations, you can define your own sources and sinks. For more information, see User-defined Sources and Sinks in the Apache Flink Documentation.
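The sink example above inserts rows from a filteredTable object that this excerpt does not define. The following is a minimal, hypothetical sketch of how such a table might be derived from the source table with a Table API filter:

import static org.apache.flink.table.api.Expressions.$;

// Hypothetical: keep only rows with a positive price before writing to the sink.
Table filteredTable = table.filter($("price").isGreater(0));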
Table API time attributes

Each record in a data stream has several timestamps that define when events related to the record occurred:

• Event Time: A user-defined timestamp that defines when the event that created the record occurred.
• Ingestion Time: The time when your application retrieved the record from the data stream.
• Processing Time: The time when your application processed the record.

When the Apache Flink Table API creates windows based on record times, you define which of these timestamps it uses by using the setStreamTimeCharacteristic method.

For more information about using timestamps with the Table API, see Time Attributes and Timely Stream Processing in the Apache Flink Documentation.

Use Python with Managed Service for Apache Flink

Note
If you are developing a Python Flink application on a new Mac with an Apple silicon chip, you might encounter some known issues with the Python dependencies of PyFlink 1.15. In this case, we recommend running the Python interpreter in Docker. For step-by-step instructions, see PyFlink 1.15 development on Apple Silicon Mac.

Apache Flink version 1.20 includes support for creating applications using Python version 3.11. For more information, see Flink Python Docs.

You create a Managed Service for Apache Flink application using Python by doing the following:

• Create your Python application code as a text file with a main method.
• Bundle your application code file and any Python or Java dependencies into a zip file, and upload it to an Amazon S3 bucket.
• Create your Managed Service for Apache Flink application, specifying your Amazon S3 code location, application properties, and application settings.

At a high level, the Python Table API is a wrapper around the Java Table API. For information about the Python Table API, see the Table API Tutorial in the Apache Flink Documentation.

Program your Managed Service for Apache Flink Python application

You code your Managed Service for Apache Flink for Python application using the Apache Flink Python Table API. The Apache Flink engine translates Python Table API statements (running in the Python VM) into Java Table API statements (running in the Java VM).

You use the Python Table API by doing the following:

• Create a reference to the StreamTableEnvironment.
• Create table objects from your source streaming data by executing queries on the StreamTableEnvironment reference.
• Execute queries on your table objects to create output tables.
• Write your output tables to your destinations using a StatementSet.

To get started using the Python Table API in Managed Service for Apache Flink, see Get started with Amazon Managed Service for Apache Flink for Python.
Read and write streaming data

To read and write streaming data, you execute SQL queries on the table environment.

Create a table

The following code example demonstrates a user-defined function that creates a SQL query. The SQL query creates a table that interacts with a Kinesis stream:

def create_table(table_name, stream_name, region, stream_initpos):
    return """ CREATE TABLE {0} (
        `record_id` VARCHAR(64) NOT NULL,
        `event_time` BIGINT NOT NULL,
        `record_number` BIGINT NOT NULL,
        `num_retries` BIGINT NOT NULL,
        `verified` BOOLEAN NOT NULL
    )
    PARTITIONED BY (record_id)
    WITH (
        'connector' = 'kinesis',
        'stream' = '{1}',
        'aws.region' = '{2}',
        'scan.stream.initpos' = '{3}',
        'sink.partitioner-field-delimiter' = ';',
        'sink.producer.collection-max-count' = '100',
        'format' = 'json',
        'json.timestamp-format.standard' = 'ISO-8601'
    ) """.format(table_name, stream_name, region, stream_initpos)

Read streaming data

The following code example demonstrates how to use the preceding create_table SQL query on a table environment reference to read data:

table_env.execute_sql(create_table(input_table, input_stream, input_region, stream_initpos))

Write streaming data

The following code example demonstrates how to use the SQL query from the create_table example to create an output table reference, and how to execute an INSERT statement that writes data from the input table to a destination Kinesis stream:

table_result = table_env.execute_sql("INSERT INTO {0} SELECT * FROM {1}"
    .format(output_table_name, input_table_name))
Read runtime properties

You can use runtime properties to configure your application without changing your application code. You specify application properties for your application the same way as with a Managed Service for Apache Flink for Java application. You can specify runtime properties in the following ways:

• Using the CreateApplication action.
• Using the UpdateApplication action.
• Configuring your application by using the console.

You retrieve application properties in code by reading a JSON file called application_properties.json that the Managed Service for Apache Flink runtime creates.

The following code example demonstrates reading application properties from the application_properties.json file:

file_path = '/etc/flink/application_properties.json'
if os.path.isfile(file_path):
    with open(file_path, 'r') as file:
        contents = file.read()
        properties = json.loads(contents)

The following user-defined function code example demonstrates reading a property group from the application properties object:

def property_map(properties, property_group_id):
    for prop in properties:
        if prop["PropertyGroupId"] == property_group_id:
            return prop["PropertyMap"]

The following code example demonstrates reading a property called INPUT_STREAM_KEY from a property group that the previous example returns:

input_stream = input_property_map[INPUT_STREAM_KEY]

Create your application's code package

Once you have created your Python application, you bundle your code file and dependencies into a zip file. Your zip file must contain a Python script with a main method, and can optionally contain the following:

• Additional Python code files
• User-defined Java code in JAR files
• Java libraries in JAR files

Note
Your application zip file must contain all of the dependencies for your application. You can't reference libraries from other sources for your application.

Create your Managed Service for Apache Flink Python application

Specify your code files

Once you have created your application's code package, you upload it to an Amazon S3 bucket. You then create your application using either the console or the CreateApplication action.

When you create your application using the CreateApplication action, you specify the code files and archives in your zip file using a special application property group called kinesis.analytics.flink.run.options. You can define the following types of files:

• python: A text file containing a Python main method.
• jarfile: A Java JAR file containing Java user-defined functions.
• pyFiles: A Python resource file containing resources to be used by the application.
• pyArchives: A zip file containing resource files for the application.

For more information about Apache Flink Python code file types, see Command-Line Interface in the Apache Flink Documentation.

Note
Managed Service for Apache Flink does not support the pyModule, pyExecutable, or pyRequirements file types. All of the code, requirements, and dependencies must be in your zip file. You can't specify dependencies to be installed using pip.
The following example JSON snippet demonstrates how to specify file locations within your application's zip file:

"ApplicationConfiguration": {
    "EnvironmentProperties": {
        "PropertyGroups": [
            {
                "PropertyGroupId": "kinesis.analytics.flink.run.options",
                "PropertyMap": {
                    "python": "MyApplication/main.py",
                    "jarfile": "MyApplication/lib/myJarFile.jar",
                    "pyFiles": "MyApplication/lib/myDependentFile.py",
                    "pyArchives": "MyApplication/lib/myArchive.zip"
                }
            }
        ]
    }
}

Monitor your Managed Service for Apache Flink Python application

You use your application's CloudWatch log to monitor your Managed Service for Apache Flink Python application. Managed Service for Apache Flink logs the following messages for Python applications:

• Messages written to the console using print() in the application's main method.
• Messages sent in user-defined functions using the logging package. The following code example demonstrates writing to the application log from a user-defined function:

import logging

@udf(input_types=[DataTypes.BIGINT()], result_type=DataTypes.BIGINT())
def doNothingUdf(i):
    logging.info("Got {} in the doNothingUdf".format(str(i)))
    return i

• Error messages thrown by the application. If the application throws an exception in the main function, it will appear in your application's logs. The following example demonstrates a log entry for an exception thrown from Python code:

2021-03-15 16:21:20.000 --------------------------- Python Process Started --------------------------
2021-03-15 16:21:21.000 Traceback (most recent call last):
2021-03-15 16:21:21.000 "  File ""/tmp/flink-web-6118109b-1cd2-439c-9dcd-218874197fa9/flink-web-upload/4390b233-75cb-4205-a532-441a2de83db3_code/PythonKinesisSink/PythonUdfUndeclared.py"", line 101, in <module>"
2021-03-15 16:21:21.000     main()
2021-03-15 16:21:21.000 "  File ""/tmp/flink-web-6118109b-1cd2-439c-9dcd-218874197fa9/flink-web-upload/4390b233-75cb-4205-a532-441a2de83db3_code/PythonKinesisSink/PythonUdfUndeclared.py"", line 54, in main"
2021-03-15 16:21:21.000 "    table_env.register_function(""doNothingUdf"", doNothingUdf)"
2021-03-15 16:21:21.000 NameError: name 'doNothingUdf' is not defined
2021-03-15 16:21:21.000 --------------------------- Python Process Exited ---------------------------
2021-03-15 16:21:21.000 Run python process failed
2021-03-15 16:21:21.000 Error occurred when trying to start the job

Note
Due to performance issues, we recommend that you use custom log messages only during application development.

Query logs with CloudWatch Insights

The following CloudWatch Insights query searches for logs created by the Python entrypoint while executing the main function of your application:

fields @timestamp, message
| sort @timestamp asc
| filter logger like /PythonDriver/
| limit 1000
Use runtime properties in Managed Service for Apache Flink

You can use runtime properties to configure your application without recompiling your application code.

This topic contains the following sections:

• Manage runtime properties using the console
• Manage runtime properties using the CLI
• Access runtime properties in a Managed Service for Apache Flink application

Manage runtime properties using the console

You can add, update, or remove runtime properties from your Managed Service for Apache Flink application using the AWS Management Console.

Note
If you are using an earlier supported version of Apache Flink and want to upgrade your existing applications to Apache Flink 1.19.1, you can do so using in-place Apache Flink version upgrades. With in-place version upgrades, you retain application traceability against a single ARN across Apache Flink versions, including snapshots, logs, metrics, tags, Flink configurations, and more. You can use this feature in the RUNNING and READY states. For more information, see Use in-place version upgrades for Apache Flink.

Update runtime properties for a Managed Service for Apache Flink application

1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink.
2. Choose your Managed Service for Apache Flink application. Choose Application details.
3. On the page for your application, choose Configure.
4. Expand the Properties section.
5. Use the controls in the Properties section to define a property group with key-value pairs. Use these controls to add, update, or remove property groups and runtime properties.
6. Choose Update.

Manage runtime properties using the CLI

You can add, update, or remove runtime properties using the AWS CLI. This section includes example requests for API actions for configuring runtime properties for an application. For information about how to use a JSON file as input for an API action, see Managed Service for Apache Flink API example code.

Note
Replace the sample account ID (012345678901) in the following examples with your account ID.
Add runtime properties when creating an application

The following example request for the CreateApplication action adds two runtime property groups (ProducerConfigProperties and ConsumerConfigProperties) when you create an application:

{
    "ApplicationName": "MyApplication",
    "ApplicationDescription": "my java test app",
    "RuntimeEnvironment": "FLINK-1_19",
    "ServiceExecutionRole": "arn:aws:iam::012345678901:role/MF-stream-rw-role",
    "ApplicationConfiguration": {
        "ApplicationCodeConfiguration": {
            "CodeContent": {
                "S3ContentLocation": {
                    "BucketARN": "arn:aws:s3:::ka-app-code-username",
                    "FileKey": "java-getting-started-1.0.jar"
                }
            },
            "CodeContentType": "ZIPFILE"
        },
        "EnvironmentProperties": {
            "PropertyGroups": [
                {
                    "PropertyGroupId": "ProducerConfigProperties",
                    "PropertyMap": {
                        "flink.stream.initpos": "LATEST",
                        "aws.region": "us-west-2",
                        "AggregationEnabled": "false"
                    }
                },
                {
                    "PropertyGroupId": "ConsumerConfigProperties",
                    "PropertyMap": {
                        "aws.region": "us-west-2"
                    }
                }
            ]
        }
    }
}

Add and update runtime properties in an existing application

The following example request for the UpdateApplication action adds or updates runtime properties for an existing application:

{
    "ApplicationName": "MyApplication",
    "CurrentApplicationVersionId": 2,
    "ApplicationConfigurationUpdate": {
        "EnvironmentPropertyUpdates": {
            "PropertyGroups": [
                {
                    "PropertyGroupId": "ProducerConfigProperties",
                    "PropertyMap": {
                        "flink.stream.initpos": "LATEST",
                        "aws.region": "us-west-2",
                        "AggregationEnabled": "false"
                    }
                },
                {
                    "PropertyGroupId": "ConsumerConfigProperties",
                    "PropertyMap": {
                        "aws.region": "us-west-2"
                    }
                }
            ]
        }
    }
}

Note
If you use a key that has no corresponding runtime property in a property group, Managed Service for Apache Flink adds the key-value pair as a new property. If you use a key for an existing runtime property in a property group, Managed Service for Apache Flink updates the property value.
"MyApplication", "CurrentApplicationVersionId": 3, "ApplicationConfigurationUpdate": { "EnvironmentPropertyUpdates": { "PropertyGroups": [] } } } Important If you omit an existing property group or an existing property key in a property group, that property group or property is removed. Access runtime properties in a Managed Service for Apache Flink application You retrieve runtime properties in your Java application code using the static KinesisAnalyticsRuntime.getApplicationProperties() method, which returns a Map<String, Properties> object. The following Java code example retrieves runtime properties for your application: Access runtime properties in a Managed Service for Apache Flink application 47 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Map<String, Properties> applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties(); You retrieve a property group (as a Java.Util.Properties object) as follows: Properties consumerProperties = applicationProperties.get("ConsumerConfigProperties"); You typically configure an Apache Flink source or sink by passing in the Properties object without needing to retrieve the individual properties. The following code example demonstrates how to create an Flink source by passing in a Properties object retrieved from runtime properties: private static FlinkKinesisProducer<String> createSinkFromApplicationProperties() throws IOException { Map<String, Properties> applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties(); FlinkKinesisProducer<String> sink = new FlinkKinesisProducer<String>(new SimpleStringSchema(), applicationProperties.get("ProducerConfigProperties")); sink.setDefaultStream(outputStreamName); sink.setDefaultPartition("0"); return sink; } For code examples, see Examples for creating and working with Managed Service for Apache Flink applications. Use Apache Flink connectors with Managed Service for Apache Flink Apache Flink connectors are software components that move data into and out of an Amazon Managed Service for Apache Flink application. Connectors are flexible integrations that let you read from files and directories. Connectors consist of complete modules for interacting with Amazon services and third-party systems. Types of connectors include the following: Use Apache Flink connectors 48 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Sources: Provide data to your application from a Kinesis data stream, file, Apache Kafka topic, file, or other data sources. • Sinks: Send data from your application to a Kinesis data stream, Firehose stream, Apache Kafka topic, or other data destinations. • Asynchronous I/O: Provides asynchronous access to a data source such as a database to enrich streams. Apache Flink connectors are stored in their own source repositories. The version and artifact for Apache Flink connectors changes depending on the Apache Flink version you are using, and whether you are using the DataStream, Table, or SQL API. Amazon Managed Service for Apache Flink supports over 40 pre-built Apache Flink source and sink connectors. The following table provides a summary of the most popular connectors and their associated versions. You can also build custom sinks using the Async-sink framework. For more information, see The Generic Asynchronous Base Sink in the Apache Flink documentation. To access the repository for Apache Flink AWS connectors, see flink-connector-aws. 
Connectors for Flink versions

• Kinesis Data Streams - Source - DataStream and Table API:
  Flink 1.15: flink-connector-kinesis, 1.15.4 · Flink 1.18: flink-connector-kinesis, 4.3.0-1.18 · Flink 1.19: flink-connector-kinesis, 5.0.0-1.19 · Flink 1.20: flink-connector-kinesis, 5.0.0-1.20
• Kinesis Data Streams - Sink - DataStream and Table API:
  Flink 1.15: flink-connector-aws-kinesis-streams, 1.15.4 · Flink 1.18: flink-connector-aws-kinesis-streams, 4.3.0-1.18 · Flink 1.19: flink-connector-aws-kinesis-streams, 5.0.0-1.19 · Flink 1.20: flink-connector-aws-kinesis-streams, 5.0.0-1.20
• Kinesis Data Streams - Source/Sink - SQL:
  Flink 1.15: flink-sql-connector-kinesis, 1.15.4 · Flink 1.18: flink-sql-connector-kinesis, 4.3.0-1.18 · Flink 1.19: flink-sql-connector-kinesis, 5.0.0-1.19 · Flink 1.20: flink-sql-connector-kinesis-streams, 5.0.0-1.20
• Kafka - DataStream and Table API:
  Flink 1.15: flink-connector-kafka, 1.15.4 · Flink 1.18: flink-connector-kafka, 3.2.0-1.18 · Flink 1.19: flink-connector-kafka, 3.3.0-1.19 · Flink 1.20: flink-connector-kafka, 3.3.0-1.20
• Kafka - SQL:
  Flink 1.15: flink-sql-connector-kafka, 1.15.4 · Flink 1.18: flink-sql-connector-kafka, 3.2.0-1.18 · Flink 1.19: flink-sql-connector-kafka, 3.3.0-1.19 · Flink 1.20: flink-sql-connector-kafka, 3.3.0-1.20
• Firehose - DataStream and Table API:
  Flink 1.15: flink-connector-aws-kinesis-firehose, 1.15.4 · Flink 1.18: flink-connector-aws-firehose, 4.3.0-1.18 · Flink 1.19: flink-connector-aws-firehose, 5.0.0-1.19 · Flink 1.20: flink-connector-aws-firehose, 5.0.0-1.20
• Firehose - SQL:
  Flink 1.15: flink-sql-connector-aws-kinesis-firehose, 1.15.4 · Flink 1.18: flink-sql-connector-aws-firehose, 4.3.0-1.18 · Flink 1.19: flink-sql-connector-aws-firehose, 5.0.0-1.19 · Flink 1.20: flink-sql-connector-aws-firehose, 5.0.0-1.20
• DynamoDB - DataStream and Table API:
  Flink 1.15: flink-connector-dynamodb, 3.0.0-1.15 · Flink 1.18: flink-connector-dynamodb, 4.3.0-1.18 · Flink 1.19: flink-connector-dynamodb, 5.0.0-1.19 · Flink 1.20: flink-connector-dynamodb, 5.0.0-1.20
• DynamoDB - SQL:
  Flink 1.15: flink-sql-connector-dynamodb, 3.0.0-1.15 · Flink 1.18: flink-sql-connector-dynamodb, 4.3.0-1.18 · Flink 1.19: flink-sql-connector-dynamodb, 5.0.0-1.19 · Flink 1.20: flink-sql-connector-dynamodb, 5.0.0-1.20
• OpenSearch - DataStream and Table API:
  Flink 1.15: - · Flink 1.18: flink-connector-opensearch, 1.2.0-1.18 · Flink 1.19: flink-connector-opensearch, 1.2.0-1.19 · Flink 1.20: flink-connector-opensearch, 1.2.0-1.19
• OpenSearch - SQL:
  Flink 1.15: - · Flink 1.18: flink-sql-connector-opensearch, 1.2.0-1.18 · Flink 1.19: flink-sql-connector-opensearch, 1.2.0-1.19 · Flink 1.20: flink-sql-connector-opensearch, 1.2.0-1.19
• Amazon Managed Service for Prometheus - DataStream:
  Flink 1.15: - · Flink 1.18: - · Flink 1.19: flink-connector-prometheus, 1.0.0-1.19 · Flink 1.20: flink-connector-prometheus, 1.0.0-1.20
• Amazon SQS - DataStream and Table API:
  Flink 1.15: - · Flink 1.18: - · Flink 1.19: flink-connector-sqs, 5.0.0-1.19 · Flink 1.20: flink-connector-sqs, 5.0.0-1.20

To learn more about connectors in Amazon Managed Service for Apache Flink, see:

• DataStream API connectors
• Table API connectors
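As an illustration of how the artifacts in the preceding list are used, the following is a minimal sketch (not from this guide) that builds a DataStream source with the flink-connector-kafka artifact; the broker address, topic, and group ID are placeholder assumptions:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;

KafkaSource<String> source = KafkaSource.<String>builder()
    .setBootstrapServers("broker-1:9092")              // placeholder broker address
    .setTopics("example-topic")                        // placeholder topic name
    .setGroupId("example-group")                       // placeholder consumer group
    .setStartingOffsets(OffsetsInitializer.earliest())
    .setValueOnlyDeserializer(new SimpleStringSchema())
    .build();

DataStream<String> stream =
    env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source");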
Known issues

There is a known open source Apache Flink issue with the Apache Kafka connector in Apache Flink 1.15. This issue is resolved in later versions of Apache Flink. For more information, see the section called "Known issues".

Implement fault tolerance in Managed Service for Apache Flink

Checkpointing is the method that is used for implementing fault tolerance in Amazon Managed Service for Apache Flink. A checkpoint is an up-to-date backup of a running application that is used to recover immediately from an unexpected application disruption or failover. For details on checkpointing in Apache Flink applications, see Checkpoints in the Apache Flink Documentation.

A snapshot is a manually created and managed backup of application state. Snapshots let you restore your application to a previous state by calling UpdateApplication. For more information, see Manage application backups using snapshots.

If checkpointing is enabled for your application, then the service provides fault tolerance by creating and loading backups of application data in the event of unexpected application restarts, such as unexpected job restarts or instance failures. This gives the application the same semantics as failure-free execution during these restarts.

If snapshots are enabled for the application, and configured using the application's ApplicationRestoreConfiguration, then the service provides exactly-once processing semantics during application updates, or during service-related scaling or maintenance.

Configure checkpointing in Managed Service for Apache Flink

You can configure your application's checkpointing behavior. You can define whether it persists the checkpointing state, how often it saves its state to checkpoints, and the minimum interval between the end of one checkpoint operation and the beginning of another.

You configure the following settings using the CreateApplication or UpdateApplication API operations:

• CheckpointingEnabled — Indicates whether checkpointing is enabled in the application.
• CheckpointInterval — Contains the time in milliseconds between checkpoint (persistence) operations.
• ConfigurationType — Set this value to DEFAULT to use the default checkpointing behavior. Set this value to CUSTOM to configure other values.
Note
The default checkpoint behavior is as follows:
• CheckpointingEnabled: true
• CheckpointInterval: 60000
• MinPauseBetweenCheckpoints: 5000

If ConfigurationType is set to DEFAULT, the preceding values are used, even if they are set to other values using the AWS Command Line Interface or in the application code.

Note
For Flink 1.15 onward, Managed Service for Apache Flink uses stop-with-savepoint during automatic snapshot creation, that is, during application update, scaling, or stopping.

• MinPauseBetweenCheckpoints — The minimum time in milliseconds between the end of one checkpoint operation and the start of another. Setting this value prevents the application from checkpointing continuously when a checkpoint operation takes longer than the CheckpointInterval.

Review checkpointing API examples

This section includes example requests for API actions for configuring checkpointing for an application. For information about how to use a JSON file as input for an API action, see Managed Service for Apache Flink API example code.

Configure checkpointing for a new application

The following example request for the CreateApplication action configures checkpointing when you are creating an application:

{
    "ApplicationName": "MyApplication",
    "RuntimeEnvironment": "FLINK-1_19",
    "ServiceExecutionRole": "arn:aws:iam::123456789123:role/myrole",
    "ApplicationConfiguration": {
        "ApplicationCodeConfiguration": {
            "CodeContent": {
                "S3ContentLocation": {
                    "BucketARN": "arn:aws:s3:::amzn-s3-demo-bucket",
                    "FileKey": "myflink.jar",
                    "ObjectVersion": "AbCdEfGhIjKlMnOpQrStUvWxYz12345"
                }
            }
        },
        "FlinkApplicationConfiguration": {
            "CheckpointConfiguration": {
                "CheckpointingEnabled": "true",
                "CheckpointInterval": 20000,
                "ConfigurationType": "CUSTOM",
                "MinPauseBetweenCheckpoints": 10000
            }
        }
    }
}

Disable checkpointing for a new application

The following example request for the CreateApplication action disables checkpointing when you are creating an application:

{
    "ApplicationName": "MyApplication",
    "RuntimeEnvironment": "FLINK-1_19",
    "ServiceExecutionRole": "arn:aws:iam::123456789123:role/myrole",
    "ApplicationConfiguration": {
        "ApplicationCodeConfiguration": {
            "CodeContent": {
                "S3ContentLocation": {
                    "BucketARN": "arn:aws:s3:::amzn-s3-demo-bucket",
                    "FileKey": "myflink.jar",
                    "ObjectVersion": "AbCdEfGhIjKlMnOpQrStUvWxYz12345"
                }
            }
        },
        "FlinkApplicationConfiguration": {
            "CheckpointConfiguration": {
                "CheckpointingEnabled": "false"
            }
        }
    }
}

Configure checkpointing for an existing application

The following example request for the UpdateApplication action configures checkpointing for an existing application:

{
    "ApplicationName": "MyApplication",
    "ApplicationConfigurationUpdate": {
        "FlinkApplicationConfigurationUpdate": {
            "CheckpointConfigurationUpdate": {
                "CheckpointingEnabledUpdate": true,
                "CheckpointIntervalUpdate": 20000,
                "ConfigurationTypeUpdate": "CUSTOM",
                "MinPauseBetweenCheckpointsUpdate": 10000
            }
        }
    }
}
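For comparison, Apache Flink exposes equivalent settings in application code. The following is a minimal sketch of the corresponding Flink API calls; note that, per the note above, the service-side defaults apply when ConfigurationType is set to DEFAULT:

// A sketch of the equivalent checkpoint settings in Apache Flink application code.
// When ConfigurationType is DEFAULT, the service-side values take precedence.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(20000);                                 // checkpoint every 20 seconds
env.getCheckpointConfig().setMinPauseBetweenCheckpoints(10000); // at least 10 seconds between checkpoints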
Disable checkpointing for an existing application

The following example request for the UpdateApplication action disables checkpointing for an existing application:

{
    "ApplicationName": "MyApplication",
    "ApplicationConfigurationUpdate": {
        "FlinkApplicationConfigurationUpdate": {
            "CheckpointConfigurationUpdate": {
                "CheckpointingEnabledUpdate": false,
                "CheckpointIntervalUpdate": 20000,
                "ConfigurationTypeUpdate": "CUSTOM",
                "MinPauseBetweenCheckpointsUpdate": 10000
            }
        }
    }
}

Manage application backups using snapshots

A snapshot is the Managed Service for Apache Flink implementation of an Apache Flink savepoint. A snapshot is a user- or service-triggered backup of the application state that the service creates and manages. For information about Apache Flink savepoints, see Savepoints in the Apache Flink Documentation. Using snapshots, you can restart an application from a particular snapshot of the application state.

Note
We recommend that your application create a snapshot several times a day so that it can restart properly with correct state data. The correct frequency for your snapshots depends on your application's business logic. Taking frequent snapshots lets you recover more recent data, but increases cost and requires more system resources.

In Managed Service for Apache Flink, you manage snapshots using the following API actions:

• CreateApplicationSnapshot
• DeleteApplicationSnapshot
• DescribeApplicationSnapshot
• ListApplicationSnapshots

For the per-application limit on the number of snapshots, see Managed Service for Apache Flink and Studio notebook quota. If your application reaches the limit on snapshots, then manually creating a snapshot fails with a LimitExceededException.

Managed Service for Apache Flink never deletes snapshots. You must manually delete your snapshots using the DeleteApplicationSnapshot action.

To load a saved snapshot of application state when starting an application, use the ApplicationRestoreConfiguration parameter of the StartApplication or UpdateApplication action.
This topic contains the following sections:

• Manage automatic snapshot creation
• Restore from a snapshot that contains incompatible state data
• Review snapshot API examples

Manage automatic snapshot creation

If SnapshotsEnabled is set to true in the ApplicationSnapshotConfiguration for the application, Managed Service for Apache Flink automatically creates and uses snapshots when the application is updated, scaled, or stopped, to provide exactly-once processing semantics.

Note
Setting ApplicationSnapshotConfiguration::SnapshotsEnabled to false will lead to data loss during application updates.

Note
Managed Service for Apache Flink triggers intermediate savepoints during snapshot creation. For Flink version 1.15 or greater, intermediate savepoints no longer commit any side effects. See Triggering savepoints.

Automatically created snapshots have the following qualities:

• The snapshot is managed by the service, but you can see the snapshot using the ListApplicationSnapshots action. Automatically created snapshots count against your snapshot limit.
• If your application exceeds the snapshot limit, manually created snapshots will fail, but the Managed Service for Apache Flink service will still successfully create snapshots when the application is updated, scaled, or stopped. You must manually delete snapshots using the DeleteApplicationSnapshot action before creating more snapshots manually.

Restore from a snapshot that contains incompatible state data

Because snapshots contain information about operators, restoring state data from a snapshot for an operator that has changed since the previous application version can have unexpected results. An application faults if it attempts to restore state data from a snapshot that does not correspond to the current operator. The faulted application will be stuck in either the STOPPING or UPDATING state.

To allow an application to restore from a snapshot that contains incompatible state data, set the AllowNonRestoredState parameter of the FlinkRunConfiguration to true using the UpdateApplication action; an example request follows this list. You will see the following behavior when an application is restored from an obsolete snapshot:

• Operator added: If a new operator is added, the savepoint has no state data for the new operator. No fault will occur, and it is not necessary to set AllowNonRestoredState.
• Operator deleted: If an existing operator is deleted, the savepoint has state data for the missing operator. A fault will occur unless AllowNonRestoredState is set to true.
• Operator modified: If compatible changes are made, such as changing a parameter's type to a compatible type, the application can restore from the obsolete snapshot. For more information about restoring from snapshots, see Savepoints in the Apache Flink Documentation.
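The following is a minimal sketch of such an UpdateApplication request; the application name and version ID are placeholders:

{
    "ApplicationName": "MyApplication",
    "CurrentApplicationVersionId": 4,
    "RunConfigurationUpdate": {
        "FlinkRunConfiguration": {
            "AllowNonRestoredState": true
        }
    }
}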
An application that uses Apache Flink version 1.8 or later can possibly be restored from a snapshot with a different schema. An application that uses Apache Flink version 1.6 cannot be restored.

For two-phase-commit sinks, we recommend using system snapshots (SwS) instead of user-created snapshots (CreateApplicationSnapshot). For Flink, Managed Service for Apache Flink triggers intermediate savepoints during snapshot creation. For Flink 1.15 onward, intermediate savepoints no longer commit any side effects. See Triggering Savepoints.

If you need to resume an application that is incompatible with existing savepoint data, we recommend that you skip restoring from the snapshot by setting the ApplicationRestoreType parameter of the StartApplication action to SKIP_RESTORE_FROM_SNAPSHOT. For more information about how Apache Flink deals with incompatible state data, see State Schema Evolution in the Apache Flink Documentation.

Review snapshot API examples

This section includes example requests for API actions for using snapshots with an application. For information about how to use a JSON file as input for an API action, see Managed Service for Apache Flink API example code.

Enable snapshots for an application

The following example request for the UpdateApplication action enables snapshots for an application:

{
    "ApplicationName": "MyApplication",
    "CurrentApplicationVersionId": 1,
    "ApplicationConfigurationUpdate": {
        "ApplicationSnapshotConfigurationUpdate": {
            "SnapshotsEnabledUpdate": "true"
        }
    }
}

Create a snapshot

The following example request for the CreateApplicationSnapshot action creates a snapshot of the current application state:

{
    "ApplicationName": "MyApplication",
    "SnapshotName": "MyCustomSnapshot"
}

List snapshots for an application

The following example request for the ListApplicationSnapshots action lists the first 50 snapshots for the current application state:

{
    "ApplicationName": "MyApplication",
    "Limit": 50
}

List details for an application snapshot

The following example request for the DescribeApplicationSnapshot action lists details for a specific application snapshot:

{
    "ApplicationName": "MyApplication",
    "SnapshotName": "MyCustomSnapshot"
}
Delete a snapshot

The following example request for the DeleteApplicationSnapshot action deletes a previously saved snapshot. You can get the SnapshotCreationTimestamp value using either the ListApplicationSnapshots or DescribeApplicationSnapshot action:

{
    "ApplicationName": "MyApplication",
    "SnapshotName": "MyCustomSnapshot",
    "SnapshotCreationTimestamp": 12345678901.0
}

Restart an application using a named snapshot

The following example request for the StartApplication action starts the application using the saved state from a specific snapshot:

{
    "ApplicationName": "MyApplication",
    "RunConfiguration": {
        "ApplicationRestoreConfiguration": {
            "ApplicationRestoreType": "RESTORE_FROM_CUSTOM_SNAPSHOT",
            "SnapshotName": "MyCustomSnapshot"
        }
    }
}

Restart an application using the most recent snapshot

The following example request for the StartApplication action starts the application using the most recent snapshot:

{
    "ApplicationName": "MyApplication",
    "RunConfiguration": {
        "ApplicationRestoreConfiguration": {
            "ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT"
        }
    }
}

Restart an application using no snapshot

The following example request for the StartApplication action starts the application without loading application state, even if a snapshot is present:

{
    "ApplicationName": "MyApplication",
    "RunConfiguration": {
        "ApplicationRestoreConfiguration": {
            "ApplicationRestoreType": "SKIP_RESTORE_FROM_SNAPSHOT"
        }
    }
}

Use in-place version upgrades for Apache Flink

With in-place version upgrades for Apache Flink, you retain application traceability against a single ARN across Apache Flink versions. This includes snapshots, logs, metrics, tags, Flink configurations, resource limit increases, VPCs, and more. You can perform in-place version upgrades for Apache Flink to upgrade existing applications to a new Flink version in Amazon Managed Service for Apache Flink. To perform this task, you can use the AWS CLI, AWS CloudFormation, AWS SDK, or the AWS Management Console.

Note
You can't use in-place version upgrades for Apache Flink with Amazon Managed Service for Apache Flink Studio.

This topic contains the following sections:

• Upgrade applications using in-place version upgrades for Apache Flink
• Upgrade your application to a new Apache Flink version
• Roll back application upgrades
• General best practices and recommendations for application upgrades
• Precautions and known issues with application upgrades
Upgrade applications using in-place version upgrades for Apache Flink

Before you begin, we recommend that you watch this video: In-Place Version Upgrades.

To perform in-place version upgrades for Apache Flink, you can use the AWS CLI, AWS CloudFormation, AWS SDK, or the AWS Management Console. You can use this feature with any existing applications that you use with Managed Service for Apache Flink in a READY or RUNNING state. It uses the UpdateApplication API to add the ability to change the Flink runtime.

Before upgrading: Update your Apache Flink application

When you write your Flink applications, you bundle them with their dependencies into an application JAR and upload the JAR to your Amazon S3 bucket. From there, Amazon Managed Service for Apache Flink runs the job in the new Flink runtime that you've selected. You might have to update your applications to achieve compatibility with the Flink runtime you want to upgrade to. There can be inconsistencies between Flink versions that cause the version upgrade to fail. Most commonly, this will be with connectors for sources (ingress) or destinations (sinks, egress) and Scala dependencies. Flink 1.15 and later versions in Managed Service for Apache Flink are Scala-agnostic, and your JAR must contain the version of Scala you plan to use.

To update your application:

1. Read the advice from the Flink community on upgrading applications with state. See Upgrading Applications and Flink Versions.
2. Read the list of known issues and limitations. See Precautions and known issues with application upgrades.
3. Update your dependencies and test your applications locally. These dependencies typically are:
   1. The Flink runtime and API.
   2. Connectors recommended for the new Flink runtime. You can find these on Release versions for the specific runtime you want to update to.
   3. Scala: Apache Flink is Scala-agnostic starting with and including Flink 1.15. You must include the Scala dependencies you want to use in your application JAR.
4. Build a new application JAR or zipfile and upload it to Amazon S3. We recommend that you use a different name from the previous JAR or zipfile. If you need to roll back, you will use this information.
5. If you are running stateful applications, we strongly recommend that you take a snapshot of your current application. This lets you roll back statefully if you encounter issues during or after the upgrade.

Upgrade your application to a new Apache Flink version

You can upgrade your Flink application by using the UpdateApplication action. You can call the UpdateApplication API in multiple ways:

• Use the existing Configuration workflow on the AWS Management Console.
  • Go to your app page on the AWS Management Console.
  • Choose Configure.
  • Select the new runtime and the snapshot that you want to start from, also known as the restore configuration. Use the latest setting as the restore configuration to start the app from the latest snapshot. Point to the new upgraded application JAR/zip on Amazon S3.
• Use the AWS CLI update-application action.
• Use AWS CloudFormation (CFN).
  • Update the RuntimeEnvironment field. Previously, AWS CloudFormation deleted the application and created a new one, causing your snapshots and other app history to be lost. Now AWS CloudFormation updates your RuntimeEnvironment in place and does not delete your application.
• Use the AWS SDK.
  • Consult the SDK documentation for the programming language of your choice. See UpdateApplication.

You can perform the upgrade while the application is in RUNNING state or while the application is stopped in READY state. Amazon Managed Service for Apache Flink validates to verify the compatibility between the original runtime version and the target runtime version. This compatibility check runs when you perform UpdateApplication while in RUNNING state, or at the next StartApplication if you upgrade while in READY state.
Upgrade an application in RUNNING state

The following example shows upgrading an app in RUNNING state named UpgradeTest to Flink 1.18 in US East (N. Virginia) using the AWS CLI and starting the upgraded app from the latest snapshot:

aws --region us-east-1 kinesisanalyticsv2 update-application \
  --application-name UpgradeTest --runtime-environment-update "FLINK-1_18" \
  --application-configuration-update '{"ApplicationCodeConfigurationUpdate": '\
'{"CodeContentUpdate": {"S3ContentLocationUpdate": '\
'{"FileKeyUpdate": "flink_1_18_app.jar"}}}}' \
  --run-configuration-update '{"ApplicationRestoreConfiguration": '\
'{"ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT"}}' \
  --current-application-version-id ${current_application_version}

• If you enabled service snapshots and want to continue the application from the latest snapshot, Amazon Managed Service for Apache Flink verifies that the current RUNNING application's runtime is compatible with the selected target runtime.
• If you have specified a snapshot from which to continue the target runtime, Amazon Managed Service for Apache Flink verifies that the target runtime is compatible with the specified snapshot. If the compatibility check fails, your update request is rejected and your application remains untouched in the RUNNING state.
• If you choose to start your application without a snapshot, Amazon Managed Service for Apache Flink doesn't run any compatibility checks.
• If your upgraded application fails or gets stuck in a transitive UPDATING state, follow the instructions in the Roll back application upgrades section to return to the healthy state.

[Figure: Process flow for running state applications]

Upgrade an application in READY state

The following example shows upgrading an app in READY state named UpgradeTest to Flink 1.18 in US East (N. Virginia) using the AWS CLI. There is no specified snapshot to start the app because the application is not running. You can specify a snapshot when you issue the start application request:

aws --region us-east-1 kinesisanalyticsv2 update-application \
  --application-name UpgradeTest --runtime-environment-update "FLINK-1_18" \
  --application-configuration-update '{"ApplicationCodeConfigurationUpdate": '\
'{"CodeContentUpdate": {"S3ContentLocationUpdate": '\
'{"FileKeyUpdate": "flink_1_18_app.jar"}}}}' \
  --current-application-version-id ${current_application_version}

• You can update the runtime of your applications in READY state to any Flink version. Amazon Managed Service for Apache Flink does not run any checks until you start your application.
• Amazon Managed Service for Apache Flink only runs compatibility checks against the snapshot you selected to start the app. These are basic compatibility checks following the Flink Compatibility Table. They only check the Flink version with which the snapshot was taken and the Flink version you are targeting. If the Flink runtime of the selected snapshot is incompatible with the app's new runtime, the start request might be rejected.
Process flow for ready state applications (diagram)

Roll back application upgrades

If you have issues with your application or find inconsistencies in your application code between Flink versions, you can roll back using the AWS CLI, AWS CloudFormation, the AWS SDK, or the AWS Management Console. The following examples show what rolling back looks like in different failure scenarios.

Runtime upgrade succeeded, the application is in RUNNING state, but the job is failing and continuously restarting

Assume you are trying to upgrade a stateful application named TestApplication from Flink 1.15 to Flink 1.18 in US East (N. Virginia). However, the upgraded Flink 1.18 application is failing to start or is constantly restarting, even though the application is in RUNNING state. This is a common failure scenario. To avoid further downtime, we recommend that you roll back your application immediately to the previous running version (Flink 1.15) and diagnose the issue later.

To roll back the application to the previous running version, use the rollback-application AWS CLI command or the RollbackApplication API action. This API action rolls back the changes you've made that resulted in the latest version. Then it restarts your application using the latest successful snapshot. We strongly recommend that you take a snapshot of your existing app before you attempt to upgrade. This helps you avoid data loss or having to reprocess data.

In this failure scenario, AWS CloudFormation will not roll back the application for you. You must update the CloudFormation template to point to the previous runtime and to the previous code to force CloudFormation to update the application. Otherwise, CloudFormation assumes that your application has been updated when it transitions to the RUNNING state.
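A minimal sketch of that rollback call follows; the version ID is illustrative and must match the application's current version:

aws --region us-east-1 kinesisanalyticsv2 rollback-application \
 --application-name TestApplication \
 --current-application-version-id ${current_application_version}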
Rolling back an application that is stuck in UPDATING

If your application gets stuck in the UPDATING or AUTOSCALING state after an upgrade attempt, Amazon Managed Service for Apache Flink offers the rollback-application AWS CLI command, or the RollbackApplication API action, to roll back the application to the version before the stuck UPDATING or AUTOSCALING state. This API rolls back the changes that you've made that caused the application to get stuck in the UPDATING or AUTOSCALING transitive state.

General best practices and recommendations for application upgrades

• Test the new job/runtime without state on a non-production environment before attempting a production upgrade.
• Consider testing the stateful upgrade with a non-production application first.
• Make sure that your new job graph has a compatible state with the snapshot you will be using to start your upgraded application.
• Make sure that the types stored in operator states stay the same. If the type has changed, Apache Flink can't restore the operator state.
• Make sure that the Operator IDs you set using the uid method remain the same (see the sketch after this list). Apache Flink strongly recommends assigning unique IDs to operators. For more information, see Assigning Operator IDs in the Apache Flink documentation. If you don't assign IDs to your operators, Flink automatically generates them. In that case, they might depend on the program structure and, if changed, can cause compatibility issues. Flink uses Operator IDs to match the state in a snapshot to an operator. Changing Operator IDs results in the application not starting, or in state stored in the snapshot being dropped and the new operator starting without state.
• Don't change the key used to store the keyed state.
• Don't modify the input type of stateful operators like window or join. This implicitly changes the type of the internal state of the operator, causing a state incompatibility.
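For illustration, a minimal DataStream snippet that pins stable operator IDs might look like the following; the operator names and logic are hypothetical:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StableUidsExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("a", "b", "c")
            // Pin a stable, unique ID so Flink can match snapshot state
            // to this operator across upgrades and code changes.
            .map(String::toUpperCase).uid("normalize-map")
            .keyBy(value -> value)
            .reduce((v1, v2) -> v1 + v2).uid("concat-reduce")
            .print().uid("stdout-sink");

        env.execute("stable-uids-example");
    }
}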
Precautions and known issues with application upgrades

Kafka Commit on checkpointing fails repeatedly after a broker restart

There is a known open source Apache Flink issue with the Apache Kafka connector in Flink version 1.15, caused by a critical open source Kafka Client bug in Kafka Client 2.8.1. For more information, see Kafka Commit on checkpointing fails repeatedly after a broker restart and KafkaConsumer is unable to recover connection to group coordinator after commitOffsetAsync exception. To avoid this issue, we recommend that you use Apache Flink 1.18 or later in Amazon Managed Service for Apache Flink.

Known limitations of state compatibility

• If you are using the Table API, Apache Flink doesn't guarantee state compatibility between Flink versions. For more information, see Stateful Upgrades and Evolution in the Apache Flink documentation.
• Flink 1.6 states are not compatible with Flink 1.18. The API rejects your request if you try to upgrade from 1.6 to 1.18 and later with state. You can upgrade to 1.8, 1.11, 1.13, and 1.15, take a snapshot, and then upgrade to 1.18 and later. For more information, see Upgrading Applications and Flink Versions in the Apache Flink documentation.

Known issues with the Flink Kinesis Connector

• If you are using Flink 1.11 or earlier and using the amazon-kinesis-connector-flink connector for Enhanced-fan-out (EFO) support, you must take extra steps for a stateful upgrade to Flink 1.13 or later, because of the change in the package name of the connector. For more information, see amazon-kinesis-connector-flink. The amazon-kinesis-connector-flink connector for Flink 1.11 and earlier uses the package software.amazon.kinesis, whereas the Kinesis connector for Flink 1.13 and later uses org.apache.flink.streaming.connectors.kinesis (see the sketch after this list). Use this tool to support your migration: amazon-kinesis-connector-flink-state-migrator.
• If you are using Flink 1.13 or earlier with FlinkKinesisProducer and upgrading to Flink 1.15 or later, for a stateful upgrade you must continue to use FlinkKinesisProducer in Flink 1.15 or later, instead of the newer KinesisStreamsSink. However, if you already have a custom uid set on your sink, you should be able to switch to KinesisStreamsSink because FlinkKinesisProducer doesn't keep state. Flink will treat it as the same operator because a custom uid is set.
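To make the package change concrete, here is an illustrative consumer using the post-rename package; the legacy package location mentioned in the comment reflects our understanding of the amazon-kinesis-connector-flink fork and should be verified against that connector's own documentation, and the stream name is a placeholder:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
// Flink 1.13+ Kinesis connector package. In the legacy amazon-kinesis-connector-flink
// fork (Flink 1.11 and earlier), the consumer lived under the software.amazon.kinesis
// package tree instead, so imports like this one must be rewritten during the upgrade.
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;

public class KinesisSourceAfterRename {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties consumerConfig = new Properties();
        consumerConfig.setProperty(AWSConfigConstants.AWS_REGION, "us-east-1");

        env.addSource(new FlinkKinesisConsumer<>(
                "my-input-stream", new SimpleStringSchema(), consumerConfig))
            .uid("kinesis-source") // a stable uid helps state mapping across the upgrade
            .print();

        env.execute("kinesis-source-after-rename");
    }
}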
Flink applications written in Scala

• As of Flink 1.15, Apache Flink doesn't include Scala in the runtime. You must include the version of Scala you want to use and other Scala dependencies in your code JAR/zip when upgrading to Flink 1.15 or later. For more information, see Amazon Managed Service for Apache Flink for Apache Flink 1.15.2 release.
• If your application uses Scala and you are upgrading it from Flink 1.11 or earlier (Scala 2.11) to Flink 1.13 (Scala 2.12), make sure that your code uses Scala 2.12. Otherwise, your Flink 1.13 application may fail to find Scala 2.11 classes in the Flink 1.13 runtime.

Things to consider when downgrading a Flink application

• Downgrading Flink applications is possible, but limited to cases where the application was previously running with the older Flink version. For a stateful downgrade, Managed Service for Apache Flink requires a snapshot taken with a matching or earlier version.
• If you are updating your runtime from Flink 1.13 or later to Flink 1.11 or earlier, and your app uses the HashMap state backend, your application will continuously fail.

Implement application scaling in Managed Service for Apache Flink

You can configure the parallel execution of tasks and the allocation of resources for Amazon Managed Service for Apache Flink to implement scaling. For information about how Apache Flink schedules parallel instances of tasks, see Parallel Execution in the Apache Flink documentation.

Topics
• Configure application parallelism and ParallelismPerKPU
• Allocate Kinesis Processing Units
• Update your application's parallelism
• Use automatic scaling in Managed Service for Apache Flink
• maxParallelism considerations

Configure application parallelism and ParallelismPerKPU

You configure the parallel execution for your Managed Service for Apache Flink application tasks (such as reading from a source or executing an operator) using the following ParallelismConfiguration properties:

• Parallelism — Use this property to set the default Apache Flink application parallelism. All operators, sources, and sinks execute with this parallelism unless they are overridden in the application code (see the sketch following this list). The default is 1, and the default maximum is 256.
• ParallelismPerKPU — Use this property to set the number of parallel tasks that can be scheduled per Kinesis Processing Unit (KPU) of your application. The default is 1, and the maximum is 8. For applications that have blocking operations (for example, I/O), a higher value of ParallelismPerKPU leads to full utilization of KPU resources.

Note: The limit for Parallelism is equal to ParallelismPerKPU times the limit for KPUs (which has a default of 64). The KPUs limit can be increased by requesting a limit increase. For instructions on how to request a limit increase, see "To request a limit increase" in Service Quotas.
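As a minimal sketch of overriding the default for a single operator in application code (the pipeline itself is hypothetical):

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ParallelismOverrideExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(1, 2, 3, 4)
            // This operator inherits the application-level default parallelism.
            .map((Integer n) -> n * 2).returns(Types.INT)
            // This operator overrides the default for just this step.
            .map((Integer n) -> n + 1).returns(Types.INT).setParallelism(2)
            .print();

        env.execute("parallelism-override-example");
    }
}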
For information about setting task parallelism for a specific operator, see Setting the Parallelism: Operator in the Apache Flink documentation.

Allocate Kinesis Processing Units

Managed Service for Apache Flink provisions capacity as KPUs. A single KPU provides you with 1 vCPU and 4 GB of memory. For every KPU allocated, 50 GB of running application storage is also provided.

Managed Service for Apache Flink calculates the KPUs that are needed to run your application using the Parallelism and ParallelismPerKPU properties, as follows:

Allocated KPUs for the application = Parallelism / ParallelismPerKPU

Managed Service for Apache Flink quickly gives your applications resources in response to spikes in throughput or processing activity. It removes resources from your application gradually after the activity spike has passed. To disable the automatic allocation of resources, set the AutoScalingEnabled value to false, as described later in Update your application's parallelism.

The default limit for KPUs for your application is 64. For instructions on how to request an increase to this limit, see "To request a limit increase" in Service Quotas.

Note: An additional KPU is charged for orchestration purposes. For more information, see Managed Service for Apache Flink pricing.
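As a worked example of this formula: an application configured with Parallelism of 8 and ParallelismPerKPU of 2 is allocated 8 / 2 = 4 KPUs for processing; with the additional orchestration KPU noted above, you are billed for 5 KPUs in total.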
this limit, see "To request a limit increase" in Service Quotas. Allocate Kinesis Processing Units 72 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note An additional KPU is charged for orchestrations purposes. For more information, see Managed Service for Apache Flink pricing. Update your application's parallelism This section contains sample requests for API actions that set an application's parallelism. For more examples and instructions for how to use request blocks with API actions, see Managed Service for Apache Flink API example code. The following example request for the CreateApplication action sets parallelism when you are creating an application: { "ApplicationName": "string", "RuntimeEnvironment":"FLINK-1_18", "ServiceExecutionRole":"arn:aws:iam::123456789123:role/myrole", "ApplicationConfiguration": { "ApplicationCodeConfiguration":{ "CodeContent":{ "S3ContentLocation":{ "BucketARN":"arn:aws:s3:::amzn-s3-demo-bucket", "FileKey":"myflink.jar", "ObjectVersion":"AbCdEfGhIjKlMnOpQrStUvWxYz12345" } }, "CodeContentType":"ZIPFILE" }, "FlinkApplicationConfiguration": { "ParallelismConfiguration": { "AutoScalingEnabled": "true", "ConfigurationType": "CUSTOM", "Parallelism": 4, "ParallelismPerKPU": 4 } } } } Update your application's parallelism 73 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide The following example request for the UpdateApplication action sets parallelism for an existing application: { "ApplicationName": "MyApplication", "CurrentApplicationVersionId": 4, "ApplicationConfigurationUpdate": { "FlinkApplicationConfigurationUpdate": { "ParallelismConfigurationUpdate": { "AutoScalingEnabledUpdate": "true", "ConfigurationTypeUpdate": "CUSTOM", "ParallelismPerKPUUpdate": 4, "ParallelismUpdate": 4 } } } } The following example request for the UpdateApplication action disables parallelism for an existing application: { "ApplicationName": "MyApplication", "CurrentApplicationVersionId": 4, "ApplicationConfigurationUpdate": { "FlinkApplicationConfigurationUpdate": { "ParallelismConfigurationUpdate": { "AutoScalingEnabledUpdate": "false" } } } } Use automatic scaling in Managed Service for Apache Flink Managed Service for Apache Flink elastically scales your application’s parallelism to accommodate the data throughput of your source and your operator complexity for most scenarios. Automatic scaling is enabled by default. Managed Service for Apache Flink monitors the resource (CPU) usage of your application, and elastically scales your application's parallelism up or down accordingly: Use automatic scaling 74 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Your application scales up (increases parallelism) if CloudWatch metric maximum containerCPUUtilization is larger than 75 percent or above for 15 minutes. That means the ScaleUp action is initiated when there are 15 consecutive datapoints with 1 minute period equal to or over 75 percent. A ScaleUp action doubles the CurrentParallelism of your application. ParallelismPerKPU is not modified. As a consequence, the number of allocated KPUs also doubles. • Your application scales down (decreases parallelism) when your CPU usage remains below 10 percent for six hours. That means the ScaleDown action is initiated when there are 360 consecutive datapoints with 1 minute period less than 10 percent. A ScaleDown action halves (rounded up) the parallelism of the application. 
Note: The maximum of containerCPUUtilization over a 1-minute period can be referenced to find the correlation with a datapoint used for a scaling action, but it doesn't necessarily reflect the exact moment when the action is initiated.

Managed Service for Apache Flink will not reduce your application's CurrentParallelism value to less than your application's Parallelism setting.

When the Managed Service for Apache Flink service is scaling your application, it will be in the AUTOSCALING status. You can check your current application status using the DescribeApplication or ListApplications actions. While the service is scaling your application, the only valid API action you can use is StopApplication with the Force parameter set to true.

You can use the AutoScalingEnabled property (part of FlinkApplicationConfiguration) to enable or disable autoscaling behavior. Your AWS account is charged for the KPUs that Managed Service for Apache Flink provisions, which is a function of your application's parallelism and parallelismPerKPU settings. An activity spike therefore increases your Managed Service for Apache Flink costs. For information about pricing, see Amazon Managed Service for Apache Flink pricing.

Note the following about application scaling:

• Automatic scaling is enabled by default.
• Scaling doesn't apply to Studio notebooks. However, if you deploy a Studio notebook as an application with durable state, then scaling applies to the deployed application.
• Your application has a default limit of 64 KPUs. For more information, see Managed Service for Apache Flink and Studio notebook quota.
• When autoscaling updates application parallelism, the application experiences downtime. To avoid this downtime, do the following:
  • Disable automatic scaling.
  • Configure your application's parallelism and parallelismPerKPU with the UpdateApplication action. For more information about setting your application's parallelism settings, see the section called "Update your application's parallelism".
  • Periodically monitor your application's resource usage to verify that your application has the correct parallelism settings for its workload (see the sketch after this list). For information about monitoring allocation resource usage, see the section called "Metrics and dimensions in Managed Service for Apache Flink".
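One way to spot-check CPU usage from the CLI is to query the containerCPUUtilization metric directly; this is a minimal sketch that assumes the application-level Application dimension, and the application name, Region, and time window are placeholders to adjust:

aws --region us-east-1 cloudwatch get-metric-statistics \
 --namespace AWS/KinesisAnalytics \
 --metric-name containerCPUUtilization \
 --dimensions Name=Application,Value=MyApplication \
 --statistics Maximum \
 --period 60 \
 --start-time 2024-01-01T00:00:00Z \
 --end-time 2024-01-01T01:00:00Z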
Implement custom autoscaling

If you want finer-grained control over autoscaling, or want to use trigger metrics other than containerCPUUtilization, you can use this example:

• AutoScaling

This example illustrates how to scale your Managed Service for Apache Flink application using a different CloudWatch metric from the Apache Flink application, including metrics from Amazon MSK and Amazon Kinesis Data Streams used as sources or sinks. For additional information, see Enhanced monitoring and automatic scaling for Apache Flink.

Implement scheduled autoscaling

If your workload follows a predictable profile over time, you might prefer to scale your Apache Flink application preemptively. This scales your application at a scheduled time, as opposed to scaling reactively based on a metric. To set up scaling up and down at fixed hours of the day, you can use this example:

• ScheduledScaling

maxParallelism considerations

The maximum parallelism a Flink job can scale to is limited by the minimum maxParallelism across all operators of the job. For example, if you have a simple job with only a source and a sink, and the source has a maxParallelism of 16 and the sink has 8, the application can't scale beyond a parallelism of 8. To learn how the default maxParallelism of an operator is calculated and how to override the default, see Setting the Maximum Parallelism in the Apache Flink documentation. As a basic rule, be aware that if you don't define maxParallelism for any operator and you start your application with a parallelism less than or equal to 128, all operators will have a maxParallelism of 128.

Note: The job's maximum parallelism is the upper limit of parallelism for scaling your application while retaining state. If you modify the maxParallelism of an existing application, the application won't be able to restart from a previous snapshot taken with the old maxParallelism. You can only restart the application without a snapshot.

If you plan to scale your application to a parallelism greater than 128, you must explicitly set the maxParallelism in your application (see the sketch after the following list).

• Autoscaling logic will prevent scaling a Flink job to a parallelism that would exceed the maximum parallelism of the job.
• If you use custom autoscaling or scheduled scaling, configure them so that they don't exceed the maximum parallelism of the job.
• If you manually scale your application beyond the maximum parallelism, the application fails to start.
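A minimal sketch of setting the job-wide maximum parallelism explicitly (the pipeline and the chosen value are illustrative):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MaxParallelismExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Set the job-wide maximum parallelism explicitly so the application
        // can later scale beyond the 128 default while retaining state.
        env.setMaxParallelism(256);

        env.fromElements("a", "b")
            .map(String::toUpperCase)
            .print();

        env.execute("max-parallelism-example");
    }
}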
Add tags to Managed Service for Apache Flink applications

This section describes how to add key-value metadata tags to Managed Service for Apache Flink applications. These tags can be used for the following purposes:

• Determining billing for individual Managed Service for Apache Flink applications. For more information, see Using Cost Allocation Tags in the Billing and Cost Management Guide.
• Controlling access to application resources based on tags. For more information, see Controlling Access Using Tags in the AWS Identity and Access Management User Guide.
• User-defined purposes. You can define application functionality based on the presence of user tags.

Note the following information about tagging:

• The maximum number of application tags includes system tags. The maximum number of user-defined application tags is 50.
• If an action includes a tag list that has duplicate Key values, the service throws an InvalidArgumentException.

This topic contains the following sections:
• Add tags when an application is created
• Add or update tags for an existing application
• List tags for an application
• Remove tags from an application

Add tags when an application is created

You add tags when creating an application using the tags parameter of the CreateApplication action. The following example shows the Tags node for a CreateApplication request:

"Tags": [
   {
      "Key": "Key1",
      "Value": "Value1"
   },
   {
      "Key": "Key2",
      "Value": "Value2"
   }
]
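Equivalently from the AWS CLI, tags can be passed at creation time with the --tags option. This is a minimal sketch; the application name, role ARN, tag key, and tag value are placeholders, and any application code configuration your application needs is omitted:

aws kinesisanalyticsv2 create-application \
 --application-name MyApplication \
 --runtime-environment FLINK-1_18 \
 --service-execution-role arn:aws:iam::123456789123:role/myrole \
 --tags Key=Team,Value=Analytics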
Add or update tags for an existing application

You add tags to an application using the TagResource action. You cannot add tags to an application using the UpdateApplication action. To update an existing tag, add a tag with the same key as the existing tag.

The following example request for the TagResource action adds new tags or updates existing tags:

{
   "ResourceARN": "string",
   "Tags": [
      {
         "Key": "NewTagKey",
         "Value": "NewTagValue"
      },
      {
         "Key": "ExistingKeyOfTagToUpdate",
         "Value": "NewValueForExistingTag"
      }
   ]
}

List tags for an application

To list existing tags, you use the ListTagsForResource action. The following example request for the ListTagsForResource action lists tags for an application:

{
   "ResourceARN": "arn:aws:kinesisanalytics:us-west-2:012345678901:application/MyApplication"
}

Remove tags from an application

To remove tags from an application, you use the UntagResource action. The following example request for the UntagResource action removes tags from an application:

{
   "ResourceARN": "arn:aws:kinesisanalytics:us-west-2:012345678901:application/MyApplication",
   "TagKeys": [ "KeyOfFirstTagToRemove", "KeyOfSecondTagToRemove" ]
}

Use CloudFormation with Managed Service for Apache Flink

The following exercise shows how to start a Flink application created with AWS CloudFormation using a Lambda function in the same stack.

Before you begin

Before you begin this exercise, follow the steps on creating a Flink application using AWS CloudFormation at AWS::KinesisAnalytics::Application.

Write a Lambda function

To start a Flink application after creation or update, we use the kinesisanalyticsv2 start-application API. The call is triggered by an AWS CloudFormation event after Flink application creation. We'll discuss how to set up the stack to trigger the Lambda function later in this exercise, but first we focus on the Lambda function declaration and its code. We use the Python 3.8 runtime in this example.

StartApplicationLambda:
  Type: AWS::Lambda::Function
  DependsOn: StartApplicationLambdaRole
  Properties:
    Description: Starts an application when invoked.
    Runtime: python3.8
    Role: !GetAtt StartApplicationLambdaRole.Arn
    Handler: index.lambda_handler
    Timeout: 30
    Code:
      ZipFile: |
        import logging
        import cfnresponse
        import boto3

        logger = logging.getLogger()
        logger.setLevel(logging.INFO)

        def lambda_handler(event, context):
          logger.info('Incoming CFN event {}'.format(event))

          try:
            application_name = event['ResourceProperties']['ApplicationName']

            # filter out events other than Create or Update,
            # you can also omit Update in order to start an application on Create only.
            if event['RequestType'] not in ["Create", "Update"]:
              logger.info('No-op for Application {} because CFN RequestType {} is filtered'.format(application_name, event['RequestType']))
              cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
              return

            # use kinesisanalyticsv2 API to start an application.
            client_kda = boto3.client('kinesisanalyticsv2', region_name=event['ResourceProperties']['Region'])

            # get application status.
            describe_response = client_kda.describe_application(ApplicationName=application_name)
            application_status = describe_response['ApplicationDetail']['ApplicationStatus']

            # an application can be started from 'READY' status only.
            if application_status != 'READY':
              logger.info('No-op for Application {} because ApplicationStatus {} is filtered'.format(application_name, application_status))
              cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
              return

            # create RunConfiguration.
            run_configuration = {
              'ApplicationRestoreConfiguration': {
                'ApplicationRestoreType': 'RESTORE_FROM_LATEST_SNAPSHOT',
              }
            }

            logger.info('RunConfiguration for Application {}: {}'.format(application_name, run_configuration))

            # this call doesn't wait for an application to transfer to 'RUNNING' state.
            client_kda.start_application(ApplicationName=application_name, RunConfiguration=run_configuration)

            logger.info('Started Application: {}'.format(application_name))
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
          except Exception as err:
            logger.error(err)
            cfnresponse.send(event, context, cfnresponse.FAILED, {"Data": str(err)})
In the preceding code, Lambda processes incoming AWS CloudFormation events, filters out everything besides Create and Update, gets the application state, and starts the application if the state is READY. To get the application state, you must create the Lambda role, as shown following.

Create a Lambda role

You create a role for Lambda to successfully "talk" to the application and write logs. This role uses default managed policies, but you might want to narrow it down to custom policies.

StartApplicationLambdaRole:
  Type: AWS::IAM::Role
  DependsOn: TestFlinkApplication
  Properties:
    Description: A role for lambda to use while interacting with an application.
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action:
            - sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AmazonKinesisAnalyticsFullAccess
      - arn:aws:iam::aws:policy/CloudWatchLogsFullAccess
    Path: /

Note that the Lambda resources are created after creation of the Flink application in the same stack because they depend on it.

Invoke the Lambda function

Now all that is left is to invoke the Lambda function. You do this by using a custom resource.

StartApplicationLambdaInvoke:
  Description: Invokes StartApplicationLambda to start an application.
  Type: AWS::CloudFormation::CustomResource
  DependsOn: StartApplicationLambda
  Version: "1.0"
  Properties:
    ServiceToken: !GetAtt StartApplicationLambda.Arn
    Region: !Ref AWS::Region
    ApplicationName: !Ref TestFlinkApplication

This is all you need to start your Flink application using Lambda. You are now ready to create your own stack, or use the full example below to see how all those steps work in practice.
Review an extended example

The following example is a slightly extended version of the previous steps, with additional RunConfiguration adjustment done via template parameters. This is a working stack for you to try. Be sure to read the accompanying notes:

stack.yaml

Description: 'kinesisanalyticsv2 CloudFormation Test Application'
Parameters:
  ApplicationRestoreType:
    Description: ApplicationRestoreConfiguration option, can be SKIP_RESTORE_FROM_SNAPSHOT, RESTORE_FROM_LATEST_SNAPSHOT or RESTORE_FROM_CUSTOM_SNAPSHOT.
    Type: String
    Default: SKIP_RESTORE_FROM_SNAPSHOT
    AllowedValues: [ SKIP_RESTORE_FROM_SNAPSHOT, RESTORE_FROM_LATEST_SNAPSHOT, RESTORE_FROM_CUSTOM_SNAPSHOT ]
  SnapshotName:
    Description: ApplicationRestoreConfiguration option, name of a snapshot to restore to, used with RESTORE_FROM_CUSTOM_SNAPSHOT ApplicationRestoreType.
    Type: String
    Default: ''
  AllowNonRestoredState:
    Description: FlinkRunConfiguration option, can be true or false.
    Type: String
    Default: true
    AllowedValues: [ true, false ]
  CodeContentBucketArn:
    Description: ARN of a bucket with application code.
    Type: String
  CodeContentFileKey:
    Description: A jar filename with an application code inside a bucket.
    Type: String
Conditions:
  IsSnapshotNameEmpty: !Equals [ !Ref SnapshotName, '' ]
Resources:
  TestServiceExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - kinesisanalytics.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonKinesisFullAccess
        - arn:aws:iam::aws:policy/AmazonS3FullAccess
      Path: /
  InputKinesisStream:
    Type: AWS::Kinesis::Stream
    Properties:
      ShardCount: 1
  OutputKinesisStream:
    Type: AWS::Kinesis::Stream
    Properties:
      ShardCount: 1
  TestFlinkApplication:
    Type: AWS::KinesisAnalyticsV2::Application
    Properties:
      ApplicationName: 'CFNTestFlinkApplication'
      ApplicationDescription: 'Test Flink Application'
      RuntimeEnvironment: 'FLINK-1_18'
      ServiceExecutionRole: !GetAtt TestServiceExecutionRole.Arn
      ApplicationConfiguration:
        EnvironmentProperties:
          PropertyGroups:
            - PropertyGroupId: 'KinesisStreams'
              PropertyMap:
                INPUT_STREAM_NAME: !Ref InputKinesisStream
                OUTPUT_STREAM_NAME: !Ref OutputKinesisStream
                AWS_REGION: !Ref AWS::Region
        FlinkApplicationConfiguration:
          CheckpointConfiguration:
            ConfigurationType: 'CUSTOM'
            CheckpointingEnabled: True
            CheckpointInterval: 1500
            MinPauseBetweenCheckpoints: 500
          MonitoringConfiguration:
            ConfigurationType: 'CUSTOM'
            MetricsLevel: 'APPLICATION'
            LogLevel: 'INFO'
          ParallelismConfiguration:
            ConfigurationType: 'CUSTOM'
            Parallelism: 1
            ParallelismPerKPU: 1
            AutoScalingEnabled: True
        ApplicationSnapshotConfiguration:
          SnapshotsEnabled: True
        ApplicationCodeConfiguration:
          CodeContent:
            S3ContentLocation:
              BucketARN: !Ref CodeContentBucketArn
              FileKey: !Ref CodeContentFileKey
          CodeContentType: 'ZIPFILE'
  StartApplicationLambdaRole:
    Type: AWS::IAM::Role
    DependsOn: TestFlinkApplication
    Properties:
      Description: A role for lambda to use while interacting with an application.
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonKinesisAnalyticsFullAccess
        - arn:aws:iam::aws:policy/CloudWatchLogsFullAccess
      Path: /
  StartApplicationLambda:
    Type: AWS::Lambda::Function
    DependsOn: StartApplicationLambdaRole
    Properties:
      Description: Starts an application when invoked.
      Runtime: python3.8
      Role: !GetAtt StartApplicationLambdaRole.Arn
      Handler: index.lambda_handler
      Timeout: 30
      Code:
        ZipFile: |
          import logging
          import cfnresponse
          import boto3

          logger = logging.getLogger()
          logger.setLevel(logging.INFO)

          def lambda_handler(event, context):
            logger.info('Incoming CFN event {}'.format(event))

            try:
              application_name = event['ResourceProperties']['ApplicationName']

              # filter out events other than Create or Update,
              # you can also omit Update in order to start an application on Create only.
              if event['RequestType'] not in ["Create", "Update"]:
                logger.info('No-op for Application {} because CFN RequestType {} is filtered'.format(application_name, event['RequestType']))
                cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
                return

              # use kinesisanalyticsv2 API to start an application.
              client_kda = boto3.client('kinesisanalyticsv2', region_name=event['ResourceProperties']['Region'])

              # get application status.
              describe_response = client_kda.describe_application(ApplicationName=application_name)
              application_status = describe_response['ApplicationDetail']['ApplicationStatus']

              # an application can be started from 'READY' status only.
              if application_status != 'READY':
                logger.info('No-op for Application {} because ApplicationStatus {} is filtered'.format(application_name, application_status))
                cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
                return

              # create RunConfiguration from passed parameters.
              run_configuration = {
                'FlinkRunConfiguration': {
                  'AllowNonRestoredState': event['ResourceProperties']['AllowNonRestoredState'] == 'true'
                },
                'ApplicationRestoreConfiguration': {
                  'ApplicationRestoreType': event['ResourceProperties']['ApplicationRestoreType'],
                }
              }

              # add SnapshotName to RunConfiguration if specified.
              if event['ResourceProperties']['SnapshotName'] != '':
                run_configuration['ApplicationRestoreConfiguration']['SnapshotName'] = event['ResourceProperties']['SnapshotName']

              logger.info('RunConfiguration for Application {}: {}'.format(application_name, run_configuration))

              # this call doesn't wait for an application to transfer to 'RUNNING' state.
              client_kda.start_application(ApplicationName=application_name, RunConfiguration=run_configuration)

              logger.info('Started Application: {}'.format(application_name))
              cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
            except Exception as err:
              logger.error(err)
              cfnresponse.send(event, context, cfnresponse.FAILED, {"Data": str(err)})
  StartApplicationLambdaInvoke:
    Description: Invokes StartApplicationLambda to start an application.
    Type: AWS::CloudFormation::CustomResource
    DependsOn: StartApplicationLambda
    Version: "1.0"
    Properties:
      ServiceToken: !GetAtt StartApplicationLambda.Arn
      Region: !Ref AWS::Region
      ApplicationName: !Ref TestFlinkApplication
      ApplicationRestoreType: !Ref ApplicationRestoreType
      SnapshotName: !Ref SnapshotName
      AllowNonRestoredState: !Ref AllowNonRestoredState

Again, you might want to adjust the roles for Lambda as well as for the application itself. Before creating the stack above, don't forget to specify your parameters.

parameters.json

[
  {
    "ParameterKey": "CodeContentBucketArn",
    "ParameterValue": "YOUR_BUCKET_ARN"
  },
  {
    "ParameterKey": "CodeContentFileKey",
    "ParameterValue": "YOUR_JAR"
  },
  {
    "ParameterKey": "ApplicationRestoreType",
    "ParameterValue": "SKIP_RESTORE_FROM_SNAPSHOT"
  },
  {
    "ParameterKey": "AllowNonRestoredState",
    "ParameterValue": "true"
  }
]

Replace YOUR_BUCKET_ARN and YOUR_JAR with your specific values. You can follow this guide to create an Amazon S3 bucket and an application jar.
Now create the stack (replace YOUR_REGION with a Region of your choice, for example, us-east-1):

aws cloudformation create-stack --region YOUR_REGION --template-body "file://stack.yaml" \
 --parameters "file://parameters.json" --stack-name "TestManagedFlinkStack" \
 --capabilities CAPABILITY_NAMED_IAM

You can now navigate to https://console.aws.amazon.com/cloudformation and view the progress. Once the stack is created, you should see your Flink application in the Starting state. It may take a few minutes until it starts Running.
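If you prefer to watch the stack from the CLI instead of the console, one way is to poll the stack status; this is a minimal sketch using the stack name from the command above:

aws cloudformation describe-stacks --region YOUR_REGION \
 --stack-name "TestManagedFlinkStack" \
 --query 'Stacks[0].StackStatus' --output text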
For more information, see the following:

• Four ways to retrieve any AWS service property using AWS CloudFormation (Part 1 of 3).
• Walkthrough: Looking up Amazon Machine Image IDs.

Use the Apache Flink Dashboard with Managed Service for Apache Flink

You can use your application's Apache Flink Dashboard to monitor your Managed Service for Apache Flink application's health. Your application's dashboard shows the following information:

• Resources in use, including Task Managers and Task Slots.
• Information about Jobs, including those that are running, completed, canceled, and failed.

For information about Apache Flink Task Managers, Task Slots, and Jobs, see Apache Flink Architecture on the Apache Flink website.

Note the following about using the Apache Flink Dashboard with Managed Service for Apache Flink applications:

• The Apache Flink Dashboard for Managed Service for Apache Flink applications is read-only. You can't make changes to your Managed Service for Apache Flink application using the Apache Flink Dashboard.
• The Apache Flink Dashboard is not compatible with Microsoft Internet Explorer.

Access your application's Apache Flink Dashboard

You can access your application's Apache Flink Dashboard either through the Managed Service for Apache Flink console, or by requesting a secure URL endpoint using the CLI.

Access your application's Apache Flink Dashboard using the Managed Service for Apache Flink console

To access your application's Apache Flink Dashboard from the console, choose Apache Flink Dashboard on your application's page.

Note: When you open the dashboard from the Managed Service for Apache Flink console, the URL that the console generates is valid for 12 hours.

Access your application's Apache Flink Dashboard using the Managed Service for Apache Flink CLI

You can use the Managed Service for Apache Flink CLI to generate a URL to access your application dashboard. The URL that you generate is valid for a specified amount of time.

Note: If you don't access the generated URL within three minutes, it will no longer be valid.

You generate your dashboard URL using the CreateApplicationPresignedUrl action. You specify the following parameters for the action:

• The application name
• The time in seconds that the URL will be valid
• FLINK_DASHBOARD_URL as the URL type
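Putting those parameters together, a minimal CLI sketch looks like the following; the application name and duration are placeholders:

aws kinesisanalyticsv2 create-application-presigned-url \
 --application-name MyApplication \
 --url-type FLINK_DASHBOARD_URL \
 --session-expiration-duration-in-seconds 3600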
Release versions

This topic contains information about the features supported and the component versions recommended for each release of Managed Service for Apache Flink.

Note: If you are using a version of Apache Flink that is deprecating, we recommend that you upgrade your application to the most recent supported Flink version using the Use in-place version upgrades for Apache Flink feature in Managed Service for Apache Flink.

Apache Flink version, status in Amazon Managed Service for Apache Flink, status in the Apache Flink community, and link:

• 1.20.0: Supported in Amazon Managed Service for Apache Flink; supported by the community. See Amazon Managed Service for Apache Flink 1.20.
• 1.19.1: Supported in Amazon Managed Service for Apache Flink; supported by the community. See Amazon Managed Service for Apache Flink 1.19.
• 1.18.1: Supported in Amazon Managed Service for Apache Flink; supported by the community. See Amazon Managed Service for Apache Flink 1.18.
• 1.15.2: Supported in Amazon Managed Service for Apache Flink; unsupported by the community. See Amazon Managed Service for Apache Flink 1.15.
• 1.13.1: Supported in Amazon Managed Service for Apache Flink; unsupported by the community. See Getting started: Flink 1.13.2.
• 1.11.1: Deprecating in Amazon Managed Service for Apache Flink; unsupported by the community. See Earlier version information for Managed Service for Apache Flink (this version will not be supported from February 2025).
• 1.8.2: Deprecating in Amazon Managed Service for Apache Flink; unsupported by the community. See Earlier version information for Managed Service for Apache Flink (this version will not be supported from February 2025).
• 1.6.2: Deprecating in Amazon Managed Service for Apache Flink; unsupported by the community. See Earlier version information for Managed Service for Apache Flink (this version will not be supported from February 2025).

Topics
• Amazon Managed Service for Apache Flink 1.20
• Amazon Managed Service for Apache Flink 1.19
• Amazon Managed Service for Apache Flink 1.18
• Amazon Managed Service for Apache Flink 1.15
• Earlier version information for Managed Service for Apache Flink

Amazon Managed Service for Apache Flink 1.20

Managed Service for Apache Flink now supports Apache Flink version 1.20.0. This section introduces you to the key new features and changes introduced with Managed Service for Apache Flink support of Apache Flink 1.20.0. Apache Flink 1.20 is expected to be the last 1.x release and a Flink long-term support (LTS) version. For more information, see FLIP-458: Long-Term Support for the Final Release of Apache Flink 1.x Line.

Note: If you are using an earlier supported version of Apache Flink and want to upgrade your existing applications to Apache Flink 1.20.0, you can do so using in-place Apache Flink version upgrades. For more information, see Use in-place version upgrades for Apache Flink. With in-place version upgrades, you retain application traceability against a single ARN across Apache Flink versions, including snapshots, logs, metrics, tags, Flink configurations, and more.

Supported features

Apache Flink 1.20.0 introduces improvements in the SQL APIs, in the DataStream APIs, and in the Flink dashboard:

• Add DISTRIBUTED BY clause: Many SQL engines expose the concepts of Partitioning, Bucketing, or Clustering. Flink 1.20 introduces the concept of Bucketing to Flink. See FLIP-376: Add DISTRIBUTED BY clause.
• DataStream API: Support Full Partition Processing: Flink 1.20 introduces built-in support for aggregations on non-keyed streams through the FullPartitionWindow API. See FLIP-380: Support Full Partition Processing on Non-keyed DataStream.
• Show data skew score on Flink Dashboard: The Flink 1.20 dashboard now shows data skew information. Each operator on the Flink job graph UI shows an additional data skew score. See FLIP-418: Show data skew score on Flink Dashboard.

For the Apache Flink 1.20.0 release documentation, see Apache Flink Documentation v1.20.0. For Flink 1.20 release notes, see Release notes - Flink 1.20.

Components

Flink 1.20 components:
• Java: 11 (recommended)
• Python: 3.11
• Kinesis Data Analytics Flink Runtime (aws-kinesisanalytics-runtime): 1.2.0
• Connectors: For information about available connectors, see Apache Flink connectors.
• Apache Beam (Beam applications only): There is no compatible Apache Flink Runner for Flink 1.20. For more information, see Flink Version Compatibility.

Known issues

Apache Beam

There is presently no compatible Apache Flink Runner for Flink 1.20 in Apache Beam. For more information, see Flink Version Compatibility.

Amazon Managed Service for Apache Flink Studio

Amazon Managed Service for Apache Flink Studio uses Apache Zeppelin notebooks to provide a single-interface development experience for developing, debugging code, and running Apache Flink stream processing applications. An upgrade to Zeppelin's Flink Interpreter is required to enable support of Flink 1.20. This work is scheduled with the Zeppelin community. We will update these notes when that work is complete. You can continue to use Flink 1.15 with Amazon Managed Service for Apache Flink Studio. For more information, see Creating a Studio notebook.

Backported bug fixes

Amazon Managed Service for Apache Flink backports fixes from the Flink community for critical issues. Following is a list of bug fixes that we have backported:

• FLINK-35886: This fix addresses an issue causing incorrect accounting of watermark idleness timeouts when a subtask is backpressured/blocked.

Amazon Managed Service for Apache Flink 1.19

Managed Service for Apache Flink now supports Apache Flink version 1.19.1. This section introduces you to the key new features and changes introduced with Managed Service for Apache Flink support of Apache Flink 1.19.1.

Note: If you are using an earlier supported version of Apache Flink and want to upgrade your existing applications to Apache Flink 1.19.1, you can do so using in-place Apache Flink version upgrades. For more information, see Use in-place version upgrades for Apache Flink.
With in-place version upgrades, you retain application traceability against a single ARN across Apache Flink versions, including snapshots, logs, metrics, tags, Flink configurations, and more.

Supported features

Apache Flink 1.19.1 introduces improvements in the SQL API, such as named parameters, custom source parallelism, and different state TTLs for various Flink operators:

• SQL API: Support Configuring Different State TTLs using SQL Hint: Users can now configure state TTL on stream regular joins and group aggregates. See FLIP-373: Configuring Different State TTLs using SQL Hint.
• SQL API: Support named parameters for functions and call procedures: Users can now use named parameters in functions, rather than relying on the order of parameters. See FLIP-378: Support named parameters for functions and call procedures.
• SQL API: Setting parallelism for SQL sources: Users can now specify parallelism for SQL sources. See FLIP-367: Support Setting Parallelism for Table/SQL Sources.
• SQL API: Support Session Window TVF: Users can now use session window Table-Valued Functions. See FLINK-24024: Support session Window TVF.
• SQL API: Window TVF Aggregation Supports Changelog Inputs: Users can now perform window aggregation on changelog inputs. See FLINK-20281: Window aggregation supports changelog stream input.
• Support Python 3.11: Flink now supports Python 3.11, which is 10-60% faster compared to Python 3.10. For more information, see What's New in Python 3.11. See FLINK-33030: Add python 3.11 support.
• Provide metrics for TwoPhaseCommitting sink: Users can view statistics around the status of committers in two-phase committing sinks. See FLIP-371: Provide initialization context for Committer creation in TwoPhaseCommittingSink.
• Trace Reporters for job restart and checkpointing: Users can now monitor traces around checkpoint duration and recovery trends. In Amazon Managed Service for Apache Flink, we enable Slf4j trace reporters by default, so users can monitor checkpoint and job traces through application CloudWatch Logs. See FLIP-384: Introduce TraceReporter and use it to create checkpointing and recovery traces.
Note: You can opt into the following features by submitting a support case:

• Support using larger checkpointing interval when source is processing backlog: This is an opt-in feature, because users must tune the configuration for their specific job requirements. See FLIP-309: Support using larger checkpointing interval when source is processing backlog.
• Redirect System.out and System.err to Java logs: This is an opt-in feature. On Amazon Managed Service for Apache Flink, the default behavior is to ignore output from System.out and System.err, because best practice in production is to use the native Java logger. See FLIP-390: Support System out and err to be redirected to LOG or discarded.

For the Apache Flink 1.19.1 release documentation, see Apache Flink Documentation v1.19.1.

Changes in Amazon Managed Service for Apache Flink 1.19.1

Logging Trace Reporter enabled by default

Apache Flink 1.19.1 introduced checkpoint and recovery traces, enabling users to better debug checkpoint and job recovery issues. In Amazon Managed Service for Apache Flink, these traces are logged into the CloudWatch log stream, allowing users to break down the time spent on job initialization and to record the historical size of checkpoints.

Default restart strategy is now exponential-delay

In Apache Flink 1.19.1, there are significant improvements to the exponential-delay restart strategy. In Amazon Managed Service for Apache Flink, from Flink 1.19.1 onwards, Flink jobs use the exponential-delay restart strategy by default. This means that user jobs recover more quickly from transient errors, but will not overload external systems if job restarts persist.

Backported bug fixes

Amazon Managed Service for Apache Flink backports fixes from the Flink community for critical issues. This means that the runtime differs from the Apache Flink 1.19.1 release. Following is a list of bug fixes that we have backported:

• FLINK-35531: This fix addresses the performance regression introduced in 1.17.0 that causes slower writes to HDFS.
• FLINK-35157: This fix addresses the issue of stuck Flink jobs when sources with watermark alignment encounter finished subtasks.
• FLINK-34252: This fix addresses the issue in watermark generation that results in an erroneous IDLE watermark state.
• FLINK-34252: This fix addresses the performance regression during watermark generation by reducing system calls.
• FLINK-33936: This fix addresses the issue with duplicate records during mini-batch aggregation on the Table API.
• FLINK-35498: This fix addresses the issue with argument name conflicts when defining named parameters in Table API UDFs.
• FLINK-33192: This fix addresses the issue of a state memory leak in window operators due to improper timer cleanup.
• FLINK-35069: This fix addresses the issue when a Flink job gets stuck triggering a timer at the end of a window.
• FLINK-35832: This fix addresses the issue when IFNULL returns incorrect results.
• FLINK-35886: This fix addresses the issue when backpressured tasks are considered as idle.

Components

Flink 1.19 components:
• Java: 11 (recommended)
• Python: 3.11
• Kinesis Data Analytics Flink Runtime (aws-kinesisanalytics-runtime): 1.2.0
• Connectors: For information about available connectors, see Apache Flink connectors.
• Apache Beam (Beam applications only): From version 2.61.0. For more information, see Flink Version Compatibility.

Known issues

Amazon Managed Service for Apache Flink Studio

Studio uses Apache Zeppelin notebooks to provide a single-interface development experience for developing, debugging code, and running Apache Flink stream processing applications. An upgrade to Zeppelin's Flink Interpreter is required to enable support of Flink 1.19. This work is scheduled with the Zeppelin community, and we will update these notes when it is complete. You can continue to use Flink 1.15 with Amazon Managed Service for Apache Flink Studio. For more information, see Creating a Studio notebook.

Amazon Managed Service for Apache Flink 1.18

Managed Service for Apache Flink now supports Apache Flink version 1.18.1. Learn about the key new features and changes introduced with Managed Service for Apache Flink support of Apache Flink 1.18.1.

Note: If you are using an earlier supported version of Apache Flink and want to upgrade your existing applications to Apache Flink 1.18.1, you can do so using in-place Apache Flink version upgrades.
With in-place version upgrades, you retain application traceability against a single ARN across Apache Flink versions, including snapshots, logs, metrics, tags, Flink configurations, and more. You can use this feature in the RUNNING and READY states. For more information, see Use in-place version upgrades for Apache Flink.

Supported features with Apache Flink documentation references
• Opensearch connector: This connector includes a sink that provides at-least-once guarantees. Reference: github: Opensearch Connector.
• Amazon DynamoDB connector: This connector includes a sink that provides at-least-once guarantees. Reference: Amazon DynamoDB Sink.
• MongoDB connector: This connector includes a source and sink that provide at-least-once guarantees. Reference: MongoDB Connector.
• Decouple Hive with Flink planner: You can use the Hive dialect directly without the extra JAR swapping. Reference: FLINK-26603: Decouple Hive with Flink planner.
• Disable WAL in RocksDBWriteBatchWrapper by default: This provides faster recovery times. Reference: FLINK-32326: Disable WAL in RocksDBWriteBatchWrapper by default.
• Improve the watermark aggregation performance when enabling watermark alignment: Improves the watermark aggregation performance when enabling watermark alignment, and adds the related benchmark. Reference: FLINK-32524: Watermark aggregation performance.
• Make watermark alignment ready for production use: Removes the risk of large jobs overloading the JobManager. Reference: FLINK-32548: Make watermark alignment ready for production use.
• Configurable RateLimitingStrategy for Async Sink: RateLimitingStrategy lets you configure the decision of what to scale, when to scale, and how much to scale. Reference: FLIP-242: Introduce configurable RateLimitingStrategy for Async Sink.
• Bulk fetch table and column statistics: Improved query performance. Reference: FLIP-247: Bulk fetch of table and column statistics for given partitions.

For the Apache Flink 1.18.1 release documentation, see Apache Flink 1.18.1 Release Announcement.
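As a sketch, an in-place upgrade to Flink 1.18 can be started with a single UpdateApplication call. The application name and version ID below are placeholders; confirm the current version with DescribeApplication first:

# Check the current application version (CurrentApplicationVersionId must match).
aws kinesisanalyticsv2 describe-application --application-name MyApplication

# Upgrade the runtime in place; snapshots, logs, metrics, and tags stay with the same ARN.
aws kinesisanalyticsv2 update-application \
    --application-name MyApplication \
    --current-application-version-id 1 \
    --runtime-environment-update FLINK-1_18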
Changes in Amazon Managed Service for Apache Flink with Apache Flink 1.18

Akka replaced with Pekko
Apache Flink replaced Akka with Pekko in Apache Flink 1.18. This change is fully supported in Managed Service for Apache Flink from Apache Flink 1.18.1 and later. You don't need to modify your applications as a result of this change. For more information, see FLINK-32468: Replace Akka by Pekko.

Support PyFlink Runtime execution in Thread Mode
This Apache Flink change introduces a new execution mode for the PyFlink Runtime framework, Thread Mode. Thread Mode can now execute Python user-defined functions in the same thread instead of in a separate process.

Backported bug fixes
Amazon Managed Service for Apache Flink backports fixes from the Flink community for critical issues. This means that the runtime differs from the Apache Flink 1.18.1 release. The following is a list of bug fixes that we have backported:
• FLINK-33863: This fix addresses the issue when a state restore fails for compressed snapshots.
• FLINK-34063: This fix addresses the issue when source operators lose splits when snapshot compression is enabled. Apache Flink offers optional compression (default: off) for all checkpoints and savepoints. Apache Flink identified a bug in Flink 1.18.1 where the operator state couldn't be properly restored when snapshot compression was enabled. This could result in either data loss or inability to restore from checkpoint.
• FLINK-35069: This fix addresses the issue when a Flink job gets stuck triggering a timer at the end of a window.
• FLINK-35097: This fix addresses the issue of duplicate records in a Table API Filesystem connector with the raw format.
• FLINK-34379: This fix addresses the issue of an OutOfMemoryError when enabling dynamic table filtering.
• FLINK-28693: This fix addresses the issue of the Table API being unable to generate a graph if the watermark has a columnBy expression.
• FLINK-35217: This fix addresses the issue of a corrupted checkpoint during a specific Flink job failure mode.

Components
• Java: 11 (recommended)
• Scala: Since version 1.15, Flink is Scala-agnostic. For reference, Managed Service for Apache Flink 1.18 has been verified against Scala 3.3 (LTS).
• Managed Service for Apache Flink Flink Runtime (aws-kinesisanalytics-runtime): 1.2.0
• AWS Kinesis Connector (flink-connector-kinesis) [Source]: 4.2.0-1.18
• AWS Kinesis Connector (flink-connector-kinesis) [Sink]: 4.2.0-1.18
• Apache Beam (Beam applications only): From version 2.57.0. For more information, see Flink Version Compatibility.

Known issues

Amazon Managed Service for Apache Flink Studio
Studio uses Apache Zeppelin notebooks to provide a single-interface development experience for developing, debugging code, and running Apache Flink stream processing applications. An upgrade to Zeppelin's Flink Interpreter is required to enable support of Flink 1.18. This work is scheduled with the Zeppelin community, and we will update these notes when it is complete. You can continue to use Flink 1.15 with Amazon Managed Service for Apache Flink Studio. For more information, see Creating a Studio notebook.

Incorrect watermark idleness when subtask is backpressured
There is a known issue in watermark generation when a subtask is backpressured, which has been fixed in Flink 1.19 and later. It can show up as a spike in the number of late records when a Flink job graph is backpressured. We recommend that you upgrade to the latest Flink version to pull in this fix. For more information, see Incorrect watermark idleness timeout accounting when subtask is backpressured/blocked.
Amazon Managed Service for Apache Flink 1.15
Managed Service for Apache Flink supports the following new features in Apache Flink 1.15.2:
• Async Sink: An AWS-contributed framework for building async destinations that allows developers to build custom AWS connectors with less than half the previous effort. For more information, see The Generic Asynchronous Base Sink. Reference: FLIP-171: Async Sink.
• Kinesis Data Firehose Sink: AWS has contributed a new Amazon Kinesis Data Firehose Sink using the Async framework. Reference: Amazon Kinesis Data Firehose Sink.
• Stop with Savepoint: Stop with Savepoint ensures a clean stop operation, most importantly supporting exactly-once semantics for customers that rely on them. Reference: FLIP-34: Terminate/Suspend Job with Savepoint.
• Scala Decoupling: Users can now leverage the Java API from any Scala version, including Scala 3. Customers need to bundle the Scala standard library of their choice in their Scala applications. Reference: FLIP-28: Long-term goal of making flink-table Scala-free.
• Scala: See Scala Decoupling above. Reference: FLIP-28: Long-term goal of making flink-table Scala-free.
• Unified Connector Metrics: Flink has defined standard metrics for jobs, tasks, and operators. Managed Service for Apache Flink continues to support sink and source metrics and, in 1.15, introduces numRestarts in parallel with fullRestarts for Availability Metrics. References: FLIP-33: Standardize Connector Metrics and FLIP-179: Expose Standardized Operator Metrics.
• Checkpointing finished tasks: This feature is enabled by default in Flink 1.15 and makes it possible to continue performing checkpoints even if parts of the job graph have finished processing all data, which might happen if the graph contains bounded (batch) sources. Reference: FLIP-147: Support Checkpoints After Tasks Finished.

Changes in Amazon Managed Service for Apache Flink with Apache Flink 1.15

Studio notebooks
Managed Service for Apache Flink Studio now supports Apache Flink 1.15. Managed Service for Apache Flink Studio uses Apache Zeppelin notebooks to provide a single-interface development experience for developing, debugging code, and running Apache Flink stream processing applications. You can learn more about Managed Service for Apache Flink Studio and how to get started at Use a Studio notebook with Managed Service for Apache Flink.

EFO connector
When upgrading to Managed Service for Apache Flink version 1.15, ensure that you are using the most recent EFO Connector, that is, any version 1.15.3 or newer. For more information, see FLINK-29324.

Scala Decoupling
Starting with Flink 1.15.2, you need to bundle the Scala standard library of your choice in your Scala applications.

Kinesis Data Firehose Sink
When upgrading to Managed Service for Apache Flink version 1.15, ensure that you are using the most recent Amazon Kinesis Data Firehose Sink.

Kafka Connectors
When upgrading to Amazon Managed Service for Apache Flink version 1.15, ensure that you are using the most recent Kafka connector APIs. Apache Flink has deprecated FlinkKafkaConsumer and FlinkKafkaProducer; these APIs for the Kafka sink cannot commit to Kafka in Flink 1.15. Ensure that you are using KafkaSource and KafkaSink.
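The replacement APIs look roughly like the following. This is a minimal sketch, not code from the Getting Started application; the broker address, topic names, and group ID are placeholders:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Replacement for the deprecated FlinkKafkaConsumer.
KafkaSource<String> source = KafkaSource.<String>builder()
    .setBootstrapServers("broker1:9092")           // placeholder broker address
    .setTopics("input-topic")                      // placeholder topic
    .setGroupId("my-consumer-group")               // placeholder group ID
    .setStartingOffsets(OffsetsInitializer.earliest())
    .setValueOnlyDeserializer(new SimpleStringSchema())
    .build();

// Replacement for the deprecated FlinkKafkaProducer; this one can commit offsets correctly.
KafkaSink<String> sink = KafkaSink.<String>builder()
    .setBootstrapServers("broker1:9092")
    .setRecordSerializer(KafkaRecordSerializationSchema.builder()
        .setTopic("output-topic")
        .setValueSerializationSchema(new SimpleStringSchema())
        .build())
    .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
    .build();

DataStream<String> stream = env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka source");
stream.sinkTo(sink);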
Components
• Java: 11 (recommended)
• Scala: 2.12
• Managed Service for Apache Flink Flink Runtime (aws-kinesisanalytics-runtime): 1.2.0
• AWS Kinesis Connector (flink-connector-kinesis): 1.15.4
• Apache Beam (Beam applications only): 2.33.0, with Jackson version 2.12.2

Known issues

Kafka Commit on checkpointing fails repeatedly after a broker restart
There is a known open source Apache Flink issue with the Apache Kafka connector in Flink version 1.15, caused by a critical open source Kafka Client bug in Kafka Client 2.8.1. For more information, see Kafka Commit on checkpointing fails repeatedly after a broker restart and KafkaConsumer is unable to recover connection to group coordinator after commitOffsetAsync exception. To avoid this issue, we recommend that you use Apache Flink 1.18 or later in Amazon Managed Service for Apache Flink.

Earlier version information for Managed Service for Apache Flink

Note
Apache Flink versions 1.6, 1.8, and 1.11 have not
been supported by the Apache Flink community for over three years. We now plan to end support for these versions in Amazon Managed Service for Apache Flink. From November 5, 2024, you will not be able to create new applications for these Flink versions. You can continue running existing applications at this time.
For all Regions with the exception of the China Regions and the AWS GovCloud (US) Regions, from February 24, 2025, you will no longer be able to create, start, or run applications using these versions of Apache Flink in Amazon Managed Service for Apache Flink. For the China Regions and the AWS GovCloud (US) Regions, from March 19, 2025, you will no longer be able to create, start, or run applications using these versions of Apache Flink in Amazon Managed Service for Apache Flink.
You can upgrade your applications statefully using the in-place version upgrades feature in Managed Service for Apache Flink. For more information, see Use in-place version upgrades for Apache Flink.

Note
Apache Flink version 1.13 has not been supported by the Apache Flink community for over three years. We now plan to end support for this version in Amazon Managed Service for Apache Flink on October 16, 2025. After this date, you will no longer be able to create, start, or run applications using Apache Flink version 1.13 in Amazon Managed Service for Apache Flink. You can upgrade your applications statefully using the in-place version upgrades feature in Managed Service for Apache Flink. For more information, see Use in-place version upgrades for Apache Flink.

Version 1.15.2 is supported by Managed Service for Apache Flink, but is no longer supported by the Apache Flink community.

This topic contains the following sections:
• Using the Apache Flink Kinesis Streams connector with previous Apache Flink versions
• Building applications with Apache Flink 1.8.2
• Building applications with Apache Flink 1.6.2
• Upgrading applications
• Available connectors in Apache Flink 1.6.2 and 1.8.2
• Getting started: Flink 1.13.2
• Getting started: Flink 1.11.1 - deprecating
• Getting started: Flink 1.8.2 - deprecating
• Getting started: Flink 1.6.2 - deprecating
• Earlier version (legacy) examples for Managed Service for Apache Flink

Using the Apache Flink Kinesis Streams connector with previous Apache Flink versions
The Apache Flink Kinesis Streams connector was not included in Apache Flink prior to version 1.11. In order for your application to use the Apache Flink Kinesis connector with previous versions of Apache Flink, you must download, compile, and install the version of Apache Flink that your application uses. This connector is used to consume data from a Kinesis stream used as an application source, or to write data to a Kinesis stream used for application output.

Note
Ensure that you are building the connector with KPL version 0.14.0 or higher.

To download and install the Apache Flink version 1.8.2 source code, do the following:
1. Ensure that you have Apache Maven installed, and that your JAVA_HOME environment variable points to a JDK rather than a JRE. You can test your Apache Maven install with the following command:
mvn -version
2. Download the Apache Flink version 1.8.2 source code:
wget https://archive.apache.org/dist/flink/flink-1.8.2/flink-1.8.2-src.tgz
3. Uncompress the Apache Flink source code:
tar -xvf flink-1.8.2-src.tgz
4. Change to the Apache Flink source code directory:
cd flink-1.8.2
5. Compile and install Apache Flink:
mvn clean install -Pinclude-kinesis -DskipTests

Note
If you are compiling Flink on Microsoft Windows, you need to add the -Drat.skip=true parameter.

Building applications with Apache Flink 1.8.2
This section contains information about components that you use for building Managed Service for Apache Flink applications that work with Apache Flink 1.8.2. Use the following component versions for Managed Service for Apache Flink applications:
• Java: 1.8 (recommended)
• Apache Flink: 1.8.2
• Managed Service for Apache Flink for Flink Runtime (aws-kinesisanalytics-runtime): 1.0.1
• Managed Service for Apache Flink Flink Connectors (aws-kinesisanalytics-flink): 1.0.1
• Apache Maven: 3.1

To compile an application using Apache Flink 1.8.2, run Maven with the following parameter:
mvn package -Dflink.version=1.8.2

For an example of a pom.xml file for a Managed Service for Apache Flink application that uses Apache Flink version 1.8.2, see the Managed Service for Apache Flink 1.8.2 Getting Started Application. For information about how to build and use application code for a Managed Service for Apache Flink application, see Create an application.
Building applications with Apache Flink 1.6.2
This section contains information about components that you use for building Managed Service for Apache Flink applications that work with Apache Flink 1.6.2. Use the following component versions for Managed Service for Apache Flink applications:
• Java: 1.8 (recommended)
• AWS Java SDK: 1.11.379
• Apache Flink: 1.6.2
• Managed Service for Apache Flink for Flink Runtime (aws-kinesisanalytics-runtime): 1.0.1
• Managed Service for Apache Flink Flink Connectors (aws-kinesisanalytics-flink): 1.0.1
• Apache Maven: 3.1
• Apache Beam: Not supported with Apache Flink 1.6.2.

Note
When using Managed Service for Apache Flink Runtime version 1.0.1, you specify the version of Apache Flink in your pom.xml file rather than using the -Dflink.version parameter when compiling your application code.

For an example of a pom.xml file for a Managed Service for Apache Flink application that uses Apache Flink version 1.6.2, see the Managed Service for Apache Flink 1.6.2 Getting Started Application. For information about how to build and use application code for a Managed Service for Apache Flink application, see Create an application.

Upgrading applications
To upgrade the Apache Flink version of an Amazon Managed Service for Apache Flink application, use the in-place Apache Flink version upgrade feature using the AWS CLI, AWS SDK, AWS CloudFormation, or the AWS Management Console. For more information, see Use in-place version upgrades for Apache Flink. You can use this feature with any existing applications you use with Amazon Managed Service for Apache Flink in READY or RUNNING state.

Available connectors in Apache Flink 1.6.2 and 1.8.2
The Apache Flink framework contains connectors for accessing data from a variety of sources.
• For information about connectors available in the Apache Flink 1.6.2 framework, see Connectors (1.6.2) in the Apache Flink documentation (1.6.2).
• For information about connectors available in the Apache Flink 1.8.2 framework, see Connectors (1.8.2) in the Apache Flink documentation (1.8.2).

Getting started: Flink 1.13.2
This section introduces you to the fundamental concepts of Managed Service for Apache Flink and the DataStream API. It describes the available options for creating and testing your applications. It also provides instructions for installing the necessary tools to complete the tutorials in this guide and to create your first application.
Topics
• Components of a Managed Service for Apache Flink application
• Prerequisites for completing the exercises
• Step 1: Set up an AWS account and create an administrator user
• Next step
• Step 2: Set up the AWS Command Line Interface (AWS CLI)
• Step 3: Create and run a Managed Service for Apache Flink application
• Step 4: Clean up AWS resources
• Step 5: Next steps

Components of a Managed Service for Apache Flink application
To process data, your Managed Service for Apache Flink application uses a Java/Apache Maven or Scala application that processes input and produces output using the Apache Flink runtime. A Managed Service for Apache Flink application has the following components:
• Runtime properties: You can use runtime properties to configure your application without recompiling your application code.
• Source: The application consumes data by using a source. A source connector reads data from a Kinesis data stream, an Amazon S3 bucket, etc. For more information, see Add streaming data sources.
• Operators: The application processes data by using one or more operators. An operator can transform, enrich, or aggregate data. For more information, see Operators.
• Sink: The application produces data to external sources by using sinks. A sink connector writes data to a
Kinesis data stream, a Firehose stream, an Amazon S3 bucket, etc. For more information, see Write data using sinks.

After you create, compile, and package your application code, you upload the code package to an Amazon Simple Storage Service (Amazon S3) bucket. You then create a Managed Service for Apache Flink application. You pass in the code package location, a Kinesis data stream as the streaming data source, and typically a streaming or file location that receives the application's processed data.

Prerequisites for completing the exercises
To complete the steps in this guide, you must have the following:
• Java Development Kit (JDK) version 11. Set the JAVA_HOME environment variable to point to your JDK install location.
• We recommend that you use a development environment (such as Eclipse Java Neon or IntelliJ IDEA) to develop and compile your application.
• Git client. Install the Git client if you haven't already.
• Apache Maven Compiler Plugin. Maven must be in your working path. To test your Apache Maven installation, enter the following:
$ mvn -version

To get started, go to Set up an AWS account and create an administrator user.

Step 1: Set up an AWS account and create an administrator user

Sign up for an AWS account
If you do not have an AWS account, complete the following steps to create one.

To sign up for an AWS account
1. Open https://portal.aws.amazon.com/billing/signup.
2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access.

AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account.

Create a user with administrative access
After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks.

Secure your AWS account root user
1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using the root user, see Signing in as the root user in the AWS Sign-In User Guide.
2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide.

Create a user with administrative access
1. Enable IAM Identity Center. For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide.
2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide.
Sign in as the user with administrative access
• To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide.

Assign access to additional users
1. In IAM Identity Center, create a permission set that follows the best practice of applying least-privilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide.
2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide.

Grant programmatic access
Users need programmatic access if they want to interact with AWS outside of the AWS Management Console. The way to grant programmatic access depends on the type of user that's accessing AWS. To grant users programmatic access, choose one of the following options.
Which user needs programmatic access?
• Workforce identity (users managed in IAM Identity Center): To use temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs, follow the instructions for the interface that you want to use:
  • For the AWS CLI, see Configuring the AWS CLI to use AWS IAM Identity Center in the AWS Command Line Interface User Guide.
  • For AWS SDKs, tools, and AWS APIs, see IAM Identity Center authentication in the AWS SDKs and Tools Reference Guide.
• IAM: To use temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs, follow the instructions in Using temporary credentials with AWS resources in the IAM User Guide.
• IAM (not recommended): To use long-term credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs, follow the instructions for the interface that you want to use:
  • For the AWS CLI, see Authenticating using IAM user credentials in the AWS Command Line Interface User Guide.
  • For AWS SDKs and tools, see Authenticate using long-term credentials in the AWS SDKs and Tools Reference Guide.
  • For AWS APIs, see Managing access keys for IAM users in the IAM User Guide.

Next step
Step 2: Set up the AWS Command Line Interface (AWS CLI)

Step 2: Set up the AWS Command Line Interface (AWS CLI)
In this step, you download and configure the AWS CLI to use with Managed Service for Apache Flink.

Note
The getting started exercises in this guide assume that you are using administrator credentials (adminuser) in your account to perform the operations.

Note
If you already have the AWS CLI installed, you might need to upgrade to get the latest functionality. For more information, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. To check the version of the AWS CLI, run the following command:
aws --version
The exercises in this tutorial require the following AWS CLI version or later:
aws-cli/1.16.63

To set up the AWS CLI
1. Download and configure the AWS CLI. For instructions, see the following topics in the AWS Command Line Interface User Guide:
• Installing the AWS Command Line Interface
• Configuring the AWS CLI
2. Add a named profile for the administrator user in the AWS CLI config file. You use this profile when executing the AWS CLI commands. For more information about named profiles, see Named Profiles in the AWS Command Line Interface User Guide.
[profile adminuser]
aws_access_key_id = adminuser access key ID
aws_secret_access_key = adminuser secret access key
region = aws-region

For a list of available AWS Regions, see Regions and Endpoints in the Amazon Web Services General Reference.

Note
The example code and commands in this tutorial use the US West (Oregon) Region. To use a different Region, change the Region in the code and commands for this tutorial to the Region you want to use.

3. Verify the setup by entering the following help command at the command prompt:
aws help

After you set up an AWS account and the AWS CLI, you can try the next exercise, in which you configure a sample application and test the end-to-end setup.
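Before moving on, it can help to confirm that the named profile actually resolves credentials. One way to check, assuming the adminuser profile above (both commands are standard AWS CLI calls):

# Confirm the profile resolves valid credentials and the expected account.
aws sts get-caller-identity --profile adminuser

# List Kinesis data streams in the tutorial Region (an empty list is fine at this point).
aws kinesis list-streams --region us-west-2 --profile adminuser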
Next step
Step 3: Create and run a Managed Service for Apache Flink application

Step 3: Create and run a Managed Service for Apache Flink application
In this exercise, you create a Managed Service for Apache Flink application with data streams as a source and a sink. This section contains the following steps:
• Create two Amazon Kinesis data streams
• Write sample records to the input stream
• Download and examine the Apache Flink streaming Java code
• Compile the application code
• Upload the Apache Flink streaming Java code
• Create and run the Managed Service for Apache Flink application
• Next step

Create two Amazon Kinesis data streams
Before you create a Managed Service for Apache Flink application for this exercise, create two Kinesis data streams (ExampleInputStream and ExampleOutputStream). Your application uses these streams for the application source and destination streams. You can create these streams using either the Amazon Kinesis console or the following AWS CLI command. For console instructions, see Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide.

To create the data streams (AWS CLI)
1. To create the first stream (ExampleInputStream), use the following Amazon Kinesis create-stream AWS CLI command.
$ aws kinesis create-stream \
--stream-name ExampleInputStream \
--shard-count 1 \
--region us-west-2 \
--profile adminuser
2. To create the second stream that the application uses to write output, run the same command, changing the stream name to ExampleOutputStream.
$ aws kinesis create-stream \
--stream-name ExampleOutputStream \
--shard-count 1 \
--region us-west-2 \
--profile adminuser

Write sample records to the input stream
In this section, you use a Python script to write sample records to the stream for the application to process.

Note
This section requires the AWS SDK for Python (Boto).

1. Create a file named stock.py with the following contents:

import datetime
import json
import random
import boto3

STREAM_NAME = "ExampleInputStream"


def get_data():
    return {
        'event_time': datetime.datetime.now().isoformat(),
        'ticker': random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']),
        'price': round(random.random() * 100, 2)}


def generate(stream_name, kinesis_client):
    while True:
        data = get_data()
        print(data)
        kinesis_client.put_record(
            StreamName=stream_name,
            Data=json.dumps(data),
            PartitionKey="partitionkey")


if __name__ == '__main__':
    generate(STREAM_NAME, boto3.client('kinesis', region_name='us-west-2'))

2. Later in the tutorial, you run the stock.py script to send data to the application.
$ python stock.py

Download and examine the Apache Flink streaming Java code
The Java application code for this example is available from GitHub. To download the application code, do the following:
1. Clone the remote repository using the following command:
git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git
2. Navigate to the amazon-kinesis-data-analytics-java-examples/GettingStarted directory.
Note the following about the application code:
• A Project Object Model (pom.xml) file contains information about the application's configuration and dependencies, including the Managed Service for Apache Flink libraries.
• The BasicStreamingJob.java file contains the main method that defines the application's functionality.
• The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source:
return env.addSource(new FlinkKinesisConsumer<>(inputStreamName,
    new SimpleStringSchema(), inputProperties));
• Your application creates source and sink connectors to access external resources using a StreamExecutionEnvironment object.
• The application creates source and sink connectors using static properties. To use dynamic application properties, use the createSourceFromApplicationProperties and createSinkFromApplicationProperties methods to create the connectors. These methods read the application's properties to configure the connectors. For more information about runtime properties, see Use runtime properties.
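For symmetry, the same example writes to the output stream with a FlinkKinesisProducer. The following is a rough sketch of that pattern, assuming outputProperties and outputStreamName are defined the same way as their input counterparts:

// Sketch: a Kinesis sink matching the source above. FlinkKinesisProducer comes from
// the flink-connector-kinesis dependency; outputProperties and outputStreamName are
// assumed to be configured like the input equivalents.
FlinkKinesisProducer<String> sink =
    new FlinkKinesisProducer<>(new SimpleStringSchema(), outputProperties);
sink.setDefaultStream(outputStreamName);
sink.setDefaultPartition("0");
// Attach the sink to the processed stream.
input.addSink(sink);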
Compile the application code
In this section, you use the Apache Maven compiler to create the Java code for the application. For information about installing Apache Maven and the Java Development Kit (JDK), see Fulfill the prerequisites for completing the exercises.

To compile the application code
1. To use your application code, you compile and package it into a JAR file. You can compile and package your code in one of two ways:
• Use the command-line Maven tool. Create your JAR file by running the following command in the directory that contains the pom.xml file:
mvn package -Dflink.version=1.13.2
• Use your development environment. See your development environment documentation for details.

Note
The provided source code relies on libraries from Java 11. You can either upload your package as a JAR file, or you can compress your package and upload it as a ZIP file. If you create your application using the AWS CLI, you specify your code content type (JAR or ZIP).

2. If there are errors while compiling, verify that your JAVA_HOME environment variable is correctly set.

If the application compiles successfully, the following file is created:
target/aws-kinesis-analytics-java-apps-1.0.jar

Upload the Apache Flink streaming Java code
In this section, you create an Amazon Simple Storage Service (Amazon S3) bucket and upload your application code.

To upload the application code
1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Choose Create bucket.
3. Enter ka-app-code-<username> in the Bucket name field. Add a suffix to the bucket name, such as your user name, to make it globally unique. Choose Next.
4. In the Configure options step, keep the settings as they are, and choose Next.
5. In the Set permissions step, keep the settings as they are, and choose Next.
6. Choose Create bucket.
7. In the Amazon S3 console, choose the ka-app-code-<username> bucket, and choose Upload.
8. In the Select files step, choose Add files. Navigate to the aws-kinesis-analytics-java-apps-1.0.jar file that you created in the previous step. Choose Next.
9. You don't need to change any of the settings for the object, so choose Upload.

Your application code is now stored in an Amazon S3 bucket where your application can access it.

Create and run the Managed Service for Apache Flink application
You can create and run a Managed Service for Apache Flink application using either the console or the AWS CLI.

Note
When you create the application using the console, your AWS Identity and Access Management (IAM) and Amazon CloudWatch Logs resources are created for you. When you create the application using the AWS CLI, you create these resources separately.

Topics
• Create and run the application (console)
• Create and run the application (AWS CLI)

Create and run the application (console)
Follow these steps to create, configure, update, and run the application using the console.

Create the Application
1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
2. On the Managed Service for Apache Flink dashboard, choose Create analytics application.
3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows:
• For Application name, enter MyApplication.
• For Description, enter My java test app.
• For Runtime, choose Apache Flink.
• Leave the version pulldown as Apache Flink version 1.13.
4. For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2.
5. Choose Create application.

Note
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources.
These IAM resources are named using your application name and Region as follows:
• Policy: kinesis-analytics-service-MyApplication-us-west-2
• Role: kinesisanalytics-MyApplication-us-west-2

Edit the IAM policy
Edit the IAM policy to add permissions to access the Kinesis data streams.
1. Open the IAM console at https://console.aws.amazon.com/iam/.
2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section.
3. On the Summary page, choose Edit policy. Choose the JSON tab.
4. Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID.
"arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:kinesis-analytics-log-stream" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleOutputStream" } ] Getting Started: Flink 1.13.2 127 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide } Configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter aws-kinesis-analytics-java-apps-1.0.jar. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Enter the following: Group ID Key Value ProducerConfigProp flink.inputstream. LATEST erties initpos ProducerConfigProp aws.region us-west-2 erties ProducerConfigProp AggregationEnabled false erties 5. Under Monitoring, ensure that the Monitoring metrics level is set to Application. 6. For CloudWatch logging, select the Enable check box. 7. Choose Update. Note When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication Getting Started: Flink 1.13.2 128 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Log stream: kinesis-analytics-log-stream Run the application The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job. Stop the application On the MyApplication page, choose Stop. Confirm the action. Update the application Using the console, you can update application settings such as application properties, monitoring settings, and the location or file name of the application JAR. You can also reload the application JAR from the Amazon S3 bucket if you need to update the application code. On the MyApplication page, choose Configure. Update the application settings and choose Update. Create and run the application (AWS CLI) In this section, you use the AWS CLI to create and run the Managed Service for Apache Flink application. Managed Service for Apache Flink uses the kinesisanalyticsv2 AWS CLI command to create and interact with Managed Service for Apache Flink applications. Create a permissions policy Note You must create a permissions policy and role for your application. If you do not create these IAM resources, your application cannot access its data and log streams. First, you create a permissions policy with two statements: one that grants permissions for the read action on the source stream, and another that grants permissions for write actions on the sink stream. You then attach the policy to an IAM role (which you create in the next section). Thus, when Managed Service for Apache Flink assumes the role, the service has the necessary permissions to read from the source stream and write to the sink stream. Getting Started: Flink 1.13.2 129 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Use the following code to create the AKReadSourceStreamWriteSinkStream permissions policy. 
Replace username with the user name that you used to create the Amazon S3 bucket to store the application code. Replace the account ID in the Amazon Resource Names (ARNs) (012345678901) with your account ID.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::ka-app-code-username",
                "arn:aws:s3:::ka-app-code-username/*"
            ]
        },
        {
            "Sid": "ReadInputStream",
            "Effect": "Allow",
            "Action": "kinesis:*",
            "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleInputStream"
        },
        {
            "Sid": "WriteOutputStream",
            "Effect": "Allow",
            "Action": "kinesis:*",
            "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleOutputStream"
        }
    ]
}

For step-by-step instructions to create a permissions policy, see Tutorial: Create and Attach Your First Customer Managed Policy in the IAM User Guide.

Note
To access other Amazon services, you can use the AWS SDK for Java. Managed Service for Apache Flink automatically sets the credentials required by the SDK to those of the service execution IAM role that is associated with your application. No additional steps are needed.

Create an IAM role
In this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream. Managed Service for Apache Flink cannot access your stream without permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role, and the permissions policy determines what Managed Service for Apache Flink can do after assuming the role. You attach the permissions policy that you created in the preceding section to this role.
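A trust policy for this role typically looks like the following sketch. The service principal shown is the one used by Kinesis Data Analytics applications; verify it against the role that the console creates for you:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "kinesisanalytics.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}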
To create an IAM role
1. Open the IAM console at https://console.aws.amazon.com/iam/.
2. In the navigation pane, choose Roles, Create Role.
3. Under Select type of trusted identity, choose AWS Service. Under Choose the service that will use this role, choose Kinesis. Under Select your use case, choose Kinesis Analytics. Choose Next: Permissions.
4. On the Attach permissions policies page, choose Next: Review. You attach permissions policies after you create the role.
5. On the Create role page, enter MF-stream-rw-role for the Role name. Choose Create role. Now you have created a new IAM role called MF-stream-rw-role. Next, you update the trust and permissions policies for the role.
6. Attach the permissions policy to the role.

Note
For this exercise, Managed Service for Apache Flink assumes this role for both reading data from a Kinesis data stream (source) and writing output to another Kinesis data stream. So you attach the policy that you created in the previous step, the section called “Create a permissions policy”.

a. On the Summary page, choose the Permissions tab.
b. Choose Attach Policies.
c. In the search box, enter AKReadSourceStreamWriteSinkStream (the policy that you created in the previous section).
d. Choose the AKReadSourceStreamWriteSinkStream policy, and choose Attach policy.

You have now created the service execution role that your application uses to access resources. Make a note of the ARN of the new role. For step-by-step instructions for creating a role, see Creating an IAM Role (Console) in the IAM User Guide.

Create the Managed Service for Apache Flink application
1. Save the following JSON code to a file named create_request.json. Replace the sample role ARN with the ARN for the role that you created previously. Replace the bucket ARN suffix (username) with the suffix that you chose in the previous section. Replace the sample account ID (012345678901) in the service execution role with your account ID.

{
    "ApplicationName": "test",
    "ApplicationDescription": "my java test app",
    "RuntimeEnvironment": "FLINK-1_15",
    "ServiceExecutionRole": "arn:aws:iam::012345678901:role/MF-stream-rw-role",
    "ApplicationConfiguration": {
        "ApplicationCodeConfiguration": {
            "CodeContent": {
                "S3ContentLocation": {
                    "BucketARN": "arn:aws:s3:::ka-app-code-username",
                    "FileKey": "aws-kinesis-analytics-java-apps-1.0.jar"
                }
            },
            "CodeContentType": "ZIPFILE"
        },
        "EnvironmentProperties": {
            "PropertyGroups": [
                {
                    "PropertyGroupId": "ProducerConfigProperties",
                    "PropertyMap": {
                        "flink.stream.initpos": "LATEST",
                        "aws.region": "us-west-2",
                        "AggregationEnabled": "false"
                    }
                },
                {
                    "PropertyGroupId": "ConsumerConfigProperties",
                    "PropertyMap": {
                        "aws.region": "us-west-2"
                    }
                }
            ]
        }
    }
}

2. Execute the CreateApplication action with the preceding request to create the application:
aws kinesisanalyticsv2 create-application --cli-input-json file://create_request.json

The application is now created. You start the application in the next step.
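Optionally, you can confirm that the application was created and inspect its status before starting it. These are standard kinesisanalyticsv2 commands, using the application name from this tutorial:

# Verify the application exists and note its status (it should be READY).
aws kinesisanalyticsv2 describe-application --application-name test

# Or list all applications in the Region.
aws kinesisanalyticsv2 list-applications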
Start the Application
In this section, you use the StartApplication action to start the application.

To start the application
1. Save the following JSON code to a file named start_request.json.

{
    "ApplicationName": "test",
    "RunConfiguration": {
        "ApplicationRestoreConfiguration": {
            "ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT"
        }
    }
}

2. Execute the StartApplication action with the preceding request to start the application:
aws kinesisanalyticsv2 start-application --cli-input-json file://start_request.json

The application is now running. You can check the Managed Service for Apache Flink metrics on the Amazon CloudWatch console to verify that the application is working.

Stop the Application
In this section, you use the StopApplication action to stop the application.

To stop the application
1. Save the following JSON code to a file named stop_request.json.

{
    "ApplicationName": "test"
}

2. Execute the StopApplication action with the following request to stop the application:
aws kinesisanalyticsv2 stop-application --cli-input-json file://stop_request.json

The application is now stopped.

Add a CloudWatch Logging Option
You can use the AWS CLI to add an Amazon CloudWatch log stream to your application. For information about using CloudWatch Logs with your application, see the section called “Set up application logging in Managed Service for Apache Flink”.
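As a sketch, the call looks like the following. The log group and log stream names are placeholders, the log stream must already exist, and the account ID shown is the sample one used throughout this tutorial:

aws kinesisanalyticsv2 add-application-cloud-watch-logging-option \
    --application-name test \
    --current-application-version-id 1 \
    --cloud-watch-logging-option LogStreamARN=arn:aws:logs:us-west-2:012345678901:log-group:MyLogGroup:log-stream:MyLogStream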
Update Environment Properties
In this section, you use the UpdateApplication action to change the environment properties for the application without recompiling the application code. In this example, you change the Region of the source and destination streams.

To update environment properties for the application
1. Save the following JSON code to a file named update_properties_request.json.

{
    "ApplicationName": "test",
    "CurrentApplicationVersionId": 1,
    "ApplicationConfigurationUpdate": {
        "EnvironmentPropertyUpdates": {
            "PropertyGroups": [
                {
                    "PropertyGroupId": "ProducerConfigProperties",
                    "PropertyMap": {
                        "flink.stream.initpos": "LATEST",
                        "aws.region": "us-west-2",
                        "AggregationEnabled": "false"
                    }
                },
                {
                    "PropertyGroupId": "ConsumerConfigProperties",
                    "PropertyMap": {
                        "aws.region": "us-west-2"
                    }
                }
            ]
        }
    }
}

2. Execute the UpdateApplication action with the preceding request to update environment properties:
aws kinesisanalyticsv2 update-application --cli-input-json file://update_properties_request.json

Update the Application Code
When you need to update your application code with a new version of your code package, you use the UpdateApplication AWS CLI action.

Note
To load a new version of the application code with the same file name, you must specify the new object version. For more information about using Amazon S3 object versions, see Enabling or Disabling Versioning.

To use the AWS CLI, delete your previous code package from your Amazon S3 bucket, upload the new version, and call UpdateApplication, specifying the same Amazon S3 bucket and object name, and the new object version. The application will restart with the new code package.

The following sample request for the UpdateApplication action reloads the application code and restarts the application. Update the CurrentApplicationVersionId to the current application version. You can check the current application version using the ListApplications or DescribeApplication actions. Update the bucket name suffix (<username>) with the suffix that you chose in the section called “Create two Amazon Kinesis data streams”.

{
    "ApplicationName": "test",
    "CurrentApplicationVersionId": 1,
    "ApplicationConfigurationUpdate": {
        "ApplicationCodeConfigurationUpdate": {
            "CodeContentUpdate": {
                "S3ContentLocationUpdate": {
                    "BucketARNUpdate": "arn:aws:s3:::ka-app-code-username",
                    "FileKeyUpdate": "aws-kinesis-analytics-java-apps-1.0.jar",
                    "ObjectVersionUpdate": "SAMPLEUehYngP87ex1nzYIGYgfhypvDU"
                }
            }
        }
    }
}

Next step
Step 4: Clean up AWS resources

Step 4: Clean up AWS resources
This section includes procedures for cleaning up AWS resources created in the Getting Started tutorial. This topic contains the following sections:
• Delete your Managed Service for Apache Flink application
• Delete your Kinesis data streams
• Delete your Amazon S3 object and bucket
• Delete your IAM resources
• Delete your CloudWatch resources
• Next step

Delete your Managed Service for Apache Flink application
1. Open the Kinesis console at https://console.aws.amazon.com/kinesis.
2. In the Managed Service for Apache Flink panel, choose MyApplication.
3. In the application's page, choose Delete and then confirm the deletion.
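If you created the application with the AWS CLI, you can delete it the same way. This is a sketch; the timestamp below is a placeholder and must be the CreateTimestamp returned by DescribeApplication:

# Look up the application's creation timestamp first.
aws kinesisanalyticsv2 describe-application --application-name test

# Then delete the application (the timestamp shown is a placeholder).
aws kinesisanalyticsv2 delete-application \
    --application-name test \
    --create-timestamp 2024-01-01T00:00:00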
Delete your Kinesis data streams 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. In the Kinesis Data Streams panel, choose ExampleInputStream. 3. In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion. 4. In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion. Delete your Amazon S3 object and bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose the ka-app-code-<username> bucket. 3. Choose Delete and then enter the bucket name to confirm deletion. Delete your IAM resources 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. In the navigation bar, choose Policies. 3. In the filter control, enter kinesis. 4. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy. 5. Choose Policy Actions and then choose Delete. 6. In the navigation bar, choose Roles. 7. Choose the kinesis-analytics-MyApplication-us-west-2 role. 8. Choose Delete role and then confirm the deletion. Delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. 4. Choose Delete Log Group and then confirm the deletion. Next step Step 5: Next steps Step 5: Next steps Now that you've created and run a basic Managed Service for Apache Flink application, see the following resources for more advanced Managed Service for Apache Flink solutions. • The AWS Streaming Data Solution for Amazon Kinesis: The AWS Streaming Data Solution for Amazon Kinesis automatically configures the AWS services necessary to easily capture, store, process, and deliver streaming data. The solution provides multiple options for solving streaming data use cases. The Managed Service for Apache Flink option provides an end-to-end streaming ETL example demonstrating a real-world application that runs analytical operations on simulated New York taxi data. The solution sets up all necessary AWS resources such as IAM roles and policies, a CloudWatch dashboard, and CloudWatch alarms. • AWS Streaming Data Solution for Amazon
analytics-java-api-049
analytics-java-api.pdf
49
Apache Flink solutions. • The AWS Streaming Data Solution for Amazon Kinesis: The AWS Streaming Data Solution for Amazon Kinesis automatically configures the AWS services necessary to easily capture, store, process, and deliver streaming data. The solution provides multiple options for solving streaming data use cases. The Managed Service for Apache Flink option provides an end-to-end streaming ETL example demonstrating a real-world application that runs analytical operations on simulated New York taxi data. The solution sets up all necessary AWS resources such as IAM roles and policies, a CloudWatch dashboard, and CloudWatch alarms. • AWS Streaming Data Solution for Amazon MSK: The AWS Streaming Data Solution for Amazon MSK provides AWS CloudFormation templates where data flows through producers, streaming storage, consumers, and destinations. • Clickstream Lab with Apache Flink and Apache Kafka: An end-to-end lab for clickstream use cases using Amazon Managed Streaming for Apache Kafka for streaming storage and Managed Service for Apache Flink for Apache Flink applications for stream processing. • Amazon Managed Service for Apache Flink Workshop: In this workshop, you build an end-to-end streaming architecture to ingest, analyze, and visualize streaming data in near real-time. You set out to improve the operations of a taxi company in New York City. You analyze the telemetry data of a taxi fleet in New York City in near real-time to optimize their fleet operations. • Learn Flink: Hands On Training: Official introductory Apache Flink training that gets you started writing scalable streaming ETL, analytics, and event-driven applications. Note Be aware that Managed Service for Apache Flink does not support the Apache Flink version (1.12) used in this training. You can use Flink 1.15.2 in Managed Service for Apache Flink. Getting started: Flink 1.11.1 - deprecating Note Apache Flink versions 1.6, 1.8, and 1.11 have not been supported by the Apache Flink community for over three years. We plan to deprecate these versions in Amazon Managed Service for Apache Flink on November 5, 2024. Starting from this date, you will not be able to create new applications for these Flink versions. You can continue running existing applications at this time. You can upgrade your applications statefully using the in-place version upgrades feature in Amazon Managed Service for Apache Flink. For more information, see Use in-place version upgrades for Apache Flink. This topic contains a version of the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial that uses Apache Flink 1.11.1. This section introduces you to the fundamental concepts of Managed Service for Apache Flink and the DataStream API. It describes the available options for creating and testing your applications. It also provides instructions for installing the necessary tools to complete the tutorials in this guide and to create your first application.
Topics • Components of a Managed Service for Apache Flink application • Prerequisites for completing the exercises • Step 1: Set up an AWS account and create an administrator user • Step 2: Set up the AWS Command Line Interface (AWS CLI) • Step 3: Create and run a Managed Service for Apache Flink application • Step 4: Clean up AWS resources • Step 5: Next steps Components of a Managed Service for Apache Flink application To process data, your Managed Service for Apache Flink application uses a Java/Apache Maven or Scala application that processes input and produces output using the Apache Flink runtime. A Managed Service for Apache Flink application has the following components: • Runtime properties: You can use runtime properties to configure your application without recompiling your application code. • Source: The application consumes data by using a source. A source connector reads data from a Kinesis data stream, an Amazon S3 bucket, etc. For more information, see Add streaming data sources. • Operators: The application processes data by using one or more operators. An operator can transform, enrich, or aggregate data. For more information, see Operators. • Sink: The application produces data to external sources by using sinks. A sink connector writes data to a Kinesis data stream, a Firehose stream, an Amazon S3 bucket, etc. For more information, see Write data using sinks. After you create, compile, and package your application code, you upload the code package to an Amazon Simple Storage Service (Amazon S3) bucket. You then create a Managed Service for Apache Flink application. You pass in the code package location, a Kinesis data stream as the streaming data source, and typically a streaming or file location that receives the application's processed data. Prerequisites for completing the exercises To complete the steps in this guide, you must have
analytics-java-api-050
analytics-java-api.pdf
50
writes data to a Kinesis data stream, a Firehose stream, an Amazon S3 bucket, etc. For more information, see Write data using sinks. After you create, compile, and package your application code, you upload the code package to an Amazon Simple Storage Service (Amazon S3) bucket. You then create a Managed Service for Apache Flink application. You pass in the code package location, a Kinesis data stream as the streaming data source, and typically a streaming or file location that receives the application's processed data. Prerequisites for completing the exercises To complete the steps in this guide, you must have the following: • Java Development Kit (JDK) version 11. Set the JAVA_HOME environment variable to point to your JDK install location. • We recommend that you use a development environment (such as Eclipse Java Neon or IntelliJ IDEA) to develop and compile your application. • Git client. Install the Git client if you haven't already. • Apache Maven Compiler Plugin. Maven must be in your working path. To test your Apache Maven installation, enter the following: $ mvn -version To get started, go to Set up an AWS account and create an administrator user. Step 1: Set up an AWS account and create an administrator user Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using the root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide.
Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying least-privilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Grant programmatic access Users need programmatic access if they want to interact with AWS outside of the AWS Management Console. The way to grant programmatic access depends on the type of user that's accessing AWS. To grant users programmatic access, choose one of the following options. • Workforce identity (Users managed in IAM Identity Center): Use temporary credentials to sign programmatic requests, following the instructions for the
analytics-java-api-051
analytics-java-api.pdf
51
Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Grant programmatic access Users need programmatic access if they want to interact with AWS outside of the AWS Management Console. The way to grant programmatic access depends on the type of user that's accessing AWS. To grant users programmatic access, choose one of the following options. • Workforce identity (Users managed in IAM Identity Center): Use temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs, following the instructions for the interface that you want to use. For the AWS CLI, see Configuring the AWS CLI to use AWS IAM Identity Center in the AWS Command Line Interface User Guide. For AWS SDKs, tools, and AWS APIs, see IAM Identity Center authentication in the AWS SDKs and Tools Reference Guide. • IAM: Use temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs, following the instructions in Using temporary credentials with AWS resources in the IAM User Guide. • IAM: (Not recommended) Use long-term credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs, following the instructions for the interface that you want to use. For the AWS CLI, see Authenticating using IAM user credentials in the AWS Command Line Interface User Guide. For AWS SDKs and tools, see Authenticate using long-term credentials in the AWS SDKs and Tools Reference Guide. For AWS APIs, see Managing access keys for IAM users in the IAM User Guide. Next step Set up the AWS Command Line Interface (AWS CLI) Step 2: Set up the AWS Command Line Interface (AWS CLI) In this step, you download and configure the AWS CLI to use with Managed Service for Apache Flink. Note The getting started exercises in this guide assume that you are using administrator credentials (adminuser) in your account to perform the operations. Note If you already have the AWS CLI installed, you might need to upgrade to get the latest functionality. For more information, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. To check the version of the AWS CLI, run the following command: aws --version The exercises in this tutorial require the following AWS CLI version or later: aws-cli/1.16.63 To set up the AWS CLI 1. Download and configure the AWS CLI. For instructions, see the following topics in the AWS Command Line Interface User Guide: • Installing the AWS Command Line Interface • Configuring the AWS CLI 2. Add a named profile for the administrator user in the AWS CLI config file. You use this profile when executing the AWS CLI commands. For more information about named profiles, see Named Profiles in the AWS Command Line Interface User Guide.

[profile adminuser]
aws_access_key_id = adminuser access key ID
aws_secret_access_key = adminuser secret access key
region = aws-region

For a list of available AWS Regions, see Regions and Endpoints in the Amazon Web Services General Reference.
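Note that the same named profile also works from code. As a quick illustration (a sketch of my own, not a step in the tutorial), the boto3 library used elsewhere in this guide can pick up the adminuser profile like this:

import boto3

# Create a session from the named profile configured above; the region can
# override or simply repeat the profile's default (us-west-2 in this tutorial).
session = boto3.Session(profile_name="adminuser", region_name="us-west-2")

# Clients created from this session sign requests with the adminuser credentials.
kinesis = session.client("kinesis")
print(kinesis.list_streams()["StreamNames"])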
Note The example code and commands in this tutorial use the US West (Oregon) Region. To use a different Region, change the Region in the code and commands for this tutorial to the Region you want to use. 3. Verify the setup by entering the following help command at the command prompt: aws help After you set up an AWS account and the AWS CLI, you can try the next exercise, in which you configure a sample application and test the end-to-end setup. Next step Step 3: Create and run a Managed Service for Apache Flink application Step 3: Create and run a Managed Service for Apache Flink application In this exercise, you create a Managed Service for Apache Flink application with data streams as a source and a sink. This section contains the following steps: • Create two Amazon Kinesis data streams • Write sample records to the input stream • Download and examine the Apache Flink streaming Java code • Compile the application code • Upload the Apache Flink streaming Java code • Create and run the Managed Service for Apache Flink application • Next step Create two Amazon Kinesis data streams Before you create a Managed Service for Apache Flink application
analytics-java-api-052
analytics-java-api.pdf
52
with data streams as a source and a sink. This section contains the following steps: • Create two Amazon Kinesis data streams • Write sample records to the input stream • Download and examine the Apache Flink streaming Java code • Compile the application code • Upload the Apache Flink streaming Java code • Create and run the Managed Service for Apache Flink application • Next step Create two Amazon Kinesis data streams Before you create a Managed Service for Apache Flink application for this exercise, create two Kinesis data streams (ExampleInputStream and ExampleOutputStream). Your application uses these streams for the application source and destination streams. You can create these streams using either the Amazon Kinesis console or the following AWS CLI command. For console instructions, see Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. To create the data streams (AWS CLI) 1. To create the first stream (ExampleInputStream), use the following Amazon Kinesis create-stream AWS CLI command.

$ aws kinesis create-stream \
    --stream-name ExampleInputStream \
    --shard-count 1 \
    --region us-west-2 \
    --profile adminuser

2. To create the second stream that the application uses to write output, run the same command, changing the stream name to ExampleOutputStream.

$ aws kinesis create-stream \
    --stream-name ExampleOutputStream \
    --shard-count 1 \
    --region us-west-2 \
    --profile adminuser

Write sample records to the input stream In this section, you use a Python script to write sample records to the stream for the application to process. Note This section requires the AWS SDK for Python (Boto). 1. Create a file named stock.py with the following contents:

import datetime
import json
import random

import boto3

STREAM_NAME = "ExampleInputStream"


def get_data():
    return {
        "EVENT_TIME": datetime.datetime.now().isoformat(),
        "TICKER": random.choice(["AAPL", "AMZN", "MSFT", "INTC", "TBV"]),
        "PRICE": round(random.random() * 100, 2),
    }


def generate(stream_name, kinesis_client):
    while True:
        data = get_data()
        print(data)
        kinesis_client.put_record(
            StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey"
        )


if __name__ == "__main__":
    generate(STREAM_NAME, boto3.client("kinesis"))

2. Later in the tutorial, you run the stock.py script to send data to the application. $ python stock.py Download and examine the Apache Flink streaming Java code The Java application code for this example is available from GitHub. To download the application code, do the following: 1. Clone the remote repository using the following command: git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git 2. Navigate to the amazon-kinesis-data-analytics-java-examples/GettingStarted directory. Note the following about the application code: • A Project Object Model (pom.xml) file contains information about the application's configuration and dependencies, including the Managed Service for Apache Flink libraries. • The BasicStreamingJob.java file contains the main method that defines the application's functionality. • The application uses a Kinesis source to read from the source stream.
The following snippet creates the Kinesis source: return env.addSource(new FlinkKinesisConsumer<>(inputStreamName, new SimpleStringSchema(), inputProperties)); • Your application creates source and sink connectors to access external resources using a StreamExecutionEnvironment object. • The application creates source and sink connectors using static properties. To use dynamic application properties, use the createSourceFromApplicationProperties and createSinkFromApplicationProperties methods to create the connectors. These methods read the application's properties to configure the connectors. For more information about runtime properties, see Use runtime properties. Compile the application code In this section, you use the Apache Maven compiler to create the Java code for the application. For information about installing Apache Maven and the Java Development Kit (JDK), see Fulfill the prerequisites for completing the exercises. To compile the application code 1. To use your application code, you compile and package it into a JAR file. You can compile and package your code in one of two ways: • Use the command-line Maven tool. Create your JAR file by running the following command in the directory that contains the pom.xml file: mvn package -Dflink.version=1.11.3 • Use your development environment. See your development environment documentation for details. Note The provided source code relies on libraries from Java 11. Ensure that your project's Java version is 11. You can either upload your package as a JAR file, or you can compress your package and upload it as a ZIP file. If you create your application using the AWS CLI, you specify your code content type (JAR or ZIP). 2. If there are errors while compiling, verify that your JAVA_HOME environment variable is correctly set. If the application compiles successfully, the following file is created: target/aws-kinesis-analytics-java-apps-1.0.jar Upload the Apache Flink streaming Java code In this section, you create an Amazon Simple Storage Service (Amazon S3) bucket and upload your application
analytics-java-api-053
analytics-java-api.pdf
53
Ensure that your project's Java version is 11. You can either upload your package as a JAR file, or you can compress your package and upload it as a ZIP file. If you create your application using the AWS CLI, you specify your code content type (JAR or ZIP). 2. If there are errors while compiling, verify that your JAVA_HOME environment variable is correctly set. If the application compiles successfully, the following file is created: target/aws-kinesis-analytics-java-apps-1.0.jar Upload the Apache Flink streaming Java code In this section, you create an Amazon Simple Storage Service (Amazon S3) bucket and upload your application code. To upload the application code 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose Create bucket. 3. Enter ka-app-code-<username> in the Bucket name field. Add a suffix to the bucket name, such as your user name, to make it globally unique. Choose Next. 4. In the Configure options step, keep the settings as they are, and choose Next. 5. In the Set permissions step, keep the settings as they are, and choose Next. 6. Choose Create bucket. 7. In the Amazon S3 console, choose the ka-app-code-<username> bucket, and choose Upload. 8. In the Select files step, choose Add files. Navigate to the aws-kinesis-analytics-java-apps-1.0.jar file that you created in the previous step. Choose Next. 9. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the Managed Service for Apache Flink application You can create and run a Managed Service for Apache Flink application using either the console or the AWS CLI. Note When you create the application using the console, your AWS Identity and Access Management (IAM) and Amazon CloudWatch Logs resources are created for you. When you create the application using the AWS CLI, you create these resources separately. Topics • Create and run the application (console) • Create and run the application (AWS CLI) Create and run the application (console) Follow these steps to create, configure, update, and run the application using the console. Create the application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. On the Managed Service for Apache Flink dashboard, choose Create analytics application. 3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: • For Application name, enter MyApplication. • For Description, enter My java test app. • For Runtime, choose Apache Flink. • Leave the version pulldown as Apache Flink version 1.11 (Recommended version). 4. For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 5. Choose Create application. Note When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources.
These IAM resources are named using your application name and Region as follows: • Policy: kinesis-analytics-service-MyApplication-us-west-2 • Role: kinesis-analytics-MyApplication-us-west-2 Edit the IAM policy Edit the IAM policy to add permissions to access the Kinesis data streams. 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section. 3. On the Summary page, choose Edit policy. Choose the JSON tab. 4. Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID. { "Version": "2012-10-17", "Statement": [ { "Sid": "ReadCode", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:s3:::ka-app-code-username/aws-kinesis-analytics-java-apps-1.0.jar" ] }, { "Sid": "DescribeLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*" ] }, { "Sid": "DescribeLogStreams", "Effect": "Allow", "Action": [ "logs:DescribeLogStreams" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*" ] }, { "Sid": "PutLogEvents", "Effect": "Allow", "Action": [ "logs:PutLogEvents" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleOutputStream" } ] } Configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter aws-kinesis-analytics-java-apps-1.0.jar. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Under Properties, for Group ID, enter ProducerConfigProperties.
analytics-java-api-054
analytics-java-api.pdf
54
"Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" Getting Started: Flink 1.11.1 153 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleOutputStream" } ] } Configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter aws-kinesis-analytics-java-apps-1.0.jar. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Under Properties, for Group ID, enter ProducerConfigProperties. 5. Enter the following application properties and values: Group ID Key Value ProducerConfigProp flink.inputstream. LATEST erties initpos ProducerConfigProp aws.region us-west-2 erties ProducerConfigProp AggregationEnabled false erties 6. Under Monitoring, ensure that the Monitoring metrics level is set to Application. 7. For CloudWatch logging, select the Enable check box. 8. Choose Update. Getting Started: Flink 1.11.1 154 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication • Log stream: kinesis-analytics-log-stream Run the application The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job. Stop the application On the MyApplication page, choose Stop. Confirm the action. Update the application Using the console, you can update application settings such as application properties, monitoring settings, and the location or file name of the application JAR. You can also reload the application JAR from the Amazon S3 bucket if you need to update the application code. On the MyApplication page, choose Configure. Update the application settings and choose Update. Create and run the application (AWS CLI) In this section, you use the AWS CLI to create and run the Managed Service for Apache Flink application. a Managed Service for Apache Flink uses the kinesisanalyticsv2 AWS CLI command to create and interact with Managed Service for Apache Flink applications. Create a Permissions Policy Note You must create a permissions policy and role for your application. If you do not create these IAM resources, your application cannot access its data and log streams. Getting Started: Flink 1.11.1 155 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide First, you create a permissions policy with two statements: one that grants permissions for the read action on the source stream, and another that grants permissions for write actions on the sink stream. You then attach the policy to an IAM role (which you create in the next section). Thus, when Managed Service for Apache Flink assumes the role, the service has the necessary permissions to read from the source stream and write to the sink stream. Use the following code to create the AKReadSourceStreamWriteSinkStream permissions policy. 
Replace username with the user name that you used to create the Amazon S3 bucket to store the application code. Replace the account ID in the Amazon Resource Names (ARNs) (012345678901) with your account ID. { "Version": "2012-10-17", "Statement": [ { "Sid": "S3", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": ["arn:aws:s3:::ka-app-code-username", "arn:aws:s3:::ka-app-code-username/*" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleOutputStream" } ] } For step-by-step instructions to create a permissions policy, see Tutorial: Create and Attach Your First Customer Managed Policy in the IAM User Guide. Note To access other Amazon services, you can use the AWS SDK for Java. Managed Service for Apache Flink automatically sets the credentials required by the SDK to those of the service execution IAM role that is associated with your application. No additional steps are needed. Create an IAM Role In this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream. Managed Service for Apache Flink cannot access your stream without permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role, and the permissions policy determines what Managed Service for Apache Flink can do after assuming the role. You attach the permissions policy that you created in the preceding section to this role. To create an IAM role 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. In the navigation pane, choose Roles, Create Role. 3. Under Select type of trusted identity, choose AWS Service. Under Choose the service that will use this role, choose
analytics-java-api-055
analytics-java-api.pdf
55
these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role, and the permissions policy determines what Managed Service for Apache Flink can do after assuming the role. You attach the permissions policy that you created in the preceding section to this role. To create an IAM role 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. In the navigation pane, choose Roles, Create Role. 3. Under Select type of trusted identity, choose AWS Service. Under Choose the service that will use this role, choose Kinesis. Under Select your use case, choose Kinesis Analytics. Choose Next: Permissions. 4. On the Attach permissions policies page, choose Next: Review. You attach permissions policies after you create the role. 5. On the Create role page, enter MF-stream-rw-role for the Role name. Choose Create role. Now you have created a new IAM role called MF-stream-rw-role. Next, you update the trust and permissions policies for the role. 6. Attach the permissions policy to the role. Note For this exercise, Managed Service for Apache Flink assumes this role for both reading data from a Kinesis data stream (source) and writing output to another Kinesis data stream. So you attach the policy that you created in the previous step, the section called “Create a Permissions Policy”. a. On the Summary page, choose the Permissions tab. b. Choose Attach Policies. c. In the search box, enter AKReadSourceStreamWriteSinkStream (the policy that you created in the previous section). d. Choose the AKReadSourceStreamWriteSinkStream policy, and choose Attach policy. You now have created the service execution role that your application uses to access resources. Make a note of the ARN of the new role. For step-by-step instructions for creating a role, see Creating an IAM Role (Console) in the IAM User Guide. Create the Managed Service for Apache Flink application 1. Save the following JSON code to a file named create_request.json. Replace the sample role ARN with the ARN for the role that you created previously. Replace the bucket ARN suffix (username) with the suffix that you chose in the previous section. Replace the sample account ID (012345678901) in the service execution role with your account ID. { "ApplicationName": "test", "ApplicationDescription": "my java test app", "RuntimeEnvironment": "FLINK-1_11", "ServiceExecutionRole": "arn:aws:iam::012345678901:role/MF-stream-rw-role", "ApplicationConfiguration": { "ApplicationCodeConfiguration": { "CodeContent": { "S3ContentLocation": { "BucketARN": "arn:aws:s3:::ka-app-code-username", "FileKey": "aws-kinesis-analytics-java-apps-1.0.jar" } }, "CodeContentType": "ZIPFILE" }, "EnvironmentProperties": { "PropertyGroups": [ { "PropertyGroupId": "ProducerConfigProperties", "PropertyMap" : { "flink.stream.initpos" : "LATEST", "aws.region" : "us-west-2", "AggregationEnabled" : "false" } }, { "PropertyGroupId": "ConsumerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2" } } ] } } } 2. Execute the CreateApplication action with the preceding request to create the application: aws kinesisanalyticsv2 create-application --cli-input-json file://create_request.json The application is now created.
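If you want to script these steps instead of running the CLI by hand, here is a minimal boto3 sketch (an illustration, not an official step). It uploads the JAR built earlier, creates the application from the create_request.json file above, and reads back the status. The bucket name is the tutorial's convention with a hypothetical suffix; note that the keys in create_request.json are exactly the CreateApplication parameter names, so the file can be passed through unchanged.

import json

import boto3

REGION = "us-west-2"
BUCKET = "ka-app-code-username"  # assumption: replace "username" with your bucket suffix

# Upload the packaged application code from the Maven build output.
s3 = boto3.client("s3", region_name=REGION)
s3.upload_file(
    "target/aws-kinesis-analytics-java-apps-1.0.jar",
    BUCKET,
    "aws-kinesis-analytics-java-apps-1.0.jar",
)

# The JSON keys in create_request.json match the CreateApplication parameters,
# so the file contents can be passed through as keyword arguments.
flink = boto3.client("kinesisanalyticsv2", region_name=REGION)
with open("create_request.json") as f:
    request = json.load(f)
flink.create_application(**request)

# DescribeApplication reports the status (READY once creation completes) and
# the version ID that later UpdateApplication calls must supply.
detail = flink.describe_application(ApplicationName="test")["ApplicationDetail"]
print(detail["ApplicationStatus"], detail["ApplicationVersionId"])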
You start the application in the next step. Start the application In this section, you use the StartApplication action to start the application. To start the application 1. Save the following JSON code to a file named start_request.json. { "ApplicationName": "test", "RunConfiguration": { "ApplicationRestoreConfiguration": { "ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT" } } } 2. Execute the StartApplication action with the preceding request to start the application: aws kinesisanalyticsv2 start-application --cli-input-json file://start_request.json The application is now running. You can check the Managed Service for Apache Flink metrics on the Amazon CloudWatch console to verify that the application is working. Stop the application In this section, you use the StopApplication action to stop the application. To stop the application 1. Save the following JSON code to a file named stop_request.json. { "ApplicationName": "test" } 2. Execute the StopApplication action with the following request to stop the application: aws kinesisanalyticsv2 stop-application --cli-input-json file://stop_request.json The application is now stopped. Add a CloudWatch logging option You can use the AWS CLI to add an Amazon CloudWatch log stream to your application. For information about using CloudWatch Logs with your application, see the section called “Set up application logging in Managed Service for Apache Flink”. Update environment properties In this section, you use the UpdateApplication action to change the environment properties for the application without recompiling the application code. In this example, you change the Region of the source and destination streams. To update environment properties for the application 1. Save the following JSON code to a file named update_properties_request.json. {"ApplicationName": "test", "CurrentApplicationVersionId": 1, "ApplicationConfigurationUpdate": { "EnvironmentPropertyUpdates": { "PropertyGroups": [ { "PropertyGroupId": "ProducerConfigProperties", "PropertyMap" : { "flink.stream.initpos" : "LATEST", "aws.region" : "us-west-2",
analytics-java-api-056
analytics-java-api.pdf
56
in Managed Service for Apache Flink”. Update environment properties In this section, you use the UpdateApplication action to change the environment properties for the application without recompiling the application code. In this example, you change the Region of the source and destination streams. To update environment properties for the application 1. Save the following JSON code to a file named update_properties_request.json. {"ApplicationName": "test", "CurrentApplicationVersionId": 1, "ApplicationConfigurationUpdate": { "EnvironmentPropertyUpdates": { "PropertyGroups": [ { "PropertyGroupId": "ProducerConfigProperties", "PropertyMap" : { "flink.stream.initpos" : "LATEST", "aws.region" : "us-west-2", "AggregationEnabled" : "false" } }, { "PropertyGroupId": "ConsumerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2" } } ] } } } 2. Execute the UpdateApplication action with the preceding request to update environment properties: aws kinesisanalyticsv2 update-application --cli-input-json file://update_properties_request.json Update the application code When you need to update your application code with a new version of your code package, you use the UpdateApplication AWS CLI action. Note To load a new version of the application code with the same file name, you must specify the new object version. For more information about using Amazon S3 object versions, see Enabling or Disabling Versioning. To use the AWS CLI, delete your previous code package from your Amazon S3 bucket, upload the new version, and call UpdateApplication, specifying the same Amazon S3 bucket and object name, and the new object version. The application will restart with the new code package. The following sample request for the UpdateApplication action reloads the application code and restarts the application. Update the CurrentApplicationVersionId to the current application version. You can check the current application version using the ListApplications or DescribeApplication actions. Update the bucket name suffix (<username>) with the suffix that you chose in the section called “Create two Amazon Kinesis data streams”. { "ApplicationName": "test", "CurrentApplicationVersionId": 1, "ApplicationConfigurationUpdate": { "ApplicationCodeConfigurationUpdate": { "CodeContentUpdate": { "S3ContentLocationUpdate": { "BucketARNUpdate": "arn:aws:s3:::ka-app-code-username", "FileKeyUpdate": "aws-kinesis-analytics-java-apps-1.0.jar", "ObjectVersionUpdate": "SAMPLEUehYngP87ex1nzYIGYgfhypvDU" } } } } } Next step Step 4: Clean up AWS resources Step 4: Clean up AWS resources This section includes procedures for cleaning up AWS resources created in the Getting Started tutorial. This topic contains the following sections: • Delete your Managed Service for Apache Flink application • Delete your Kinesis data streams • Delete your Amazon S3 object and bucket • Delete your IAM resources • Delete your CloudWatch resources • Next step Delete your Managed Service for Apache Flink application 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. In the Managed Service for Apache Flink panel, choose MyApplication.
3. In the application's page, choose Delete and then confirm the deletion. Delete your Kinesis data streams 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. In the Kinesis Data Streams panel, choose ExampleInputStream. 3. In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion. 4. In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion. Delete your Amazon S3 object and bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose the ka-app-code-<username> bucket. 3. Choose Delete and then enter the bucket name to confirm deletion. Delete your IAM resources 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. In the navigation bar, choose Policies. 3. In the filter control, enter kinesis. 4. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy. 5. Choose Policy Actions and then choose Delete. 6. In the navigation bar, choose Roles. 7. Choose the kinesis-analytics-MyApplication-us-west-2 role. 8. Choose Delete role and then confirm the deletion. Delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. 4. Choose Delete Log Group and then confirm the deletion. Next step Step 5: Next steps Step 5: Next steps Now that you've created and run a basic Managed Service for Apache Flink application, see the following resources for more advanced Managed Service for Apache Flink solutions. • The AWS Streaming Data Solution for Amazon Kinesis: The AWS Streaming Data Solution for Amazon Kinesis automatically configures the AWS services necessary to easily capture, store, process, and deliver streaming data. The solution provides multiple options for solving streaming data use cases. The Managed Service for Apache Flink option provides an end-to-end streaming ETL example demonstrating a real-world application that runs analytical operations on simulated New York taxi data. The solution sets up all necessary AWS resources such as IAM roles and policies, a CloudWatch dashboard, and CloudWatch
analytics-java-api-057
analytics-java-api.pdf
57
following resources for more advanced Managed Service for Apache Flink solutions. • The AWS Streaming Data Solution for Amazon Kinesis: The AWS Streaming Data Solution for Amazon Kinesis automatically configures the AWS services necessary to easily capture, store, process, and deliver streaming data. The solution provides multiple options for solving streaming data use cases. The Managed Service for Apache Flink option provides an end-to-end streaming ETL example demonstrating a real-world application that runs analytical operations on simulated New York taxi data. The solution sets up all necessary AWS resources such as IAM roles and policies, a CloudWatch dashboard, and CloudWatch alarms. • AWS Streaming Data Solution for Amazon MSK: The AWS Streaming Data Solution for Amazon MSK provides AWS CloudFormation templates where data flows through producers, streaming storage, consumers, and destinations. • Clickstream Lab with Apache Flink and Apache Kafka: An end-to-end lab for clickstream use cases using Amazon Managed Streaming for Apache Kafka for streaming storage and Managed Service for Apache Flink for Apache Flink applications for stream processing. • Amazon Managed Service for Apache Flink Workshop: In this workshop, you build an end-to-end streaming architecture to ingest, analyze, and visualize streaming data in near real-time. You set out to improve the operations of a taxi company in New York City. You analyze the telemetry data of a taxi fleet in New York City in near real-time to optimize their fleet operations. • Learn Flink: Hands On Training: Official introductory Apache Flink training that gets you started writing scalable streaming ETL, analytics, and event-driven applications. Note Be aware that Managed Service for Apache Flink does not support the Apache Flink version (1.12) used in this training. You can use Flink 1.15.2 in Managed Service for Apache Flink. • Apache Flink Code Examples: A GitHub repository of a wide variety of Apache Flink application examples. Getting started: Flink 1.8.2 - deprecating Note Apache Flink versions 1.6, 1.8, and 1.11 have not been supported by the Apache Flink community for over three years. We plan to deprecate these versions in Amazon Managed Service for Apache Flink on November 5, 2024. Starting from this date, you will not be able to create new applications for these Flink versions. You can continue running existing applications at this time. You can upgrade your applications statefully using the in-place version upgrades feature in Amazon Managed Service for Apache Flink. For more information, see Use in-place version upgrades for Apache Flink. This topic contains a version of the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial that uses Apache Flink 1.8.2.
Topics • Components of a Managed Service for Apache Flink application • Prerequisites for completing the exercises • Step 1: Set up an AWS account and create an administrator user • Step 2: Set up the AWS Command Line Interface (AWS CLI) • Step 3: Create and run a Managed Service for Apache Flink application • Step 4: Clean up AWS resources Components of a Managed Service for Apache Flink application To process data, your Managed Service for Apache Flink application uses a Java/Apache Maven or Scala application that processes input and produces output using the Apache Flink runtime. A Managed Service for Apache Flink application has the following components: • Runtime properties: You can use runtime properties to configure your application without recompiling your application code. • Source: The application consumes data by using a source. A source connector reads data from a Kinesis data stream, an Amazon S3 bucket, etc. For more information, see Add streaming data sources. • Operators: The application processes data by using one or more operators. An operator can transform, enrich, or aggregate data. For more information, see Operators. • Sink: The application produces data to external sources by using sinks. A sink connector writes data to a Kinesis data stream, a Firehose stream, an Amazon S3 bucket, etc. For more information, see Write data using sinks. After you create, compile, and package your application code, you upload the code package to an Amazon Simple Storage Service (Amazon S3) bucket. You then create a Managed Service for Apache Flink application. You pass in the code package location, a Kinesis data stream as the streaming data source, and typically a streaming or file location that receives the application's processed data. Prerequisites for completing the exercises To complete the steps in this guide, you must have the following: • Java Development Kit (JDK) version 8. Set the JAVA_HOME
analytics-java-api-058
analytics-java-api.pdf
58
code, you upload the code package to an Amazon Simple Storage Service (Amazon S3) bucket. You then create a Managed Service for Apache Flink application. You pass in the code package location, a Kinesis data stream as the streaming data source, and typically a streaming or file location that receives the application's processed data. Prerequisites for completing the exercises To complete the steps in this guide, you must have the following: • Java Development Kit (JDK) version 8. Set the JAVA_HOME environment variable to point to your JDK install location. • To use the Apache Flink Kinesis connector in this tutorial, you must download and install Apache Flink. For details, see Using the Apache Flink Kinesis Streams connector with previous Apache Flink versions. • We recommend that you use a development environment (such as Eclipse Java Neon or IntelliJ IDEA) to develop and compile your application. • Git client. Install the Git client if you haven't already. • Apache Maven Compiler Plugin. Maven must be in your working path. To test your Apache Maven installation, enter the following: $ mvn -version To get started, go to Step 1: Set up an AWS account and create an administrator user. Step 1: Set up an AWS account and create an administrator user Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using the root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide.
Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying least-privilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Grant programmatic access Users need programmatic access if they want to interact with AWS outside of the AWS Management Console. The way to grant programmatic access depends on the type of user that's accessing AWS. To grant users programmatic access, choose one of the following options. • Workforce identity (Users managed in IAM Identity Center): Use temporary
analytics-java-api-059
analytics-java-api.pdf
59
set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Grant programmatic access Users need programmatic access if they want to interact with AWS outside of the AWS Management Console. The way to grant programmatic access depends on the type of user that's accessing AWS. To grant users programmatic access, choose one of the following options. • Workforce identity (Users managed in IAM Identity Center): Use temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs, following the instructions for the interface that you want to use. For the AWS CLI, see Configuring the AWS CLI to use AWS IAM Identity Center in the AWS Command Line Interface User Guide. For AWS SDKs, tools, and AWS APIs, see IAM Identity Center authentication in the AWS SDKs and Tools Reference Guide. • IAM: Use temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs, following the instructions in Using temporary credentials with AWS resources in the IAM User Guide. • IAM: (Not recommended) Use long-term credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs, following the instructions for the interface that you want to use. For the AWS CLI, see Authenticating using IAM user credentials in the AWS Command Line Interface User Guide. For AWS SDKs and tools, see Authenticate using long-term credentials in the AWS SDKs and Tools Reference Guide. For AWS APIs, see Managing access keys for IAM users in the IAM User Guide. Step 2: Set up the AWS Command Line Interface (AWS CLI) In this step, you download and configure the AWS CLI to use with Managed Service for Apache Flink. Note The getting started exercises in this guide assume that you are using administrator credentials (adminuser) in your account to perform the operations. Note If you already have the AWS CLI installed, you might need to upgrade to get the latest functionality. For more information, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. To check the version of the AWS CLI, run the following command: aws --version The exercises in this tutorial require the following AWS CLI version or later: aws-cli/1.16.63 To set up the AWS CLI 1. Download and configure the AWS CLI. For instructions, see the following topics in the AWS Command Line Interface User Guide: • Installing the AWS Command Line Interface • Configuring the AWS CLI 2. Add a named profile for the administrator user in the AWS CLI config file. You use this profile when executing the AWS CLI commands. For more information about named profiles, see Named Profiles in the AWS Command Line Interface User Guide.

[profile adminuser]
aws_access_key_id = adminuser access key ID
aws_secret_access_key = adminuser secret access key
region = aws-region

For a list of available Regions, see Regions and Endpoints in the Amazon Web Services General Reference.
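Beyond the aws help check in the next step, a quick programmatic sanity check (a suggestion of mine, not an official step) is to ask AWS Security Token Service who the configured profile authenticates as; a successful response confirms the credential and Region wiring:

import boto3

# Use the named profile configured above; get_caller_identity requires no
# extra permissions, so it works for any valid credentials.
session = boto3.Session(profile_name="adminuser", region_name="us-west-2")
identity = session.client("sts").get_caller_identity()
print("Account:", identity["Account"])
print("Caller ARN:", identity["Arn"])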
Step 2: Set up the AWS Command Line Interface (AWS CLI)

In this step, you download and configure the AWS CLI to use with Managed Service for Apache Flink.

Note
The getting started exercises in this guide assume that you are using administrator credentials (adminuser) in your account to perform the operations.

Note
If you already have the AWS CLI installed, you might need to upgrade to get the latest functionality. For more information, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.

To check the version of the AWS CLI, run the following command:

aws --version

The exercises in this tutorial require the following AWS CLI version or later:

aws-cli/1.16.63

To set up the AWS CLI

1. Download and configure the AWS CLI. For instructions, see the following topics in the AWS Command Line Interface User Guide:
• Installing the AWS Command Line Interface
• Configuring the AWS CLI

2. Add a named profile for the administrator user in the AWS CLI config file. You use this profile when executing the AWS CLI commands. For more information about named profiles, see Named Profiles in the AWS Command Line Interface User Guide.

[profile adminuser]
aws_access_key_id = adminuser access key ID
aws_secret_access_key = adminuser secret access key
region = aws-region

For a list of available Regions, see Regions and Endpoints in the Amazon Web Services General Reference.

Note
The example code and commands in this tutorial use the US West (Oregon) Region. To use a different AWS Region, change the Region in the code and commands for this tutorial to the Region you want to use.

3. Verify the setup by entering the following help command at the command prompt:

aws help

After you set up an AWS account and the AWS CLI, you can try the next exercise, in which you configure a sample application and test the end-to-end setup.
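The adminuser profile is not limited to the AWS CLI. As a hedged sketch (the class names below come from the AWS SDK for Java 2.x, which is an assumption, not a requirement of this tutorial), you can load the same named profile from Java code:

import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisClient;

public class ProfileCheck {
    public static void main(String[] args) {
        // Reuse the named profile created in step 2 of the CLI setup.
        try (KinesisClient kinesis = KinesisClient.builder()
                .region(Region.US_WEST_2)
                .credentialsProvider(ProfileCredentialsProvider.create("adminuser"))
                .build()) {
            // Listing streams verifies both the credentials and the Region.
            System.out.println(kinesis.listStreams().streamNames());
        }
    }
}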
Next step

Step 3: Create and run a Managed Service for Apache Flink application

Step 3: Create and run a Managed Service for Apache Flink application

In this exercise, you create a Managed Service for Apache Flink application with data streams as a source and a sink. This section contains the following steps:
• Create two Amazon Kinesis data streams
• Write sample records to the input stream
• Download and examine the Apache Flink streaming Java code
• Compile the application code
• Upload the Apache Flink streaming Java code
• Create and run the Managed Service for Apache Flink application
• Next step

Create two Amazon Kinesis data streams

Before you create a Managed Service for Apache Flink application for this exercise, create two Kinesis data streams (ExampleInputStream and ExampleOutputStream). Your application uses these streams for the application source and destination streams.

You can create these streams using either the Amazon Kinesis console or the following AWS CLI command. For console instructions, see Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide.

To create the data streams (AWS CLI)

1. To create the first stream (ExampleInputStream), use the following Amazon Kinesis create-stream AWS CLI command.

$ aws kinesis create-stream \
--stream-name ExampleInputStream \
--shard-count 1 \
--region us-west-2 \
--profile adminuser

2. To create the second stream that the application uses to write output, run the same command, changing the stream name to ExampleOutputStream.

$ aws kinesis create-stream \
--stream-name ExampleOutputStream \
--shard-count 1 \
--region us-west-2 \
--profile adminuser

Write sample records to the input stream

In this section, you use a Python script to write sample records to the stream for the application to process.

Note
This section requires the AWS SDK for Python (Boto).

1. Create a file named stock.py with the following contents:

import datetime
import json
import random
import boto3

STREAM_NAME = "ExampleInputStream"


def get_data():
    return {
        "EVENT_TIME": datetime.datetime.now().isoformat(),
        "TICKER": random.choice(["AAPL", "AMZN", "MSFT", "INTC", "TBV"]),
        "PRICE": round(random.random() * 100, 2),
    }


def generate(stream_name, kinesis_client):
    while True:
        data = get_data()
        print(data)
        kinesis_client.put_record(
            StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey"
        )


if __name__ == "__main__":
    generate(STREAM_NAME, boto3.client("kinesis"))

2. Later in the tutorial, you run the stock.py script to send data to the application.

$ python stock.py
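If you prefer to generate the sample records from Java instead of Python, the following is a minimal, hypothetical sketch with the same behavior as stock.py. It assumes the AWS SDK for Java 2.x, which this tutorial does not otherwise require; the stream name, record fields, and partition key mirror the Python script:

import java.time.Instant;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;
import java.util.Random;
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.PutRecordRequest;

public class StockProducer {
    private static final String STREAM_NAME = "ExampleInputStream";
    private static final List<String> TICKERS =
        Arrays.asList("AAPL", "AMZN", "MSFT", "INTC", "TBV");

    public static void main(String[] args) {
        Random random = new Random();
        try (KinesisClient kinesis = KinesisClient.builder()
                .region(Region.US_WEST_2)
                .build()) {
            while (true) {
                // Build a JSON record with the same fields that stock.py emits.
                String record = String.format(Locale.US,
                    "{\"EVENT_TIME\": \"%s\", \"TICKER\": \"%s\", \"PRICE\": %.2f}",
                    Instant.now(),
                    TICKERS.get(random.nextInt(TICKERS.size())),
                    random.nextDouble() * 100);
                System.out.println(record);
                kinesis.putRecord(PutRecordRequest.builder()
                    .streamName(STREAM_NAME)
                    .partitionKey("partitionkey")
                    .data(SdkBytes.fromUtf8String(record))
                    .build());
            }
        }
    }
}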
Download and examine the Apache Flink streaming Java code

The Java application code for this example is available from GitHub. To download the application code, do the following:

1. Clone the remote repository using the following command:

git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git

2. Navigate to the amazon-kinesis-data-analytics-java-examples/GettingStarted_1_8 directory.

Note the following about the application code:

• A Project Object Model (pom.xml) file contains information about the application's configuration and dependencies, including the Managed Service for Apache Flink libraries.
• The BasicStreamingJob.java file contains the main method that defines the application's functionality.
• The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source:

return env.addSource(new FlinkKinesisConsumer<>(inputStreamName,
    new SimpleStringSchema(), inputProperties));

• Your application creates source and sink connectors to access external resources using a StreamExecutionEnvironment object.
• The application creates source and sink connectors using static properties. To use dynamic application properties, use the createSourceFromApplicationProperties and createSinkFromApplicationProperties methods to create the connectors. These methods read the application's properties to configure the connectors. For more information about runtime properties, see Use runtime properties. A consolidated sketch of how these pieces fit together appears at the end of this section.

Compile the application code

In this section, you use the Apache Maven compiler to compile and package the Java code for the application. For information about installing Apache Maven and the Java Development Kit (JDK), see Prerequisites for completing the exercises.

Note
In order to use the Kinesis connector with versions of Apache Flink prior to 1.11, you need to download, build, and install Apache Maven. For more information, see the section called "Using the Apache Flink Kinesis Streams connector with previous Apache Flink versions".

To compile the application code

1. To use your application code, you compile and package it into a JAR file. You can compile and package your code in one of two ways:

• Use the command-line Maven tool. Create your JAR file by running the following command in the directory that contains the pom.xml file:

mvn package -Dflink.version=1.8.2

• Use your development environment. See your development environment documentation for details.

Note
The provided source code relies on libraries from Java 1.8. Ensure that your project's Java version is 1.8.

You can either upload your package as a JAR file, or you can compress your package and upload it as a ZIP file. If you create your application using the AWS CLI, you specify your code content type (JAR or ZIP file).
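To tie these pieces together, the following is a hedged, self-contained sketch of a job with the same shape as BasicStreamingJob; it is not the sample's actual source. The FlinkKinesisConsumer line mirrors the snippet shown earlier, while the FlinkKinesisProducer sink, the Region, and the stream names are assumptions carried over from this tutorial:

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer;
import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;

public class BasicStreamingJobSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Static source properties; a production job might instead call
        // createSourceFromApplicationProperties to read runtime properties.
        Properties inputProperties = new Properties();
        inputProperties.setProperty(AWSConfigConstants.AWS_REGION, "us-west-2");
        inputProperties.setProperty(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");

        DataStream<String> input = env.addSource(new FlinkKinesisConsumer<>(
            "ExampleInputStream", new SimpleStringSchema(), inputProperties));

        // Static sink properties; the sink class assumed here is the Flink 1.8
        // Kinesis connector's FlinkKinesisProducer.
        Properties outputProperties = new Properties();
        outputProperties.setProperty(AWSConfigConstants.AWS_REGION, "us-west-2");

        FlinkKinesisProducer<String> sink =
            new FlinkKinesisProducer<>(new SimpleStringSchema(), outputProperties);
        sink.setDefaultStream("ExampleOutputStream");
        sink.setDefaultPartition("0");

        input.addSink(sink);
        env.execute("Flink streaming Java sketch");
    }
}

This sketch simply copies records from the input stream to the output stream; the real sample may apply additional processing between the source and the sink.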