--data '{ "Origins": [ { "Position": [-123.11679620827039, 49.28147612192166] }, { "Position": [-123.11179620827039, 49.3014761219] } ], "Destinations": [ { "Position": [-123.112317039, 49.28897192166] } ], "DepartureTime": "2024-05-28T21:27:56Z", "RoutingBoundary": { "Geometry": { "AutoCircle": { "Margin": 10000, Use CalculateRouteMatrix 723 Amazon Location Service Developer Guide "MaxRadius": 30000 } } } }' AWS CLI aws geo-routes calculate-route-matrix --key ${YourKey} \ --origins '[{"Position": [-123.11679620827039, 49.28147612192166]}, {"Position": [-123.11179620827039, 49.3014761219]}]' \ --destinations '[{"Position": [-123.11179620827039, 49.28897192166]}]' \ --departure-time "2024-05-28T21:27:56Z" \ --routing-boundary '{"Geometry": {"AutoCircle": {"Margin": 10000, "MaxRadius": 30000}}}' How to calculate route matrix with avoidance The CalculateRouteMatrix API computes routes and returns travel time and distance from each origin to each destination in the specified lists. The API can be used to set avoidance options for specific areas or road features, ensuring routes avoid specified zones or conditions. If an alternative route is not feasible, the avoidance preference may be bypassed. Potential use cases • Route planning and optimization: Use route matrix as input for software that requires optimized travel routes while avoiding certain areas or road features. Examples CalculateRouteMatrix with an avoidance area Sample request { "Origins": [ { "Position": [-123.11679620827039, 49.28147612192166] } ], "Destinations": [ Use CalculateRouteMatrix 724 Amazon Location Service { "Position": [-123.112317039, 49.28897192166] } Developer Guide ], "Avoid": { "Areas": [ { "Geometry": { "BoundingBox": [ -123.116561, 49.281517, -123.110165, 49.285689 ] } } ] }, "RoutingBoundary": { "Unbounded": true } } Sample response { "ErrorCount": 0, "RouteMatrix": [ [ { "Distance": 1855, "Duration": 295 } ] ], "RoutingBoundary": { "Unbounded": true } } cURL curl --request POST \ Use CalculateRouteMatrix 725 Amazon Location Service Developer Guide --url 'https://routes.geo.eu-central-1.amazonaws.com/v2/route-matrix?key=Your_key' \ --header 'Content-Type: application/json' \ --data '{ "Origins": [ { "Position": [-123.11679620827039, 49.28147612192166] } ], "Destinations": [ { "Position": [-123.112317039, 49.28897192166] } ], "Avoid": { "Areas": [ { "Geometry": { "BoundingBox": [ -123.116561, 49.281517, -123.110165, 49.285689 ] } } ] }, "RoutingBoundary": { "Unbounded": true } }' AWS CLI aws geo-routes calculate-route-matrix --key ${YourKey} \ --origins '[{"Position": [-123.11679620827039, 49.28147612192166]}]' \ --destinations '[{"Position": [-123.112317039, 49.28897192166]}]' \ --avoid '{"Areas": [{"Geometry": {"BoundingBox": [-123.116561, 49.281517, -123.110165, 49.285689]}}]}' \ --routing-boundary '{"Unbounded": true}' Use CalculateRouteMatrix 726 Amazon Location Service Developer Guide CalculateRouteMatrix avoiding toll roads, highways, and ferries Sample request { "Origins": [ { "Position": [-123.11679620827039, 49.28147612192166] } ], "Destinations": [ { "Position": [-123.112317039, 49.28897192166] } ], "Avoid": { "TollRoads": true, "ControlledAccessHighways": true, "Ferries": true }, "RoutingBoundary": { "Unbounded": true } } Sample response { "ErrorCount": 0, "RouteMatrix": [ [ { "Distance": 1855, "Duration": 295 } ] ], "RoutingBoundary": { "Unbounded": true } } Use CalculateRouteMatrix 727 Amazon Location Service cURL Developer Guide curl --request POST \ --url 
CalculateRouteMatrix avoiding toll roads, highways, and ferries

Sample request

{
  "Origins": [
    {"Position": [-123.11679620827039, 49.28147612192166]}
  ],
  "Destinations": [
    {"Position": [-123.112317039, 49.28897192166]}
  ],
  "Avoid": {
    "TollRoads": true,
    "ControlledAccessHighways": true,
    "Ferries": true
  },
  "RoutingBoundary": {"Unbounded": true}
}

Sample response

{
  "ErrorCount": 0,
  "RouteMatrix": [
    [
      {"Distance": 1855, "Duration": 295}
    ]
  ],
  "RoutingBoundary": {"Unbounded": true}
}

cURL

curl --request POST \
  --url 'https://routes.geo.eu-central-1.amazonaws.com/v2/route-matrix?key=Your_key' \
  --header 'Content-Type: application/json' \
  --data '{
    "Origins": [{"Position": [-123.11679620827039, 49.28147612192166]}],
    "Destinations": [{"Position": [-123.112317039, 49.28897192166]}],
    "Avoid": {"TollRoads": true, "ControlledAccessHighways": true, "Ferries": true},
    "RoutingBoundary": {"Unbounded": true}
  }'

AWS CLI

aws geo-routes calculate-route-matrix --key ${YourKey} \
  --origins '[{"Position": [-123.11679620827039, 49.28147612192166]}]' \
  --destinations '[{"Position": [-123.112317039, 49.28897192166]}]' \
  --avoid '{"TollRoads": true, "ControlledAccessHighways": true, "Ferries": true}' \
  --routing-boundary '{"Unbounded": true}'

Learn how to use OptimizeWaypoints

Learn how to use OptimizeWaypoints to find the best routes for minimizing travel time or distance.

Topics
• How to optimize waypoints for a route
• How to optimize waypoints for a route with traffic awareness
• How to optimize waypoints for a route with access hours awareness

How to optimize waypoints for a route

The OptimizeWaypoints API calculates the most efficient route between a series of waypoints, minimizing either travel time or total distance. This API solves the Traveling Salesman Problem by considering road networks and traffic conditions to determine the optimal path.

Potential use cases

• Analyze service area patterns: Use waypoint optimization to make informed decisions about business service areas and improve logistics efficiency.

Examples

Optimize waypoints using Car TravelMode

Sample Request

{
  "Origin": [-123.095740, 49.274426],
  "Waypoints": [
    {"Position": [-123.115193, 49.280596]},
    {"Position": [-123.089557, 49.271774]}
  ],
  "DepartureTime": "2024-10-25T18:13:42Z",
  "Destination": [-123.095185, 49.263728],
  "TravelMode": "Car"
}

Sample Response

{
  "Connections": [
    {"Distance": 1989, "From": "Origin", "RestDuration": 0, "To": "Waypoint0", "TravelDuration": 258, "WaitDuration": 0},
    {"Distance": 3010, "From": "Waypoint0", "RestDuration": 0, "To": "Waypoint1", "TravelDuration": 298, "WaitDuration": 0},
    {"Distance": 2371, "From": "Waypoint1", "RestDuration": 0, "To": "Destination", "TravelDuration": 311, "WaitDuration": 0}
  ],
  "Distance": 7370,
  "Duration": 867,
  "ImpedingWaypoints": [],
  "OptimizedWaypoints": [
    {"DepartureTime": "2024-10-25T18:13:42Z", "Id": "Origin", "Position": [-123.09574, 49.274426]},
    {"DepartureTime": "2024-10-25T18:18:00Z", "Id": "Waypoint0", "Position": [-123.115193, 49.280596]},
    {"DepartureTime": "2024-10-25T18:22:58Z", "Id": "Waypoint1", "Position": [-123.089557, 49.271774]},
    {"ArrivalTime": "2024-10-25T18:28:09Z", "Id": "Destination", "Position": [-123.095185, 49.263728]}
  ],
  "TimeBreakdown": {"RestDuration": 0, "ServiceDuration": 0, "TravelDuration": 867, "WaitDuration": 0}
}
298, "WaitDuration": 0 }, { "Distance": 2371, "From": "Waypoint1", "RestDuration": 0, "To": "Destination", "TravelDuration": 311, "WaitDuration": 0 } ], "Distance": 7370, "Duration": 867, "ImpedingWaypoints": [], "OptimizedWaypoints": [ { "DepartureTime": "2024-10-25T18:13:42Z", "Id": "Origin", Use OptimizeWaypoints 730 Amazon Location Service Developer Guide "Position": [ -123.09574, 49.274426 ] }, { "DepartureTime": "2024-10-25T18:18:00Z", "Id": "Waypoint0", "Position": [ -123.115193, 49.280596 ] }, { "DepartureTime": "2024-10-25T18:22:58Z", "Id": "Waypoint1", "Position": [ -123.089557, 49.271774 ] }, { "ArrivalTime": "2024-10-25T18:28:09Z", "Id": "Destination", "Position": [ -123.095185, 49.263728 ] } ], "TimeBreakdown": { "RestDuration": 0, "ServiceDuration": 0, "TravelDuration": 867, "WaitDuration": 0 } } cURL curl --request POST \ --url 'https://routes.geo.eu-central-1.amazonaws.com/v2/optimize-waypoints? key=Your_key' \ Use OptimizeWaypoints 731 Amazon Location Service Developer Guide --header 'Content-Type: application/json' \ --data '{ "Origin": [ -123.095740, 49.274426 ], "Waypoints": [ { "Position": [ -123.115193, 49.280596 ] }, { "Position": [ -123.089557, 49.271774 ] } ], "DepartureTime": "2024-10-25T18:13:42Z", "Destination": [ -123.095185, 49.263728 ], "TravelMode": "Car" }' AWS CLI aws geo-routes optimize-waypoints --key ${YourKey} \ --origin -123.095740 49.274426 \ --waypoints '[{"Position": [-123.115193 , 49.280596]}, {"Position": [-123.089557 , 49.271774]}]' \ --destination -123.095185 49.263728 \ --departure-time "2024-10-25T18:13:42Z" \ --travel-mode "Car" How to optimize waypoints for a route with traffic awareness The OptimizeWaypoints API calculates the optimal route between multiple waypoints to minimize travel time or total distance. It utilizes advanced algorithms to solve the Traveling Salesman Use OptimizeWaypoints 732 Amazon Location Service Developer Guide Problem, determining the most efficient path while accounting for factors such as road networks and real-time traffic conditions. Potential use cases • Optimize multi-stop routes for delivery efficiency: Improve delivery operations by calculating the shortest or fastest route among several stops. This is useful for reducing operational costs, fuel consumption, and travel time in logistics and delivery services. 
How to optimize waypoints for a route with traffic awareness

The OptimizeWaypoints API calculates the optimal route between multiple waypoints to minimize travel time or total distance. It utilizes advanced algorithms to solve the Traveling Salesman Problem, determining the most efficient path while accounting for factors such as road networks and real-time traffic conditions.

Potential use cases

• Optimize multi-stop routes for delivery efficiency: Improve delivery operations by calculating the shortest or fastest route among several stops. This is useful for reducing operational costs, fuel consumption, and travel time in logistics and delivery services.

Examples

Optimize waypoints with traffic awareness using Car TravelMode

Sample request

{
  "Origin": [-123.095740, 49.274426],
  "Waypoints": [
    {"Position": [-123.115193, 49.280596]},
    {"Position": [-123.089557, 49.271774]}
  ],
  "DepartureTime": "2024-10-25T18:13:42Z",
  "Destination": [-123.095185, 49.263728],
  "TravelMode": "Car",
  "Traffic": {"Usage": "UseTrafficData"}
}

Sample response

{
  "Connections": [
    {"Distance": 1989, "From": "Origin", "RestDuration": 0, "To": "Waypoint0", "TravelDuration": 324, "WaitDuration": 0},
    {"Distance": 2692, "From": "Waypoint0", "RestDuration": 0, "To": "Waypoint1", "TravelDuration": 338, "WaitDuration": 0},
    {"Distance": 2371, "From": "Waypoint1", "RestDuration": 0, "To": "Destination", "TravelDuration": 395, "WaitDuration": 0}
  ],
  "Distance": 7052,
  "Duration": 1057,
  "ImpedingWaypoints": [],
  "OptimizedWaypoints": [
    {"DepartureTime": "2024-10-25T18:13:42Z", "Id": "Origin", "Position": [-123.09574, 49.274426]},
    {"ArrivalTime": "2024-10-25T18:19:06Z", "DepartureTime": "2024-10-25T18:19:06Z", "Id": "Waypoint0", "Position": [-123.115193, 49.280596]},
    {"ArrivalTime": "2024-10-25T18:24:44Z", "DepartureTime": "2024-10-25T18:24:44Z", "Id": "Waypoint1", "Position": [-123.089557, 49.271774]},
    {"ArrivalTime": "2024-10-25T18:31:19Z", "Id": "Destination", "Position": [-123.095185, 49.263728]}
  ],
  "TimeBreakdown": {"RestDuration": 0, "ServiceDuration": 0, "TravelDuration": 1057, "WaitDuration": 0}
}

cURL

curl --request POST \
  --url 'https://routes.geo.eu-central-1.amazonaws.com/v2/optimize-waypoints?key=Your_key' \
  --header 'Content-Type: application/json' \
  --data '{
    "Origin": [-123.095740, 49.274426],
    "Waypoints": [
      {"Position": [-123.115193, 49.280596]},
      {"Position": [-123.089557, 49.271774]}
    ],
    "DepartureTime": "2024-10-25T18:13:42Z",
    "Destination": [-123.095185, 49.263728],
    "TravelMode": "Car",
    "Traffic": {"Usage": "UseTrafficData"}
  }'

AWS CLI

aws geo-routes optimize-waypoints --key ${YourKey} \
  --origin -123.095740 49.274426 \
  --waypoints '[{"Position": [-123.115193, 49.280596]}, {"Position": [-123.089557, 49.271774]}]' \
  --destination -123.095185 49.263728 \
  --departure-time "2024-10-25T18:13:42Z" \
  --travel-mode "Car" \
  --traffic '{"Usage": "UseTrafficData"}'

How to optimize waypoints for a route with access hours awareness

The OptimizeWaypoints API also calculates the optimal route between a set of waypoints, with the goal of minimizing either the travel time or the total distance covered. It solves the Traveling Salesman Problem of determining the most efficient path, taking into account factors such as the road network and traffic conditions.

Potential use cases

• Analyze customer access hours: Plan for efficiency around your customer’s access hours.
Examples

Optimize waypoints with access hours awareness using Car TravelMode

Sample Request

{
  "Origin": [-123.095740, 49.274426],
  "Waypoints": [
    {
      "Position": [-123.115193, 49.280596],
      "SideOfStreet": {
        "Position": [-123.089557, 49.271774],
        "UseWith": "AnyStreet"
      },
      "AccessHours": {
        "From": {"DayOfWeek": "Saturday", "TimeOfDay": "00:02:42Z"},
        "To": {"DayOfWeek": "Friday", "TimeOfDay": "1:33:36+02:50"}
      },
      "Heading": "250",
      "ServiceDuration": "200"
    },
    {
      "Position": [-123.089557, 49.271774],
      "AccessHours": {
        "From": {"DayOfWeek": "Monday", "TimeOfDay": "00:02:42Z"},
        "To": {"DayOfWeek": "Tuesday", "TimeOfDay": "1:33:36+02:50"}
      },
      "ServiceDuration": "200"
    }
  ],
  "DepartureTime": "2024-10-25T18:13:42Z",
  "Destination": [-123.095185, 49.263728],
  "TravelMode": "Car"
}
"TimeOfDay": "1:33:36+02:50" } }, "Heading": "250", "ServiceDuration": "200" }, { "Position": [ -123.089557, 49.271774 ], "AccessHours": { "From": { "DayOfWeek": "Monday", "TimeOfDay": "00:02:42Z" }, "To": { "DayOfWeek": "Tuesday", "TimeOfDay": "1:33:36+02:50" } }, "ServiceDuration": "200" } ], "DepartureTime": "2024-10-25T18:13:42Z", "Destination": [ -123.095185, 49.263728 ], "TravelMode": "Car" } Sample Response { "Connections": [ { "Distance": 1989, "From": "Origin", "RestDuration": 0, "To": "Waypoint0", "TravelDuration": 258, "WaitDuration": 20682 Use OptimizeWaypoints 738 Amazon Location Service }, Developer Guide { "Distance": 3360, "From": "Waypoint0", "RestDuration": 0, "To": "Waypoint1", "TravelDuration": 378, "WaitDuration": 172222 }, { "Distance": 2371, "From": "Waypoint1", "RestDuration": 0, "To": "Destination", "TravelDuration": 311, "WaitDuration": 0 } ], "Distance": 7720, "Duration": 194251, "ImpedingWaypoints": [], "OptimizedWaypoints": [ { "DepartureTime": "2024-10-25T18:13:42Z", "Id": "Origin", "Position": [ -123.09574, 49.274426 ] }, { "ArrivalTime": "2024-10-25T18:18:00Z", "DepartureTime": "2024-10-26T00:06:02Z", "Id": "Waypoint0", "Position": [ -123.115193, 49.280596 ] }, { "ArrivalTime": "2024-10-26T00:12:20Z", "DepartureTime": "2024-10-28T00:06:02Z", "Id": "Waypoint1", "Position": [ Use OptimizeWaypoints 739 Amazon Location Service Developer Guide -123.089557, 49.271774 ] }, { "ArrivalTime": "2024-10-28T00:11:13Z", "Id": "Destination", "Position": [ -123.095185, 49.263728 ] } ], "TimeBreakdown": { "RestDuration": 0, "ServiceDuration": 400, "TravelDuration": 947, "WaitDuration": 192904 } } cURL curl --request POST \ --url 'https://routes.geo.eu-central-1.amazonaws.com/v2/optimize-waypoints? key=Your_key' \ --header 'Content-Type: application/json' \ --data '{ "Origin": [ -123.095740, 49.274426 ], "Waypoints": [ { "Position": [ -123.115193, 49.280596 ], "SideOfStreet": { "Position": [ -123.089557, 49.271774 ], Use OptimizeWaypoints 740 Amazon Location Service Developer Guide "UseWith": "AnyStreet" }, "AccessHours": { "From": { "DayOfWeek": "Saturday", "TimeOfDay": "00:02:42Z" }, "To": { "DayOfWeek": "Friday", "TimeOfDay": "1:33:36+02:50" } }, "Heading": "250", "ServiceDuration": "200" }, { "Position": [ -123.089557, 49.271774 ], "AccessHours": { "From": { "DayOfWeek": "Monday", "TimeOfDay": "00:02:42Z" }, "To": { "DayOfWeek": "Tuesday", "TimeOfDay": "1:33:36+02:50" } }, "ServiceDuration": "200" } ], "DepartureTime": "2024-10-25T18:13:42Z", "Destination": [ -123.095185, 49.263728 ], "TravelMode": "Car" }' Use OptimizeWaypoints 741 Amazon Location Service AWS CLI Developer Guide aws geo-routes optimize-waypoints --key ${YourKey} \ --origin -123.095740 49.274426 \ --waypoints '[{"Position": [-123.115193 , 49.280596], "SideOfStreet": {"Position": [-123.089557, 49.271774], "UseWith": "AnyStreet"}, "AccessHours": {"From": {"DayOfWeek": "Saturday", "TimeOfDay": "00:02:42Z"}, "To": {"DayOfWeek": "Friday", "TimeOfDay": "1:33:36+02:50"}}, "Heading": 250, "ServiceDuration": 200}, {"Position": [-123.089557, 49.271774], "AccessHours": {"From": {"DayOfWeek": "Monday", "TimeOfDay": "00:02:42Z"}, "To": {"DayOfWeek": "Tuesday", "TimeOfDay": "1:33:36+02:50"}}, "ServiceDuration": 200}]' \ --destination -123.095185 49.263728 \ --departure-time "2024-10-25T18:13:42Z" \ --travel-mode "Car" Learn how to use SnapToRoads This topic explains how to use SnapToRoads to align GPS traces with road networks, enhancing positional accuracy for navigation and fleet management applications. 
Learn how to use SnapToRoads

This topic explains how to use SnapToRoads to align GPS traces with road networks, enhancing positional accuracy for navigation and fleet management applications. This API corrects GPS drift and signal loss by snapping coordinates to the nearest road segments, while also respecting travel mode restrictions. Examples illustrate practical uses, such as overlaying GPS traces, filling data gaps, and reducing noise for clearer route visualization.

Topics
• How to match GPS traces to a road network

How to match GPS traces to a road network

The SnapToRoads API allows you to match GPS traces onto the road network. A GPS trace includes positions and metadata like timestamp, speed, and heading that are recorded using a GPS device. These traces often have a margin of error, making them challenging to use for analysis and visualization directly. SnapToRoads considers legal and time restrictions for the specified travel mode while matching traces. If the trace strongly suggests a restriction violation, the actual route taken is maintained.

Potential use cases

• Overlay GPS traces onto the most likely driven roads: This feature helps align GPS data to the most accurate path on the road network, supporting clearer data visualization.
• Interpolate gaps in GPS traces: SnapToRoads can fill in gaps by snapping coordinates to road segments, creating a more continuous and useful dataset for applications.
• Filter noise and outliers: By snapping to the nearest road, this API can help remove outliers and reduce GPS noise, improving data reliability for analysis.

Examples

Match GPS trace using car mode

Sample request

{
  "TracePoints": [
    {"Position": [8.53404, 50.16364], "Timestamp": "2024-05-22T18:13:42Z"},
    {"Position": [8.53379056, 50.16352417], "Speed": 20, "Timestamp": "2024-05-22T18:13:59Z"}
  ],
  "TravelMode": "Car"
}

Sample response

{
  "Notices": [],
  "SnappedGeometry": {"Polyline": "Redacted"},
  "SnappedGeometryFormat": "FlexiblePolyline",
  "SnappedTracePoints": [
    {"Confidence": 1, "OriginalPosition": [8.53404, 50.16364], "SnappedPosition": [8.53402, 50.16367]},
    {"Confidence": 0.86, "OriginalPosition": [8.53379056, 50.16352417], "SnappedPosition": [8.53375, 50.16356]}
  ]
}

cURL

curl --request POST \
  --url 'https://routes.geo.eu-central-1.amazonaws.com/v2/snap-to-roads?key=Your_key' \
  --header 'Content-Type: application/json' \
  --data '{
    "TracePoints": [
      {"Position": [8.53404, 50.16364], "Timestamp": "2024-05-22T18:13:42Z"},
      {"Position": [8.53379056, 50.16352417], "Speed": 20, "Timestamp": "2024-05-22T18:13:59Z"}
    ],
    "TravelMode": "Car"
  }'

AWS CLI

aws geo-routes snap-to-roads --key ${YourKey} \
  --trace-points '[{"Position": [8.53404, 50.16364], "Timestamp": "2024-05-22T18:13:42Z"}, {"Position": [8.53379056, 50.16352417], "Speed": 20, "Timestamp": "2024-05-22T18:13:59Z"}]' \
  --travel-mode "Car"
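For programmatic use, the same car-mode request can be sent with the AWS SDK. The following minimal boto3 sketch assumes a boto3 version that includes the geo-routes client and standard AWS credentials in place of the API key used by the cURL example; passing the timestamps as ISO 8601 strings (rather than datetime objects) is also an assumption. It prints each snapped point with its confidence.

import boto3

# Sketch: SnapToRoads via boto3 (assumes the "geo-routes" client is available).
client = boto3.client("geo-routes", region_name="eu-central-1")

response = client.snap_to_roads(
    TracePoints=[
        {"Position": [8.53404, 50.16364], "Timestamp": "2024-05-22T18:13:42Z"},
        {"Position": [8.53379056, 50.16352417], "Speed": 20, "Timestamp": "2024-05-22T18:13:59Z"},
    ],
    TravelMode="Car",
)

# Low-confidence points may indicate GPS noise or an off-road segment.
for point in response["SnappedTracePoints"]:
    print(f"{point['OriginalPosition']} -> {point['SnappedPosition']} "
          f"(confidence {point['Confidence']})")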
Match GPS trace using truck mode with options

Sample request

{
  "TracePoints": [
    {"Position": [8.53404, 50.16364], "Timestamp": "2024-05-22T18:13:42Z"},
    {"Position": [8.53379056, 50.16352417], "Speed": 20, "Timestamp": "2024-05-22T18:13:59Z"}
  ],
  "TravelMode": "Truck",
  "TravelModeOptions": {
    "Truck": {"GrossWeight": 10000}
  }
}

Sample response

{
  "Notices": [],
  "SnappedGeometry": {"Polyline": "Redacted"},
  "SnappedGeometryFormat": "FlexiblePolyline",
  "SnappedTracePoints": [
    {"Confidence": 1, "OriginalPosition": [8.53404, 50.16364], "SnappedPosition": [8.53402, 50.16367]},
    {"Confidence": 0.86, "OriginalPosition": [8.53379056, 50.16352417], "SnappedPosition": [8.53375, 50.16356]}
  ]
}

cURL

curl --request POST \
  --url 'https://routes.geo.eu-central-1.amazonaws.com/v2/snap-to-roads?key=Your_key' \
  --header 'Content-Type: application/json' \
  --data '{
    "TracePoints": [
      {"Position": [8.53404, 50.16364], "Timestamp": "2024-05-22T18:13:42Z"},
      {"Position": [8.53379056, 50.16352417], "Speed": 20, "Timestamp": "2024-05-22T18:13:59Z"}
    ],
    "TravelMode": "Truck",
    "TravelModeOptions": {"Truck": {"GrossWeight": 10000}}
  }'

AWS CLI

aws geo-routes snap-to-roads --key ${YourKey} \
  --trace-points '[{"Position": [8.53404, 50.16364], "Timestamp": "2024-05-22T18:13:42Z"}, {"Position": [8.53379056, 50.16352417], "Speed": 20, "Timestamp": "2024-05-22T18:13:59Z"}]' \
  --travel-mode "Truck" \
  --travel-mode-options '{"Truck": {"GrossWeight": 10000}}'

Manage costs and usage

As you continue learning about Amazon Location routes, it's important to understand how to manage service capacity, ensure you follow usage limits, and get the best results through quota and API optimizations. By applying best practices for performance and accuracy, you can tailor your application to handle place-related queries efficiently and maximize your API requests.

Topics
• Best Practices
• Routes pricing
• Routes Quota and Usage

Best Practices

This section covers best practices for using compression and choosing between Simple (GeoJSON) and FlexiblePolyline formats when interacting with the API, providing guidance on optimizing performance, bandwidth, and data handling.

Compression

To enhance the performance and efficiency of your applications when interacting with our API, it is recommended to enable compression for responses, especially when dealing with large text-based payloads. You can activate compression by including the Accept-Encoding header in your API requests, specifying your preferred compression method. We support gzip and deflate for their compression capabilities, with gzip typically offering better compression ratios.
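As an illustration of what this looks like at the HTTP level, the following Python sketch sends a CalculateRouteMatrix-style request with an explicit Accept-Encoding header using the third-party requests library. The endpoint, payload, and Your_key placeholder mirror the earlier cURL examples and are assumptions for this sketch; requests transparently decompresses a gzip or deflate response based on the Content-Encoding header.

import requests

# Sketch: request a compressed response from the Routes REST API.
# URL, payload, and the Your_key placeholder follow the cURL examples above.
url = "https://routes.geo.eu-central-1.amazonaws.com/v2/route-matrix?key=Your_key"
payload = {
    "Origins": [{"Position": [-123.11679620827039, 49.28147612192166]}],
    "Destinations": [{"Position": [-123.112317039, 49.28897192166]}],
    "RoutingBoundary": {"Unbounded": True},
}

resp = requests.post(
    url,
    json=payload,
    headers={"Accept-Encoding": "gzip, deflate"},  # ask the service to compress
    timeout=10,
)

# requests decompresses automatically when Content-Encoding is gzip or deflate.
print(resp.headers.get("Content-Encoding"))
print(resp.json().get("ErrorCount"))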
When to Enable Compression

Large Responses
Enable compression for large text-based responses to reduce bandwidth usage and improve load times.

Network Constraints
If your application operates over limited bandwidth or high-latency networks, compression can enhance data transfer efficiency.

How to Use Compression Effectively

Set the Accept-Encoding Header
Include Accept-Encoding: gzip, deflate in your HTTP requests to inform our API that you support these compression methods. The method to enable and handle compression varies by AWS SDK and programming language. For example, the AWS SDK for Java v1 uses the withGzip method in the ClientConfiguration class to enable gzip, while the AWS SDK for Go requires adding specific middleware for compression handling. For other SDKs, refer to the AWS SDK Reference Guide for detailed instructions.

Handle Decompression Properly
Ensure your client application can correctly decompress the responses based on the Content-Encoding header returned by our API.

Test and Monitor
Regularly evaluate the impact of compression on your application's performance, balancing the benefits of reduced payload sizes against any additional CPU overhead from decompression processes.

Polyline

This section describes best practices for choosing between the Simple (GeoJSON) and FlexiblePolyline formats when interacting with our API, to optimize both the performance and usability of your geospatial data.

Use Simple (GeoJSON) Format

Readability and Standardization
Use when you require a widely recognized and human-readable format for ease of debugging and interoperability with various geospatial tools.

Precision
Choose Simple format when your application needs high precision for coordinates, as GeoJSON maintains full decimal precision without loss.

Smaller Datasets
Simple format is ideal when working with smaller sets of coordinate data where the size reduction benefits of compression are minimal.

Use FlexiblePolyline Format

Data Size Reduction
FlexiblePolyline is ideal when you need to minimize the amount of data transmitted, especially for large lists of coordinates, by leveraging lossy compression techniques.

URL Safety
FlexiblePolyline provides a compact, URL-safe string that can be used directly in query parameters without additional encoding.

Performance Optimization
FlexiblePolyline helps reduce the payload size, leading to faster data transfer and lower bandwidth usage, making it crucial for high-performance applications or those operating over constrained networks.
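To make the trade-off concrete, here is a small, illustrative Python sketch that takes route geometry returned in the Simple format (a plain list of [longitude, latitude] pairs, as in the samples earlier in this chapter) and wraps it in a standard GeoJSON Feature for inspection in common geospatial tools. The coordinate values below are placeholders, not output from a real response; a FlexiblePolyline string would instead need a decoder such as HERE's open-source flexible-polyline libraries before it could be used this way.

import json

# Sketch: wrap Simple-format route geometry (a list of [lon, lat] pairs)
# in a GeoJSON Feature so it can be inspected in standard GIS tooling.
# The coordinates below are placeholders, not values from a real response.
simple_geometry = [
    [-123.11679620827039, 49.28147612192166],
    [-123.11431, 49.28520],
    [-123.112317039, 49.28897192166],
]

feature = {
    "type": "Feature",
    "geometry": {"type": "LineString", "coordinates": simple_geometry},
    "properties": {"source": "Amazon Location route geometry (Simple format)"},
}

print(json.dumps(feature, indent=2))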
Routes pricing

Please see below for the pricing buckets for each API.

Calculate Routes

This price is based on the number of routes calculated. CalculateRoutes has three pricing buckets: Core, Advanced, and Premium.

Core
This price bucket supports the travel modes Car, Truck, and Pedestrian, without toll cost calculation.

Advanced
This price bucket supports alternative travel modes such as Scooter, without toll cost calculation.

Premium
This price bucket supports toll cost calculation. You will be charged at the Premium price when you request toll cost calculation by setting the request parameters LegAdditionalFeatures["Tolls"] or SpanAdditionalFeatures["TollSystems"], regardless of travel mode.

Calculate Route Matrix

This price is based on the number of routes calculated. The number of routes calculated in each request is equal to the number of origins multiplied by the number of destinations: Number of Routes = Number of Origins x Number of Destinations. For example, when using a matrix size of 300 origins by 100 destinations, the total number of routes calculated is 30,000 (300 x 100 = 30,000).

Note
Route calculations are billed for each origin and destination pair. If you use a large matrix of origins and destinations, your costs will increase accordingly.

CalculateRouteMatrix has 2 pricing buckets: Core and Advanced.

Core
This price bucket supports the travel modes Car, Truck, and Pedestrian.

Advanced
This price bucket supports alternative travel modes, such as Scooter.

Optimize Waypoints

This price is based on the number of API requests. OptimizeWaypoints has 2 pricing buckets: Advanced and Premium.

Advanced
This pricing bucket supports up to 30 waypoints in a single request; travel modes Car, Truck, and Pedestrian; a bounding box of the input points within 200 km; no optional parameters such as Avoid, Clustering, Driver, Exclude.Countries, TravelModeOptions.Truck.HazardousCargos, or TravelModeOptions.Truck.TunnelRestrictionCode; and no additional waypoint or destination constraints such as AccessHours, AppointmentTime, Before, Heading, ServiceDuration, or SideOfStreet.

Note
Automatic clustering can occur when waypoints are in close proximity, but the request is still billed in the Advanced pricing bucket.

Premium
This pricing bucket supports up to 50 waypoints in a single request, with no restrictions on travel modes and with the bounding box of the input points within 500 km. It supports optional parameters such as Avoid, Clustering, Driver, Exclude.Countries, TravelModeOptions.Truck.HazardousCargos, and TravelModeOptions.Truck.TunnelRestrictionCode. In addition, this pricing bucket supports optional waypoint and destination constraints such as AccessHours, AppointmentTime, Before, Heading, ServiceDuration, and SideOfStreet.

Note
A single request can only support up to 20 waypoints if any of the optional waypoint and destination constraints is applied.
Snap-to-road

This price is based on the number of API requests. SnapToRoads has 2 pricing buckets: Advanced and Premium.

Advanced
This pricing bucket supports travel modes Car, Truck, and Pedestrian, with a TracePoints count up to 200 and with a maximum airline distance between TracePoints of 100 kilometers.

Premium
This pricing bucket has no restrictions on travel modes and supports up to 5,000 TracePoints.

Calculate Isoline

This price is based on the number of isolines calculated in the response. CalculateIsoline has 2 pricing buckets: Advanced and Premium.

Advanced
This pricing bucket supports travel modes Car, Truck, and Pedestrian, with Thresholds.Time values up to 60 minutes or Thresholds.Distance values up to 100 kilometers.

Premium
This pricing bucket has no restrictions on travel modes, with Thresholds.Time values up to 180 minutes or Thresholds.Distance values up to 300 kilometers.

Routes Quota and Usage

Service Quota

Amazon Location Service APIs have default quotas. You can increase quotas using the Service Quotas console. For limits exceeding 2x the default, request via the self-service console or contact support.

Service Quota Limits

API Name                  Default    Max Adjustable Limit    More than Adjustable Max Limit
Calculate routes          20         40                      Request on the Service Quotas console or contact the support team
Calculate isolines        20         40                      Request on the Service Quotas console or contact the support team
Snap to Roads             20         40                      Request on the Service Quotas console or contact the support team
Calculate route matrix    5          10                      Request on the Service Quotas console or contact the support team
Optimize waypoints        5          10                      Request on the Service Quotas console or contact the support team
Other Usage Limits

In addition to service quotas, the following API usage limits apply:

API Name                  Limit                                                                                                         Value
Snap to Roads             Sum of geodesic distance between all TracePoints                                                              500 km
Optimize waypoints        Sum of geodesic distance between the Origin, Waypoints in the provided ordering, and Destination              100 km
Optimize waypoints        Perimeter of the bounding box surrounding the Origin, Waypoints, and Destination                              500 km
Calculate route matrix    Max distance between Origins and Destinations for Unbounded routing (if Avoid or TravelModeOptions.Truck is used)   60 km
Calculate route matrix    Max distance between Origins and Destinations for Unbounded routing                                           10,000 km
Calculate routes          Response payload size after compression                                                                       6 MB
Calculate route matrix    Response payload size after compression                                                                       6 MB
Calculate isolines        Response payload size after compression                                                                       6 MB
Optimize waypoints        Response payload size after compression                                                                       6 MB
Snap to Roads             Response payload size after compression                                                                       6 MB

Next Steps

Please check the following for further details:
• Attribution: Information on data attribution requirements for Amazon Location Service.
• SLA: The service level agreement for Amazon Location Service, including uptime commitments and response times.
• Service Terms: Terms governing the use of Amazon Location Service, including restrictions and limitations.

Amazon Location Service Geofences

Geofence collection resources allow you to store and manage geofences - virtual boundaries on a map. You can evaluate locations against a geofence collection resource and receive notifications when the location update crosses the boundary of any of the geofences in the collection.

Geofences and geofence collection

A geofence is a polygon or circle geometry that defines a virtual boundary on a map. A geofence collection contains zero or more geofences. It's capable of geofence monitoring by emitting ENTER and EXIT events, when requested, to evaluate a device position against its geofences.

Geofence events

Locations for positions you're monitoring are referenced by an ID called a DeviceId. The positions are referred to as device positions. You can send a list of device positions to evaluate directly to the geofence collection resource, or you can use a tracker. For more information about using trackers, see Trackers.

You receive events (via Amazon EventBridge) only when a device enters or exits a geofence, not for every position change. This means that you will typically receive events, and have to respond to them, much less frequently than for every device position update.

Note
For the first location evaluation for a specific DeviceId, it is assumed that the device was previously not in any geofences.
So the first update will generate an ENTER event if the device is inside a geofence in the collection, and no event if it is not.

In order to calculate whether a device has entered or exited a geofence, Amazon Location Service must keep previous position state for the device. This position state is stored for 30 days. After 30 days without an update for a device, a new location update will be treated as the first position update.

Use cases for Amazon Location Service Geofences

The following are a few common uses for Amazon Location Service Geofences.

Improve field service operations

Keep a pulse on your mobile workforce with real-time tracking. Set geofences around customer sites and service areas to receive alerts when staff arrive and depart. Use location data to optimize scheduling, dispatch the nearest available technician, and reduce response times. Empower your field teams (such as your plumbing or HVAC repair business) to work more efficiently, while enhancing the customer experience.
Monitor and control critical assets

Utilize Amazon Location Service to track the real-time location and status of your valuable equipment, inventory, and other mobile assets. Set up geofences to receive alerts on unauthorized movements or removals, enhancing security and compliance. Use this location visibility to improve asset utilization, optimize maintenance schedules, and ensure your critical resources are accounted for at all times. Always monitor your heavy machinery, IT hardware, or retail inventory with precision, reduce losses, and make more informed operational decisions.

Enhance supply chain visibility

Leverage Amazon Location Service to track shipments and deliveries across your entire supply chain. Define geofences around distribution centers, stores, and other key facilities to monitor the movement of inventory and assets. Use real-time location data to improve inventory management, optimize logistics planning, and deliver a superior customer experience. Gain end-to-end visibility into your supply chain operations, identify bottlenecks, and make data-driven decisions that drive efficiency and responsiveness.

Strengthen safety and security

Geofencing enables you to set up virtual boundaries around secure areas, restricted zones, and other critical locations. Receive instant alerts when unauthorized personnel or assets enter or exit these predefined geofences. Leverage this real-time location monitoring to enhance workplace safety, deter trespassing, and ensure regulatory compliance. Whether you manage a manufacturing facility, construction site, or corporate campus, geofencing empowers you to maintain tighter control over access, improve incident response, and protect your people, property, and assets.

Location-based marketing

Unlock the power of location data to supercharge your geomarketing efforts. Use Amazon Location Service to set virtual boundaries around competitor locations, events, and high-traffic areas. Trigger personalized ads, offers, and notifications when customers enter these geofenced zones. Analyze foot traffic patterns to optimize ad placements and uncover prime sites for new business locations. Monitor customer movements within your own geofenced spaces to gain deeper insights on browsing behaviors and path-to-purchase. Combine real-time location tracking with precision geofencing to deliver hyper-targeted, contextual engagement that drives sales and loyalty in the physical world.

Geofence concepts

This section provides some common geofence concepts, including common terminology and how to manage geofences.

Amazon Location Service geofence terminology

Geofence collection
Contains zero or more geofences. It is capable of geofence monitoring by emitting ENTER and EXIT events, when requested, to evaluate a device position against its geofences.

Geofence
A polygon or circle geometry that defines a virtual boundary on a map.

Polygon geometry
An Amazon Location geofence is a virtual boundary for a geographical area and is represented as a polygon geometry or as a circle. A circle is a point with a distance around it. Use a circle when you want to be notified if a device is within a certain distance of a location. A polygon is an array composed of 1 or more linear rings. Use a polygon when you want to define a specific boundary for device notifications.

A linear ring is an array of four or more vertices, where the first and last vertex are the same to form a closed boundary. Each vertex is a 2-dimensional point of the form [longitude, latitude], where the units of longitude and latitude are degrees.
The vertices must be listed in counter-clockwise order around the polygon. The following is an example of a single linear external ring:

[
  [
    [-5.716667, -15.933333],
    [-14.416667, -7.933333],
    [-12.316667, -37.066667],
    [-5.716667, -15.933333]
  ]
]

Note
Amazon Location Service doesn't support polygons with more than one ring. This includes holes, islands, or multipolygons. Amazon Location also doesn't support polygons that are wound clockwise, or that cross the antimeridian.
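As an illustration of creating such a geofence programmatically, the following boto3 sketch adds the example ring above to an existing collection with the PutGeofence operation of the Geofences API. The collection name and geofence ID are placeholders, and treat the exact geometry shape as an assumption based on the polygon structure described in this section.

import boto3

# Sketch: add a polygon geofence to an existing collection.
# "ExampleCollection" and "example-polygon" are placeholder names.
location = boto3.client("location")

linear_ring = [
    [-5.716667, -15.933333],
    [-14.416667, -7.933333],
    [-12.316667, -37.066667],
    [-5.716667, -15.933333],  # first and last vertex must match to close the ring
]

location.put_geofence(
    CollectionName="ExampleCollection",
    GeofenceId="example-polygon",
    Geometry={"Polygon": [linear_ring]},  # a single counter-clockwise linear ring
)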
Get started with Amazon Location Service Geofences

Geofences are powerful tools for defining geographic boundaries and triggering actions based on location updates. This guide walks you through the process of creating and using geofence collection resources in Amazon Location. By setting up geofences and evaluating locations against them, you can monitor movement and generate automated events, such as notifications when a device enters or exits a defined area. These features are ideal for applications like fleet tracking, location-based notifications, and more.

1. Create a geofence collection resource in your AWS account.
2. Add geofences to the collection. You can use the geofence upload tool on the Amazon Location console or the Amazon Location Geofences API. For more information about available options, see Authentication. Geofences can either be defined by a polygon or by a circle. Use a polygon to find when a device enters a specific area. Use a circle to find when a device comes within a certain distance (radius) of a point.
3. You can start evaluating locations against all your geofences. When a location update crosses the boundaries of one or more geofences, your geofence collection resource emits one of the following geofence event types on Amazon EventBridge:
   • ENTER – One event is generated for each geofence where the location update crosses its boundary by entering it.
   • EXIT – One event is generated for each geofence where the location update crosses its boundary by exiting it.

For more information, see the section called “React to events with EventBridge”. You can also integrate monitoring using services such as Amazon CloudWatch and AWS CloudTrail. For more information, see the section called “Monitor with Amazon CloudWatch” and the section called “Monitor and log with AWS CloudTrail”.

For example, suppose you are tracking a fleet of trucks and want to be notified when a truck comes within a certain area of any of your warehouses. Create a geofence for the area around each warehouse. When the trucks send you updated locations, use Amazon Location Service to evaluate those positions and see if a truck has entered (or exited) one of the geofence areas.

Note
You're billed by the number of geofence collections you evaluate against. Your bill is not affected by the number of geofences in each collection. Since each geofence collection may contain up to 50,000 geofences, you may want to combine your geofences into fewer collections, where possible, to reduce your cost of geofence evaluations. The events generated will include the ID of the individual geofence in the collection, as well as the ID of the collection.

How to work with Amazon Location Service Geofences

This section provides step-by-step guidance for working with geofence-related tasks in Amazon Location. Learn how to evaluate device positions against geofences, respond to geofence events using Amazon EventBridge, and effectively manage your geofence resources. These tutorials are designed to help you implement key functionality for tracking and managing location-based events with ease.

Topics
• Evaluate device positions against geofences
• React to Amazon Location Service events with Amazon EventBridge
• Manage your geofence collection resources

Evaluate device positions against geofences

There are two ways to evaluate positions against geofences to generate geofence events:
• You can link trackers and geofence collections. For more information, see the section Link a tracker to a geofence collection.
• You can make a direct request to the geofence collection resource to evaluate one or more positions.

If you also want to track your device location history or display locations on a map, link the tracker with a geofence collection. Alternatively, you may not want to evaluate all location updates, or you don't intend to store location data in a tracker resource. If either of these is the case, you can make a direct request to the geofence collection and evaluate one or more device positions against its geofences.

Evaluating device positions against geofences generates events. You can react to these events and route them to other AWS services.
For more information about actions that you can take when receiving geofence events, see Reacting to Amazon Location Service events with Amazon EventBridge.

An Amazon Location event includes the attributes of the device position update that generates it, including the time, position, accuracy, and key-value metadata, and some attributes of the geofence that is entered or exited. For more information about the data included in a geofence event, see the section called “Event examples”.

The following examples use the AWS CLI or the Amazon Location APIs.

API

To evaluate device positions against the position of geofences using the Amazon Location APIs

Use the BatchEvaluateGeofences operation from the Amazon Location Geofences APIs. The following example uses an API request to evaluate the position of device ExampleDevice against an associated geofence collection ExampleGeofenceCollection. Replace these values with your own geofence and device IDs.

POST /geofencing/v0/collections/ExampleGeofenceCollection/positions HTTP/1.1
Content-type: application/json

{
  "DevicePositionUpdates": [
    {
      "DeviceId": "ExampleDevice",
      "Position": [-123.123, 47.123],
      "SampleTime": "2021-11-30T21:47:25.149Z",
      "Accuracy": {"Horizontal": 10.30},
      "PositionProperties": {"field1": "value1", "field2": "value2"}
    }
  ]
}

AWS CLI

To evaluate device positions against the position of geofences using AWS CLI commands

Use the batch-evaluate-geofences command. The following example uses the AWS CLI to evaluate the position of ExampleDevice against an associated geofence collection ExampleGeofenceCollection. Replace these values with your own geofence and device IDs.

aws location \
  batch-evaluate-geofences \
  --collection-name ExampleGeofenceCollection \
  --device-position-updates '[{"DeviceId":"ExampleDevice","Position":[-123.123,47.123],"SampleTime":"2021-11-30T21:47:25.149Z","Accuracy":{"Horizontal":10.30},"PositionProperties":{"field1":"value1","field2":"value2"}}]'
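The same evaluation can be done from application code with the AWS SDK for Python (boto3). The sketch below mirrors the CLI example, with the collection name, device ID, and position taken from above; it simply reports any per-update errors returned by the service, since matching ENTER and EXIT events are delivered through EventBridge rather than in this response.

import boto3

# Sketch: evaluate a device position against a geofence collection.
location = boto3.client("location")

response = location.batch_evaluate_geofences(
    CollectionName="ExampleGeofenceCollection",
    DevicePositionUpdates=[
        {
            "DeviceId": "ExampleDevice",
            "Position": [-123.123, 47.123],
            "SampleTime": "2021-11-30T21:47:25.149Z",
            "Accuracy": {"Horizontal": 10.30},
            "PositionProperties": {"field1": "value1", "field2": "value2"},
        }
    ],
)

# The call succeeds even when no geofence is crossed; ENTER/EXIT events arrive
# through EventBridge. Only per-update failures are reported here.
for error in response.get("Errors", []):
    print(f"{error['DeviceId']}: {error['Error']}")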
React to Amazon Location Service events with Amazon EventBridge

Amazon EventBridge is a serverless event bus that efficiently connects applications together using data from AWS services like Amazon Location. EventBridge receives events from Amazon Location and routes that data to targets like AWS Lambda. You can set up routing rules to determine where to send your data to build application architectures that react in real time.

Only geofence events (ENTER and EXIT events, as devices enter or leave the geofenced areas) are sent to EventBridge by default. You can also enable all filtered position update events for a tracker resource. For more information, see the section called “Enable update events for a tracker”.

For more information, see Events and Event Patterns in the Amazon EventBridge User Guide.

Topics
• Enable update events for a tracker
• Create event rules for Amazon Location
• Amazon EventBridge event examples for Amazon Location Service

Enable update events for a tracker

By default, Amazon Location sends only ENTER and EXIT geofence events to EventBridge. You can enable all filtered position UPDATE events for a tracker to be sent to EventBridge. You can do this when you create or update a tracker. For example, to update an existing tracker using the AWS CLI, you can use the following command (use the name of your tracker resource in place of MyTracker).

aws location update-tracker --tracker-name MyTracker --event-bridge-enabled

To turn off position events for a tracker, you must use the API or the Amazon Location Service console.

Create event rules for Amazon Location

You can create up to 300 rules per event bus in EventBridge to configure actions taken in response to an Amazon Location event. For example, you can create a rule for geofence events where a push notification will be sent when a phone is detected within a geofenced boundary.

To create a rule for Amazon Location events

Using the following values, create an EventBridge rule based on Amazon Location events:

• For Rule type, choose Rule with an event pattern.
• In the Event pattern box, add the following pattern:

{
  "source": ["aws.geo"],
  "detail-type": ["Location Geofence Event"]
}

To create a rule for tracker position updates, you can instead use the following pattern:

{
  "source": ["aws.geo"],
  "detail-type": ["Location Device Position Event"]
}

You can optionally specify only ENTER or EXIT events by adding a detail tag (if your rule is for tracker position updates, there is only a single EventType, so there is no need to filter on it):

{
  "source": ["aws.geo"],
  "detail-type": ["Location Geofence Event"],
  "detail": {
    "EventType": ["ENTER"]
  }
}

You can also optionally filter on properties of the position or geofence:

{
  "source": ["aws.geo"],
  "detail-type": ["Location Geofence Event"],
  "detail": {
    "EventType": ["ENTER"],
    "GeofenceProperties": {"Type": "LoadingDock"},
    "PositionProperties": {"VehicleType": "Truck"}
  }
}

• For Select targets, choose the target action to take when an event is received from Amazon Location Service. For example, use an Amazon Simple Notification Service (SNS) topic to send an email or text message when an event occurs. You first need to create an Amazon SNS topic using the Amazon SNS console. For more information, see Using Amazon SNS for user notifications.

Warning
It's best practice to confirm that the event rule was successfully applied or your automated action may not initiate as expected. To verify your event rule, initiate conditions for the event rule. For example, simulate a device entering a geofenced area.

You can also capture all events from Amazon Location, by just excluding the detail-type section. For example:

{
  "source": ["aws.geo"]
}

Note
The same event may be delivered more than one time. You can use the event id to de-duplicate the events that you receive.
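A common target for such a rule is an AWS Lambda function. The following is a minimal, illustrative Python handler for the Location Geofence Event pattern above; the DynamoDB table named ProcessedLocationEvents used for de-duplicating on the event id is an assumption for this sketch (any idempotency store would work) and is not part of Amazon Location itself.

import boto3

# Sketch: Lambda handler for "Location Geofence Event" deliveries.
# De-duplication via a DynamoDB table ("ProcessedLocationEvents") is an
# illustrative assumption; any idempotency store would work.
dedup_table = boto3.resource("dynamodb").Table("ProcessedLocationEvents")

def handler(event, context):
    # Skip events that were already processed (the same event can arrive twice).
    already_seen = dedup_table.get_item(Key={"EventId": event["id"]}).get("Item")
    if already_seen:
        return {"status": "duplicate"}
    dedup_table.put_item(Item={"EventId": event["id"]})

    detail = event["detail"]
    print(f"{detail['EventType']} geofence {detail['GeofenceId']} "
          f"by device {detail['DeviceId']} at {detail['SampleTime']}")
    return {"status": "processed"}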
Amazon EventBridge event examples for Amazon Location Service

The following is an example of an event for entering a geofence, initiated by calling BatchUpdateDevicePosition.

{
  "version": "0",
  "id": "aa11aa22-33a-4a4a-aaa5-example",
  "detail-type": "Location Geofence Event",
  "source": "aws.geo",
  "account": "636103698109",
  "time": "2020-11-10T23:43:37Z",
  "region": "eu-west-1",
  "resources": [
    "arn:aws:geo:eu-west-1:0123456789101:geofence-collection/GeofenceEvents-GeofenceCollection_EXAMPLE",
    "arn:aws:geo:eu-west-1:0123456789101:tracker/Tracker_EXAMPLE"
  ],
  "detail": {
    "EventType": "ENTER",
    "GeofenceId": "polygon_14",
    "DeviceId": "Device1-EXAMPLE",
    "SampleTime": "2020-11-10T23:43:37.531Z",
    "Position": [-123.12390073297821, 49.23433613216247],
    "Accuracy": {"Horizontal": 15.3},
    "GeofenceProperties": {"ExampleKey1": "ExampleField1", "ExampleKey2": "ExampleField2"},
    "PositionProperties": {"ExampleKey1": "ExampleField1", "ExampleKey2": "ExampleField2"}
  }
}

The following is an example of an event for exiting a geofence, initiated by calling BatchUpdateDevicePosition.

{
  "version": "0",
  "id": "aa11aa22-33a-4a4a-aaa5-example",
  "detail-type": "Location Geofence Event",
  "source": "aws.geo",
  "account": "123456789012",
  "time": "2020-11-10T23:41:44Z",
  "region": "eu-west-1",
  "resources": [
    "arn:aws:geo:eu-west-1:0123456789101:geofence-collection/GeofenceEvents-GeofenceCollection_EXAMPLE",
    "arn:aws:geo:eu-west-1:0123456789101:tracker/Tracker_EXAMPLE"
  ],
  "detail": {
    "EventType": "EXIT",
    "GeofenceId": "polygon_10",
    "DeviceId": "Device1-EXAMPLE",
    "SampleTime": "2020-11-10T23:41:43.826Z",
    "Position": [-123.08569321875426, 49.23766166742559],
    "Accuracy": {"Horizontal": 15.3},
    "GeofenceProperties": {"ExampleKey1": "ExampleField1", "ExampleKey2": "ExampleField2"},
    "PositionProperties": {"ExampleKey1": "ExampleField1", "ExampleKey2": "ExampleField2"}
  }
}

The following is an example of an event for a position update, initiated by calling BatchUpdateDevicePosition.

{
  "version": "0",
  "id": "aa11aa22-33a-4a4a-aaa5-example",
  "detail-type": "Location Device Position Event",
  "source": "aws.geo",
  "account": "123456789012",
  "time": "2020-11-10T23:41:44Z",
  "region": "eu-west-1",
  "resources": [
    "arn:aws:geo:eu-west-1:0123456789101:tracker/Tracker_EXAMPLE"
  ],
  "detail": {
    "EventType": "UPDATE",
    "TrackerName": "tracker_2",
    "DeviceId": "Device1-EXAMPLE",
    "SampleTime": "2020-11-10T23:41:43.826Z",
    "ReceivedTime": "2020-11-10T23:41:39.235Z",
    "Position": [-123.08569321875426, 49.23766166742559],
    "Accuracy": {"Horizontal": 15.3},
    "PositionProperties": {"ExampleKey1": "ExampleField1", "ExampleKey2": "ExampleField2"}
  }
}

Manage your geofence collection resources

Manage your geofence collections using the Amazon Location console, the AWS CLI, or the Amazon Location APIs.
List your geofence collection resources You can view your geofence collection list using the Amazon Location console, the AWS CLI, or the Amazon Location APIs: Console To view a list of geofence collections using the Amazon Location console 1. Open the Amazon Location console at https://console.aws.amazon.com/location/. 2. Choose Geofence collections from the left navigation pane. 3. View a list of your geofence collections under My geofence collections. API Use the ListGeofenceCollections operation from the Amazon Location Geofences APIs. The following example is an API request to get a list of geofence collections in the AWS account. Manage resources 767 Amazon Location Service Developer Guide POST /geofencing/v0/list-collections The following is an example response for ListGeofenceCollections: { "Entries": [ { "CollectionName": "ExampleCollection", "CreateTime": 2020-09-30T22:59:34.142Z, "Description": "string", "UpdateTime": 2020-09-30T23:59:34.142Z }, "NextToken": "1234-5678-9012" } CLI Use the list-geofence-collections command. The following example is an AWS CLI to get a list of geofence collections in the AWS account. aws location list-geofence-collections Get geofence collection details You can get details about any geofence collection resource in your AWS account using the Amazon Location console, the AWS CLI, or the Amazon Location APIs: Console To view the details of a geofence collection using the Amazon Location console 1. Open the Amazon Location console at https://console.aws.amazon.com/location/. 2. Choose Geofence collections from the left navigation pane. 3. Under My geofence collections, select the name link of the target geofence collection. API Use the DescribeGeofenceCollection operation from the Amazon Location Geofences APIs. Manage resources 768 Amazon Location Service Developer Guide The following example is an API request to get the geofence collection details for ExampleCollection. GET /geofencing/v0/collections/ExampleCollection The following is an example response for DescribeGeofenceCollection: { "CollectionArn": "arn:aws:geo:us-west-2:123456789012:geofence-collection/ GeofenceCollection", "CollectionName": "ExampleCollection", "CreateTime": 2020-09-30T22:59:34.142Z, "Description": "string", "KmsKeyId": "1234abcd-12ab-34cd-56ef-1234567890ab", "Tags": { "Tag1" : "Value1" }, "UpdateTime": 2020-09-30T23:59:34.142Z } CLI Use the describe-geofence-collection command. The following example is an AWS CLI to get the geofence collection details for ExampleCollection. aws location describe-geofence-collection \ --collection-name "ExampleCollection" Delete a geofence collection You can delete a geofence collection from your AWS account using the Amazon Location console, the AWS CLI, or the Amazon Location APIs. Console To delete a geofence collection using the Amazon Location console Manage resources 769 Amazon Location Service Developer Guide Warning This operation deletes the resource permanently. 1. Open the Amazon Location console at https://console.aws.amazon.com/location/. 2. Choose Geofence collections from the left navigation pane. 3. Under My geofence collection, select the target geofence collection. 4. Choose Delete geofence collection. API Use the DeleteGeofenceCollection operation from the Amazon Location APIs. The following example is an API request to delete the geofence collection ExampleCollection. 
DELETE /geofencing/v0/collections/ExampleCollection The following is an example response for DeleteGeofenceCollection: HTTP/1.1 200 CLI Use the delete-geofence-collection command. The following example is an AWS CLI command to delete the geofence collection ExampleCollection.
aws location delete-geofence-collection \
    --collection-name "ExampleCollection"

List stored geofences

You can list geofences stored in a specified geofence collection using the Amazon Location console, the AWS CLI, or the Amazon Location APIs.

Console

To view a list of geofences using the Amazon Location console

1. Open the Amazon Location console at https://console.aws.amazon.com/location/.
2. Choose Geofence collections from the left navigation pane.
3. Under My geofence collections, select the name link of the target geofence collection.
4. View geofences in the geofence collection under Geofences.

API

Use the ListGeofences operation from the Amazon Location Geofences APIs.

The following example is an API request to get a list of geofences stored in the geofence collection ExampleCollection.

POST /geofencing/v0/collections/ExampleCollection/list-geofences

The following is an example response for ListGeofences:

{
    "Entries": [
        {
            "CreateTime": "2020-09-30T22:59:34.142Z",
            "GeofenceId": "geofence-1",
            "Geometry": {
                "Polygon": [
                    [
                        [-5.716667, -15.933333],
                        [-14.416667, -7.933333],
                        [-12.316667, -37.066667],
                        [-5.716667, -15.933333]
                    ]
                ]
            },
            "Status": "ACTIVE",
            "UpdateTime": "2020-09-30T23:59:34.142Z"
        }
    ],
    "NextToken": "1234-5678-9012"
}

CLI

Use the list-geofences command.

The following example is an AWS CLI command to get a list of geofences stored in the geofence collection ExampleCollection.

aws location list-geofences \
    --collection-name "ExampleCollection"

Get geofence details

You can get the details of a specific geofence, such as the create time, update time, geometry, and status, from a geofence collection using the Amazon Location console, the AWS CLI, or the Amazon Location APIs.

Console

To view the status of a geofence using the Amazon Location console

1. Open the Amazon Location console at https://console.aws.amazon.com/location/.
2. Choose Geofence collections from the left navigation pane.
3. Under My geofence collections, select the name link of the target geofence collection.
4. Under Geofences, you'll be able to view the status of your geofences.

API

Use the GetGeofence operation from the Amazon Location Geofences APIs.

The following example is an API request to get the geofence details from a geofence collection ExampleCollection.
GET /geofencing/v0/collections/ExampleCollection/geofences/ExampleGeofence1 The following is an example response for GetGeofence: { "CreateTime": 2020-09-30T22:59:34.142Z, Manage resources 772 Amazon Location Service Developer Guide "GeofenceId": "ExampleGeofence1", "Geometry": { "Polygon": [ [-1,-1], [1,-1], [0,1], [-1,-1] ] }, "Status": "ACTIVE", "UpdateTime": 2020-09-30T23:59:34.142Z } CLI Use the get-geofence command. The following example is an AWS CLI to get the geofence collection details for ExampleCollection. aws location get-geofence \ --collection-name "ExampleCollection" \ --geofence-id "ExampleGeofence1" Delete geofences You can delete geofences from a geofence collection using the Amazon Location console, the AWS CLI, or the Amazon Location APIs. Console To delete a geofence using the Amazon Location console Warning This operation deletes the resource permanently. 1. Open the Amazon Location console at https://console.aws.amazon.com/location/. 2. Choose Geofence collections from the left navigation pane. Manage resources 773 Amazon Location Service Developer Guide 3. Under My geofence collection, select the name link of the target geofence collection. 4. Under Geofences, select the target geofence. 5. Choose Delete geofence. API Use the BatchDeleteGeofence operation from the Amazon Location Geofences APIs. The following example is an API request to delete geofences from the geofence collection ExampleCollection. POST /geofencing/v0/collections/ExampleCollection/delete-geofences Content-type: application/json { "GeofenceIds": [ "ExampleGeofence11" ] } The following is an example success response for BatchDeleteGeofence. HTTP/1.1 200 CLI Use the batch-delete-geofence command. The following example is an AWS CLI command to delete geofences from the geofence collection ExampleCollection. aws location batch-delete-geofence \ --collection-name "ExampleCollection" \ --geofence-ids "ExampleGeofence11" Manage costs and usage As you continue learning about Amazon Location Geofences, it's important to understand how to manage service capacity, ensure you follow usage limits, and get the best results through quota Manage costs and usage 774 Amazon Location Service Developer Guide and API optimizations. By applying best practices for performance and accuracy, you can tailor your application to handle place-related queries efficiently and maximize your API requests. Topics • Geofences pricing • Geofences quotas and usage Geofences pricing For pricing information for tracking and geofencing APIs, see the Amazon Location Service pricing page. Position Evaluation You can use BatchEvaluateGeofences to evaluate device positions against the geofence geometries from a given geofence collection. One request will evaluate up to ten device positions against all geofences in a single geofence collection. Price is based on the number of device positions in your
Guide and API optimizations. By applying best practices for performance and accuracy, you can tailor your application to handle place-related queries efficiently and maximize your API requests. Topics • Geofences pricing • Geofences quotas and usage Geofences pricing For pricing information for tracking and geofencing APIs, see the Amazon Location Service pricing page. Position Evaluation You can use BatchEvaluateGeofences to evaluate device positions against the geofence geometries from a given geofence collection. One request will evaluate up to ten device positions against all geofences in a single geofence collection. Price is based on the number of device positions in your API requests. Unit price per device position evaluated is based on the total monthly usage volume. See the Amazon Location Service pricing page for details on unit price and volume tiers. You can optimize your Position Evaluation cost by configuring the device position update frequency (also known as ping rate) from your tracking devices, and leveraging the filtering feature on Trackers to only evaluate relevant position updates. Geofence Management and Storage You can use GetGeofence, PutGeofence, BatchPutGeofence, and BatchDeleteGeofence to manage your geofences in a geofence collection. The price for these APIs is based on the number of geofences in your API requests. The storage for geofences will be charged monthly (only for geofences you store for more than one month). You can also manage your Geofence Collection using the following APIs: CreateGeofenceCollection, DeleteGeofenceCollection, DescribeGeofenceCollection, ListGeofenceCollections, UpdateGeofenceCollection, and ListGeofences. The price for these APIs is based on the number of API requests. Geofence Event Forecast Pricing 775 Amazon Location Service Developer Guide You can use ForecastGeofenceEvents to forecast future geofence events that are likely to occur within a specified time horizon if a device continues moving at its current speed. The price is based on number of API requests. Geofences quotas and usage This topic provides a summary of rate limits and quotas for Amazon Location Service Geofences. Note If you require a higher quota, you can use the Service Quotas console to request quota increases for adjustable quotas. When requesting a quota increase, select the Region you require the quota increase in, since most quotas are specific to the AWS Region. You can request up to twice the default limit for each API. For requests that exceed twice the default limit, your request will submit a support ticket. You can also connect to your premium support team. There are no direct charges for quota increase requests, but higher usage levels may lead to increased service costs based on the additional resources consumed. See the section called “Manage quotas” for more information. Service Quotas are maximum number of resources you can have per AWS account and AWS Region. Amazon Location Service denies additional requests that exceed the service quota. Resources API name Collection resources per account Default 1500 Max adjustable limit 3000 If you need more than this, request quota increases or contact the support team. Geofences per collection 50000 Contact the support team. Quotas and usage 776 Amazon Location Service CRUD API Note Developer Guide If you need a higher limit for any of these APIs, request quota increases or contact the support team. 
API name                      Default    Max adjustable limit
CreateGeofenceCollection      10         20
DeleteGeofenceCollection      10         20
DescribeGeofenceCollection    10         20
ListGeofenceCollections       10         20
UpdateGeofenceCollection      10         20

Data API

Note
If you need a higher limit for any of these APIs, request quota increases or contact the support team.

API name                      Default    Max adjustable limit
BatchEvaluateGeofences        50         100
PutGeofence                   50         100
BatchPutGeofence              50         100
ListGeofences                 50         100
GetGeofence                   50         100
BatchDeleteGeofence           50         100

Other usage limits

Amazon Location Service trackers

Note
Tracker storage is encrypted with AWS owned keys automatically. You can add another layer of encryption using KMS keys that you manage, to ensure that only you can access your data. For more information, see the section called “Data at rest encryption”.

A tracker stores position updates for a collection of devices. The tracker can be used to query the devices' current location or location history. It stores the updates, but reduces storage space and visual
noise by filtering the locations before storing them. Each position update stored in your tracker resources can include a measure of position accuracy and up to 3 fields of metadata about the position or device that you want to store. The metadata is stored as key-value pairs, and can store information such as speed, direction, tire pressure, or engine temperature. Tracker position filtering and query are useful on their own, but trackers are especially useful when paired with geofences. You can link trackers to one or more of your geofence collection resources, and position updates are evaluated automatically against the geofences in those collections. Proper use of filtering can greatly reduce the costs of your geofence evaluations, as well. 1. First, you create a tracker resource in your AWS account. 2. Next, decide how you send location updates to your tracker resources. Use AWS SDKs to integrate tracking capabilities into your mobile applications. Alternately, you can use MQTT by following step-by-step directions in tracking using MQTT. 3. You can now use your tracker resource to record location history and visualize it on a map. 4. You can also link your tracker resource to one or more geofence collections so that every position update sent to your tracker resource is automatically evaluated against all the geofence 779 Amazon Location Service Developer Guide in all the linked geofence collections. You can link resource on the tracker resource details page of the Amazon Location console or by using the Amazon Location Trackers API. 5. You can then integrate monitoring using services such as Amazon CloudWatch and AWS CloudTrail. For more information see, the section called “Monitor with Amazon CloudWatch” and the section called “Monitor and log with AWS CloudTrail”. Features • Position filtering – Trackers can automatically filter the positions that are sent to them. There are several reasons why you might want to filter out some of your device location updates. If you have a system that only sends reports every minute or so, you might want to filter devices by time, storing and evaluating positions only every 30 seconds. Even if you are monitoring more frequently, you might want to filter position updates to clean up the inherent noisiness associated with GPS hardware and position reporting. Their accuracy is not 100% perfect, so even a device that is stationary appears to be moving around slightly. At low speeds, this jitter causes visual clutter and can cause false entry and exit events if the device is near the edge of a geofence. The position filtering works as position updates are received by a tracker, reducing visual noise in your device paths (jitter), reducing the number of false geofence entry and exit events, and helping manage costs by reducing the number of position updates stored and geofence evaluations triggered. Trackers offer three position filtering options to help manage costs and reduce jitter in your location updates. • Accuracy-based – Use with any device that provides an accuracy measurement. Most GPS and mobile devices provide this information. The accuracy of each position measurement is affected by many environmental factors, including GPS satellite reception, landscape, and the proximity of WiFi and Bluetooth devices. Most devices, including most mobile devices, can provide an estimate of the accuracy of the measurement along with the measurement. 
With AccuracyBased filtering, Amazon Location ignores location updates if the device moved less than the measured accuracy. For example, if two consecutive updates from a device have an accuracy range of 5 m and 10 m, Amazon Location ignores the second update if the device has moved less than 15 m. Amazon Location neither evaluates ignored updates against geofences, nor stores them. Features 780 Amazon Location Service Developer Guide When accuracy is not provided, it is treated as zero, and the measurement is considered perfectly accurate, and no filtering will be applied to the updates. Note You can use accuracy-based filtering to remove all filtering. If you select accuracy- based filtering, but override all accuracy data to zero, or omit the accuracy entirely, then Amazon Location will not filter out any updates. • Distance-based – Use when your devices do not provide an accuracy measurement, but you still want to take advantage of filtering to reduce jitter and manage costs. DistanceBased filtering ignores location updates in which devices have moved less than 30 m (98.4 ft). When you use DistanceBased position filtering, Amazon Location neither evaluates these ignored updates against geofences nor stores the updates. The accuracy of most mobile devices, including the average accuracy of iOS and Android devices, is within 15 m. In most applications, DistanceBased filtering can reduce the effect of location inaccuracies when displaying device trajectory on a map, and the bouncing effect of multiple consecutive entry and exit events when devices are near the border of a
of filtering to reduce jitter and manage costs. DistanceBased filtering ignores location updates in which devices have moved less than 30 m (98.4 ft). When you use DistanceBased position filtering, Amazon Location neither evaluates these ignored updates against geofences nor stores the updates. The accuracy of most mobile devices, including the average accuracy of iOS and Android devices, is within 15 m. In most applications, DistanceBased filtering can reduce the effect of location inaccuracies when displaying device trajectory on a map, and the bouncing effect of multiple consecutive entry and exit events when devices are near the border of a geofence. It can also help reduce the cost of your application, by making fewer calls to evaluate against linked geofences or retrieve device positions. Distance-based filtering is useful if you want to filter, but your device doesn't provide accuracy measurements, or you want to filter out a larger number of updates than with accuracy-based. • Time-based – (default) Use when your devices send position updates very frequently (more than once every 30 seconds), and you want to achieve near real-time geofence evaluations without storing every update. In TimeBased filtering, every location update is evaluated against linked geofence collections, but not every location update is stored. If your update frequency is more often than 30 seconds, only one update per 30 seconds is stored for each unique device ID. Time-based filtering is particularly useful when you want to store fewer positions, but want every position update to be evaluated against the associated geofence collections. Features 781 Amazon Location Service Developer Guide Note Be mindful of the costs of your tracking application when deciding your filtering method and the frequency of position updates. You are billed for every location update and once for evaluating the position update against each linked geofence collection. For example, when using time-based filtering, if your tracker is linked to two geofence collections, every position update will count as one location update request and two geofence collection evaluations. If you are reporting position updates every 5 seconds for your devices and using time-based filtering, you will be billed for 720 location updates and 1,440 geofence evaluations per hour for each device. Use cases for Amazon Location Service trackers The following are a few common uses for Amazon Location Service trackers. Use trackers with geofences Trackers provide additional functionality when paired with geofences. You associate a tracker with a geofence collection, either through the Amazon Location console or the API, to automatically evaluate tracker locations. Each time the tracker receives an updated location, that location will be evaluated against each geofence in the collection, and the appropriate ENTER and EXIT events are generated in Amazon EventBridge. You can also apply filtering to the tracker, and, depending on the filtering, you can reduce the costs for geofence evaluations by only evaluating meaningful location updates. If you associate the tracker with a geofence collection after it has already received some position updates, the first position update after association is treated as an initial update for the geofence evaluations. If it is within a geofence, you will receive an ENTER event. If it is not within any geofences you will not receive an EXIT event, regardless of the previous state. 
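For example, if you configure an AWS Lambda function as the EventBridge target for these geofence events, a handler along the following lines can branch on ENTER and EXIT. This is a minimal sketch: the event fields match the geofence event examples shown earlier, and the notify helper is a hypothetical placeholder for your own action, such as publishing to an Amazon SNS topic.

// Minimal AWS Lambda handler (Node.js) for the "Location Geofence Event" detail type.
export const handler = async (event) => {
  const { EventType, GeofenceId, DeviceId, Position, SampleTime } = event.detail;

  if (EventType === "ENTER") {
    await notify(`Device ${DeviceId} entered geofence ${GeofenceId} at ${SampleTime}`);
  } else if (EventType === "EXIT") {
    await notify(`Device ${DeviceId} exited geofence ${GeofenceId} at ${SampleTime}`);
  }

  // Position is [longitude, latitude]; log it for troubleshooting.
  console.log(`Last reported position: ${JSON.stringify(Position)}`);
};

// Hypothetical stand-in for your own action (for example, an SNS publish or a database write).
async function notify(message) {
  console.log(message);
}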
Improve field service operations Keep a pulse on your mobile workforce with real-time tracking. Set geofences around customer sites and service areas to receive alerts when staff arrive and depart. Use location data to optimize scheduling, dispatch the nearest available technician, and reduce response times. Empower your field teams (such as a your plumbing or HVAC repair business) to work more efficiently, while enhancing the customer experience. Use cases 782 Amazon Location Service Developer Guide Monitor and control critical assets Utilize Amazon Location Service to track the real-time location and status of your valuable equipment, inventory, and other mobile assets. Set up geofences to receive alerts on unauthorized movements or removals, enhancing security and compliance. Use this location visibility to improve asset utilization, optimize maintenance schedules, and ensure your critical resources are accounted for at all times. Always monitor your heavy machinery, IT hardware, or retail inventory with precision, reduce losses, and make more informed operational decisions. Enhance supply chain visibility Leverage Amazon Location Service to track shipments and deliveries across your entire supply chain. Define geofences around distribution centers, stores, and other key facilities to monitor the movement of inventory and assets. Use real-time location data to improve inventory management, optimize logistics planning, and deliver a superior customer experience. Gain end-to-end visibility into your supply chain operations, identify bottlenecks, and make data-driven decisions that drive efficiency and responsiveness. Location-based marketing Unlock the power of location data to supercharge your geomarketing efforts. Use Amazon Location Service to set virtual boundaries around competitor locations, events, and high-traffic areas. Trigger personalized ads, offers, and
Amazon Location Service to track shipments and deliveries across your entire supply chain. Define geofences around distribution centers, stores, and other key facilities to monitor the movement of inventory and assets. Use real-time location data to improve inventory management, optimize logistics planning, and deliver a superior customer experience. Gain end-to-end visibility into your supply chain operations, identify bottlenecks, and make data-driven decisions that drive efficiency and responsiveness. Location-based marketing Unlock the power of location data to supercharge your geomarketing efforts. Use Amazon Location Service to set virtual boundaries around competitor locations, events, and high-traffic areas. Trigger personalized ads, offers, and notifications when customers enter these geofenced zones. Analyze foot traffic patterns to optimize ad placements and uncover prime sites for new business locations. Monitor customer movements within your own geofenced spaces to gain deeper insights on browsing behaviors and path-to-purchase. Combine real-time location tracking with precision geofencing to deliver hyper-targeted, contextual engagement that drives sales and loyalty in the physical world. Tracker concepts This section details common trackers concepts. Common Amazon Location Service trackers terminology Tracker resource An AWS resource that receives location updates from devices. The tracker resource provides support for location queries, such as current and historic device location. Linking a tracker Tracker concepts 783 Amazon Location Service Developer Guide resource to a geofence collection evaluates location updates against all geofences in the linked geofence collection automatically. Position data tracked A tracker resource stores information about your devices over time. The information includes a series of position updates, where each update includes location, time, and optional metadata. The metadata can include a position's accuracy, and up to three key-value pairs to help you track key information about each position, such as speed, direction, tire pressure, remaining fuel, or engine temperature of the vehicle you are tracking. Trackers maintain device location history for 30 days. Position filtering Position filtering can help you control costs and improve the quality of your tracking application by filtering out position updates that don't provide valuable information before the updates are stored or evaluated against geofences. You can choose AccuracyBased, DistanceBased, or TimeBased filtering. By default, position filtering is set to TimeBased. You can configure position filtering when you create or update tracker resources. RFC 3339 timestamp format Amazon Location Service trackers use the RFC 3339 format, which follows the International Organization for Standardization (ISO) 8601 format for dates and time. The format is "YYYY-MM-DDThh:mm:ss.sssZ+00:00": • YYYY-MM-DD — Represents the date format. • T — Indicates that the time values will follow. • hh:mm:ss.sss — Represents the time in 24-hour format. • Z — Indicates that the time zone used is UTC, which can be followed with deviations from the UTC time zone. • +00:00 — Optionally indicate deviations from the UTC time zone. For example, +01:00 indicates UTC + 1 hour. Example For July 2, 2020, at 12:15:20 in the afternoon, with an adjustment of an additional 1 hour to the UTC time zone. 
Common terminology 784 Amazon Location Service Developer Guide 2020-07-02T12:15:20.000Z+01:00 Get started with Amazon Location Service trackers This section provides a comprehensive guide to creating and using trackers with Amazon Location. Trackers allow you to store, process, and evaluate device positions while filtering location updates to reduce noise and manage costs. With advanced position filtering options, support for linked geofence collections, and integration with AWS services like EventBridge and IoT Core, trackers enable accurate real-time tracking and geofencing applications tailored to your specific needs. Topics • Create a tracker • Authenticating your requests • Update your tracker with a device position • Get a device's location history from a tracker • List your device positions Create a tracker Create a tracker resource to store and process position updates from your devices. You can use the Amazon Location Service console, the AWS CLI, or the Amazon Location APIs. Each position update stored in your tracker resources can include a measure of position accuracy, and up to three fields of metadata about the position or device that you want to store. The metadata is stored as key-value pairs, and can store information such as speed, direction, tire pressure, or engine temperature. Trackers filter position updates as they are received. This reduces visual noise in your device paths (called jitter), and reduces the number of false geofence entry and exit events. This also helps manage costs by reducing the number of geofence evaluations initiated. Trackers offer three position filtering options to help manage costs and reduce jitter in your location updates. • Accuracy-based – Use with any device that provides an accuracy measurement. Most mobile devices provide this information. The accuracy of each position measurement is affected by many Get started 785 Amazon Location Service Developer Guide environmental factors, including GPS satellite reception, landscape, and the proximity of Wi-Fi and Bluetooth devices. Most
noise in your device paths (called jitter), and reduces the number of false geofence entry and exit events. This also helps manage costs by reducing the number of geofence evaluations initiated. Trackers offer three position filtering options to help manage costs and reduce jitter in your location updates. • Accuracy-based – Use with any device that provides an accuracy measurement. Most mobile devices provide this information. The accuracy of each position measurement is affected by many Get started 785 Amazon Location Service Developer Guide environmental factors, including GPS satellite reception, landscape, and the proximity of Wi-Fi and Bluetooth devices. Most devices, including most mobile devices, can provide an estimate of the accuracy of the measurement along with the measurement. With AccuracyBased filtering, Amazon Location ignores location updates if the device moved less than the measured accuracy. For example, if two consecutive updates from a device have an accuracy range of 5 m and 10 m, Amazon Location ignores the second update if the device has moved less than 15 m. Amazon Location neither evaluates ignored updates against geofences, nor stores them. When accuracy is not provided, it is treated as zero, and the measurement is considered perfectly accurate. Note You can also use accuracy-based filtering to remove all filtering. If you select accuracy- based filtering, but override all accuracy data to zero, or omit the accuracy entirely, then Amazon Location will not filter out any updates. • Distance-based – Use when your devices do not provide an accuracy measurement, but you still want to take advantage of filtering to reduce jitter and manage costs. DistanceBased filtering ignores location updates in which devices have moved less than 30 m (98.4 ft). When you use DistanceBased position filtering, Amazon Location neither evaluates these ignored updates against geofences nor stores the updates. The accuracy of most mobile devices, including the average accuracy of iOS and Android devices, is within 15 m. In most applications, DistanceBased filtering can reduce the effect of location inaccuracies when displaying device trajectory on a map, and the bouncing effect of multiple consecutive entry and exit events when devices are near the border of a geofence. It can also help reduce the cost of your application, by making fewer calls to evaluate against linked geofences or retrieve device positions. • Time-based – (default) Use when your devices send position updates very frequently (more than once every 30 seconds), and you want to achieve near real-time geofence evaluations without storing every update. In TimeBased filtering, every location update is evaluated against linked geofence collections, but not every location update is stored. If your update frequency is more often than 30 seconds, only one update per 30 seconds is stored for each unique device ID. Create a tracker 786 Amazon Location Service Developer Guide Note Be mindful of the costs of your tracking application when deciding your filtering method and the frequency of position updates. You are billed for every location update and once for evaluating the position update against each linked geofence collection. For example, when using time-based filtering, if your tracker is linked to two geofence collections, every position update will count as one location update request and two geofence collection evaluations. 
If you are reporting position updates every 5 seconds for your devices and using time-based filtering, you will be billed for 720 location updates and 1,440 geofence evaluations per hour for each device. Your bill is not affected by the number of geofences in each collection. Since each geofence collection may contain up to 50,000 geofences, you may want to combine your geofences into fewer collections, where possible, to reduce your cost of geofence evaluations. By default, you will get EventBridge events each time a tracked device enters or exits a linked geofence. For more information, see Link a tracker to a geofence collection. You can enable events for all filtered position updates for a tracker resource. For more information, see the section called “Enable update events for a tracker”. Note If you wish to encrypt your data using your own AWS KMS customer managed key, then the Bounding Polygon Queries feature will be disabled by default. This is because by using this Bounding Polygon Queries feature, a representation of your device positions will not be encrypted using your AWS KMS managed key. However, the exact device position is still encrypted using your managed key. You can choose to opt-in to the Bounding Polygon Queries feature by setting the KmsKeyEnableGeospatialQueries parameter to true when creating or updating a Tracker. Console To create a tracker using the Amazon Location console 1. Open the Amazon Location Service console at https://console.aws.amazon.com/location/. Create a tracker 787 Amazon Location Service Developer Guide 2. In the left navigation pane, choose Trackers. 3. Choose Create tracker. 4. Fill the following fields: • Name –
a representation of your device positions will not be encrypted using your AWS KMS managed key. However, the exact device position is still encrypted using your managed key. You can choose to opt-in to the Bounding Polygon Queries feature by setting the KmsKeyEnableGeospatialQueries parameter to true when creating or updating a Tracker. Console To create a tracker using the Amazon Location console 1. Open the Amazon Location Service console at https://console.aws.amazon.com/location/. Create a tracker 787 Amazon Location Service Developer Guide 2. In the left navigation pane, choose Trackers. 3. Choose Create tracker. 4. Fill the following fields: • Name – Enter a unique name. For example, ExampleTracker. Maximum 100 characters. Valid entries include alphanumeric characters, hyphens, periods, and underscores. • Description – Enter an optional description. 5. Under Position filtering, choose the option that best fits how you intend to use your tracker resource. If you do not set Position filtering, the default setting is TimeBased. For more information, see Trackers in this guide, and PositionFiltering in the Amazon Location Service Trackers API Reference. 6. 7. 8. (Optional) Under Tags, enter a tag Key and Value. This adds a tag your new geofence collection. For more information, see the section called “How to use tags”. (Optional) Under Customer managed key encryption, you can choose to Add a customer managed key. This adds a symmetric customer managed key that you create, own, and manage over the default AWS owned encryption. For more information, see Encrypting data at rest. (Optional) Under KmsKeyEnableGeospatialQueries, you can choose to enable Geospatial Queries. This allows you use the Bounding Polygon Queries feature, while encrypting your data using a customer AWS KMS managed key. Note When you use the Bounding Polygon Queries feature a representation of your device positions is not be encrypted using your AWS KMS managed key. However, the exact device position is still encrypted using your managed key. 9. (Optional) Under EventBridge configuration, you can choose to enable EventBridge events for filtered position updates. This will send an event each time a position update for a device in this tracker meets the position filtering evaluation. 10. Choose Create tracker. API To create a tracker by using the Amazon Location APIs Create a tracker 788 Amazon Location Service Developer Guide Use the CreateTracker operation from the Amazon Location Trackers APIs. The following example uses an API request to create a tracker called ExampleTracker. The tracker resource is associated with a customer managed AWS KMS key to encrypt customer data, and does not enable position updates in EventBridge. POST /tracking/v0/trackers Content-type: application/json { "TrackerName": "ExampleTracker", "Description": "string", "KmsKeyEnableGeospatialQueries": false, "EventBridgeEnabled": false, "KmsKeyId": "1234abcd-12ab-34cd-56ef-1234567890ab", "PositionFiltering": "AccuracyBased", "Tags": { "string" : "string" } } Create a tracker with KmsKeyEnableGeospatialQueries enabled The following example has the parameter KmsKeyEnableGeospatialQueries set to true. This allows you use the Bounding Polygon Queries feature, while encrypting your data using a customer AWS KMS managed key. 
For information on using the Bounding Polygon Queries feature, see the section called “List your device positions” Note When you use the Bounding Polygon Queries feature a representation of your device positions is not be encrypted using your AWS KMS managed key. However, the exact device position is still encrypted using your managed key. POST /tracking/v0/trackers Content-type: application/json Create a tracker 789 Amazon Location Service Developer Guide { "TrackerName": "ExampleTracker", "Description": "string", "KmsKeyEnableGeospatialQueries": true, "EventBridgeEnabled": false, "KmsKeyId": "1234abcd-12ab-34cd-56ef-1234567890ab", "PositionFiltering": "AccuracyBased", "Tags": { "string" : "string" } } AWS CLI To create a tracker using AWS CLI commands Use the create-tracker command. The following example uses the AWS CLI to create a tracker called ExampleTracker. The tracker resource is associated with a customer managed AWS KMS key to encrypt customer data, and does not enable position updates in EventBridge. aws location \ create-tracker \ --tracker-name "ExampleTracker" \ --position-filtering "AccuracyBased" \ --event-bridge-enabled false \ --kms-key-enable-geospatial-queries false \ --kms-key-id "1234abcd-12ab-34cd-56ef-1234567890ab" Create a tracker with KmsKeyEnableGeospatialQueries enabled The following example has the parameter KmsKeyEnableGeospatialQueries set to true. This allows you use the Bounding Polygon Queries feature, while encrypting your data using a customer AWS KMS managed key. For information on using the Bounding Polygon Queries feature, see the section called “List your device positions” Create a tracker 790 Amazon Location Service Developer Guide Note When you use the Bounding Polygon Queries feature a representation of your device positions is not be encrypted using your AWS KMS managed key. However, the exact device position is still encrypted using your managed key. aws location \ create-tracker \ --tracker-name "ExampleTracker" \ --position-filtering "AccuracyBased" \ --event-bridge-enabled false \ --kms-key-enable-geospatial-queries true \ --kms-key-id "1234abcd-12ab-34cd-56ef-1234567890ab" Note Billing depends on your usage. You may incur fees for the use of other AWS services. For more information, see Amazon Location Service pricing. You can edit the Description, Position filtering, and EventBridge configuration after the tracker is created by choosing Edit
Guide Note When you use the Bounding Polygon Queries feature a representation of your device positions is not be encrypted using your AWS KMS managed key. However, the exact device position is still encrypted using your managed key. aws location \ create-tracker \ --tracker-name "ExampleTracker" \ --position-filtering "AccuracyBased" \ --event-bridge-enabled false \ --kms-key-enable-geospatial-queries true \ --kms-key-id "1234abcd-12ab-34cd-56ef-1234567890ab" Note Billing depends on your usage. You may incur fees for the use of other AWS services. For more information, see Amazon Location Service pricing. You can edit the Description, Position filtering, and EventBridge configuration after the tracker is created by choosing Edit tracker. Authenticating your requests Once you create a tracker resource and you're ready to begin evaluating device positions against geofences, choose how you would authenticate your requests: • To explore ways you can access the services, see Authentication. • If you want to publish device positions with unauthenticated requests,you may want to use Amazon Cognito. Example The following example shows using an Amazon Cognito identity pool for authorization, using AWS JavaScript SDK v3, and the Amazon Location the section called “Web”. Create an unauthenticated identity pool 791 Amazon Location Service Developer Guide import { LocationClient, BatchUpdateDevicePositionCommand } from "@aws-sdk/client- location"; import { withIdentityPoolId } from "@aws/amazon-location-utilities-auth-helper"; // Unauthenticated identity pool you created const identityPoolId = "us-east-1:1234abcd-5678-9012-abcd-sample-id"; // Create an authentication helper instance using credentials from Cognito const authHelper = await withIdentityPoolId(identityPoolId); const client = new LocationClient({ region: "us-east-1", // The region containing both the identity pool and tracker resource ...authHelper.getLocationClientConfig(), // Provides configuration required to make requests to Amazon Location }); const input = { TrackerName: "ExampleTracker", Updates: [ { DeviceId: "ExampleDevice-1", Position: [-123.4567, 45.6789], SampleTime: new Date("2020-10-02T19:09:07.327Z"), }, { DeviceId: "ExampleDevice-2", Position: [-123.123, 45.123], SampleTime: new Date("2020-10-02T19:10:32Z"), }, ], }; const command = new BatchUpdateDevicePositionCommand(input); // Send device position updates const response = await client.send(command); Create an unauthenticated identity pool 792 Amazon Location Service Developer Guide Update your tracker with a device position To track your devices, you can post device position updates to your tracker. You can later retrieve these device positions or the device position history from your tracker resource. Each position update must include the device ID, a timestamp , and a position. You may optionally include other metadata, including accuracy and up to 3 key-value pairs for your own use. If your tracker is linked to one or more geofence collections, updates will be evaluated against those geofences (following the filtering rules that you specified for the tracker). If a device breaches a geofenced area (by moving from inside the area to outside, or vice versa), you will receive events in EventBridge. These ENTER or EXIT events include the position update details, including the device ID, the timestamp, and any associated metadata. Note For more information about position filtering, see the section called “Create a tracker”. 
For more information about geofence events, see the section called “React to events with EventBridge”. Use either of these methods to send device updates: • Send MQTT updates to an AWS IoT Core resource and link it to your tracker resource. • Send location updates using the Amazon Location Trackers API, by using the AWS CLI, or the Amazon Location APIs. You can use the AWS SDKs to call the APIs from your iOS or Android application. API To send a position update using the Amazon Location APIs Use the BatchUpdateDevicePosition operation from the Amazon Location Trackers APIs. The following example uses an API request to post a device position update for ExampleDevice to a tracker ExampleTracker. POST /tracking/v0/trackers/ExampleTracker/positions Content-type: application/json { Update your tracker with a device position 793 Amazon Location Service Developer Guide "Updates": [ { "DeviceId": "1", "Position": [ -123.12245146162303, 49.27521118043802 ], "SampleTime": "2022-10-24T19:09:07.327Z", "PositionProperties": { "name" : "device1" }, "Accuracy": { "Horizontal": 10 } }, { "DeviceId": "2", "Position": [ -123.1230104928471, 49.27752402723152 ], "SampleTime": "2022-10-02T19:09:07.327Z" }, { "DeviceId": "3", "Position": [ -123.12325592118916, 49.27340530543111 ], "SampleTime": "2022-10-02T19:09:07.327Z" }, { "DeviceId": "4", "Position": [ -123.11958813096311, 49.27774641063121 ], "SampleTime": "2022-10-02T19:09:07.327Z" }, { "DeviceId": "5", "Position": [ -123.1277418058896, 49.2765989015285 ], "SampleTime": "2022-10-02T19:09:07.327Z" }, { Update your tracker with a device position 794 Amazon Location Service Developer Guide "DeviceId": "6", "Position": [ -123.11964267059481, 49.274188155916534 ], "SampleTime": "2022-10-02T19:09:07.327Z" } ] } AWS CLI To send a position update using AWS CLI commands Use the batch-update-device-position command. The following example uses an AWS CLI to post a device position update for ExampleDevice-1 and ExampleDevice-2 to a tracker ExampleTracker. aws location batch-update-device-position \ --tracker-name ExampleTracker \ --updates '[{"DeviceId":"ExampleDevice-1","Position": [-123.123,47.123],"SampleTime":"2021-11-30T21:47:25.149Z"}, {"DeviceId":"ExampleDevice-2","Position": [-123.123,47.123],"SampleTime":"2021-11-30T21:47:25.149Z","Accuracy": {"Horizontal":10.30},"PositionProperties":{"field1":"value1","field2":"value2"}}]' Get a device's location history from a tracker Your Amazon Location tracker resource maintains the location history of all your tracked devices for a period of 30 days. You can retrieve device location history, including all associated metadata, from your tracker resource. The following examples use the AWS CLI, or the
AWS CLI To send a position update using AWS CLI commands Use the batch-update-device-position command. The following example uses an AWS CLI to post a device position update for ExampleDevice-1 and ExampleDevice-2 to a tracker ExampleTracker. aws location batch-update-device-position \ --tracker-name ExampleTracker \ --updates '[{"DeviceId":"ExampleDevice-1","Position": [-123.123,47.123],"SampleTime":"2021-11-30T21:47:25.149Z"}, {"DeviceId":"ExampleDevice-2","Position": [-123.123,47.123],"SampleTime":"2021-11-30T21:47:25.149Z","Accuracy": {"Horizontal":10.30},"PositionProperties":{"field1":"value1","field2":"value2"}}]' Get a device's location history from a tracker Your Amazon Location tracker resource maintains the location history of all your tracked devices for a period of 30 days. You can retrieve device location history, including all associated metadata, from your tracker resource. The following examples use the AWS CLI, or the Amazon Location APIs. API To get the device location history from a tracker using the Amazon Location APIs Use the GetDevicePositionHistory operation from the Amazon Location Trackers APIs. The following example uses an API URI request to get the device location history of ExampleDevice from a tracker called ExampleTracker starting from 19:05:07 (inclusive) and ends at 19:20:07 (exclusive) on 2020–10–02. Get a device's location history 795 Amazon Location Service Developer Guide POST /tracking/v0/trackers/ExampleTracker/devices/ExampleDevice/list-positions Content-type: application/json { "StartTimeInclusive": "2020-10-02T19:05:07.327Z", "EndTimeExclusive": "2020-10-02T19:20:07.327Z" } AWS CLI To get the device location history from a tracker using AWS CLI commands Use the get-device-position-history command. The following example uses an AWS CLI to get the device location history of ExampleDevice from a tracker called ExampleTracker starting from 19:05:07 (inclusive) and ends at 19:20:07 (exclusive) on 2020–10–02. aws location \ get-device-position-history \ --device-id "ExampleDevice" \ --start-time-inclusive "2020-10-02T19:05:07.327Z" \ --end-time-exclusive "2020-10-02T19:20:07.327Z" \ --tracker-name "ExampleTracker" List your device positions You can view a list device positions for a tracker using the AWS CLI, or the Amazon Location APIs, with the ListDevicePositions API. When you call the ListDevicePositions API, a list of the latest positions for all devices associated with a given tracker is returned. By default this API returns 100 of the latest device positions per page of results for a given tracker. To only return devices within a specific region use the FilterGeometry parameter to create a Bounding Polygon Query. This way when you call ListDevicePositions, only devices inside the polygon will be returned. Note If you wish to encrypt your data using your own AWS KMS customer managed key, then the Bounding Polygon Queries feature will be disabled by default. This is because by using this feature, a representation of your device positions will not be encrypted using your List your device positions 796 Amazon Location Service Developer Guide AWS KMS managed key. The exact device position, however; is still encrypted using your managed key. You can choose to opt-in to the Bounding Polygon Queries feature. This is done by setting the KmsKeyEnableGeospatialQueries parameter to true when creating or updating a Tracker. API Use the ListDevicePositions operation from the Amazon Location Trackers APIs. 
The following example is an API request to get a list of device positions in a polygonal area, using the optional parameter FilterGeometry. The example returns 3 device locations present in the area defined by the Polygon array.

POST /tracking/v0/trackers/TrackerName/list-positions HTTP/1.1
Content-type: application/json

{
    "FilterGeometry": {
        "Polygon": [
            [
                [-123.12003339442259, 49.27425121147397],
                [-123.1176984148229, 49.277063620879744],
                [-123.12389509145294, 49.277954183760926],
                [-123.12755921328647, 49.27554025235713],
                [-123.12330236586217, 49.27211836076236],
                [-123.12003339442259, 49.27425121147397]
            ]
        ]
    },
    "MaxResults": 3,
    "NextToken": "1234-5678-9012"
}

The following is an example response for ListDevicePositions:

{
    "Entries": [
        {
            "DeviceId": "1",
            "SampleTime": "2022-10-24T19:09:07.327Z",
            "Position": [-123.12245146162303, 49.27521118043802],
            "Accuracy": {
                "Horizontal": 10
            },
            "PositionProperties": {
                "name": "device1"
            }
        },
        {
            "DeviceId": "3",
            "SampleTime": "2022-10-02T19:09:07.327Z",
            "Position": [-123.12325592118916, 49.27340530543111]
        },
        {
            "DeviceId": "2",
            "SampleTime": "2022-10-02T19:09:07.327Z",
            "Position": [-123.1230104928471, 49.27752402723152]
        }
    ],
    "NextToken": "1234-5678-9012"
}

CLI

Use the list-device-positions command.

The following example is an AWS CLI command to get a list of devices in a polygonal area, using the same polygon as the API request above.

aws location list-device-positions \
    --tracker-name "TrackerName" \
    --max-results 3 \
    --filter-geometry '{"Polygon": [[[-123.12003339442259, 49.27425121147397], [-123.1176984148229, 49.277063620879744], [-123.12389509145294, 49.277954183760926], [-123.12755921328647, 49.27554025235713], [-123.12330236586217, 49.27211836076236], [-123.12003339442259, 49.27425121147397]]]}'

How to work with Amazon Location Service trackers

This section provides instructions for working with Amazon Location trackers. Learn how to verify device positions, link trackers to geofence collections, and track locations using AWS IoT and MQTT. Additionally, learn how to manage your trackers effectively to support your location-based applications and ensure
accurate, real-time tracking. Topics • Verify device positions • Link a tracker to a geofence collection • Track using AWS IoT and MQTT with Amazon Location Service • Manage your Amazon Location Service tracker Verify device positions To check the integrity of a device position use the VerifyDevicePosition API. This API returns information about the integrity of the device's position, by evaluating properties such as the device's cell signal, Wi-Fi access point, Ipv4 address, and if a proxy is in use. Prerequisites Before being able to use the listed APIs for device verification, make sure you have the following prerequisite: How to 799 Amazon Location Service Developer Guide • You have created a tracker for the device or devices you want to check. For more information, see Get started with Amazon Location Service trackers. The following example shows a request for the Amazon Location VerifyDevicePosition API. API To verify device positions using the Amazon Location APIs Use the VerifyDevicePosition operation from the Amazon Location Tracking APIs. The following example shows an API request to evaluate the integrity of the position of a device. Replace these values with your own device IDs. POST /tracking/v0/trackers/TrackerName/positions/verify HTTP/1.1 Content-type: application/json { "DeviceState": { "Accuracy": { "Horizontal": number }, "CellSignals": { "LteCellDetails": [ { "CellId": number, "LocalId": { "Earfcn": number, "Pci": number }, "Mcc": number, "Mnc": number, "NetworkMeasurements": [ { "CellId": number, "Earfcn": number, "Pci": number, "Rsrp": number, "Rsrq": number } ], "NrCapable": boolean, "Rsrp": number, Verify positions 800 Amazon Location Service Developer Guide "Rsrq": number, "Tac": number, "TimingAdvance": number } ] }, "DeviceId": "ExampleDevice", "Ipv4Address": "string", "Position": [ number ], "SampleTime": "string", "WiFiAccessPoints": [ { "MacAddress": "string", "Rss": number } ] }, "DistanceUnit": "string" } Note The Integrity SDK provides enhanced features related to device verification, and it is available for use by request. To get access to the SDK, contact Sales Support. Link a tracker to a geofence collection Now that you have a geofence collection and a tracker, you can link them together so that location updates are automatically evaluated against all of your geofences. If you don’t want to evaluate all location updates, or alternatively, if you aren't storing some of your locations in a tracker resource, you can evaluate device positions against geofences on demand. When device positions are evaluated against geofences, events are generated. You can set an action to these events. For more information about actions that you can set for geofence events, see Reacting to Amazon Location Service events with Amazon EventBridge. An Amazon Location event includes the attributes of the device position update that generates it and some attributes of the geofence that is entered or exited. For more information about the data included in a geofence event, see the section called “Event examples”. Link to a geofence collection 801 Amazon Location Service Developer Guide The following examples link a tracker resource to a geofence collection using the console, the AWS CLI, or the Amazon Location APIs. Console To link a tracker resource to a geofence collection using the Amazon Location Service console 1. Open the Amazon Location Service console at https://console.aws.amazon.com/location/. 2. In the left navigation pane, choose Trackers. 3. 
Under Device trackers, select the name link of the target tracker. 4. Under Linked Geofence Collections, choose Link Geofence Collection. 5. In the Linked Geofence Collection window, select a geofence collection from the dropdown menu. 6. Choose Link. After you link the tracker resource, it will be assigned an Active status. API To link a tracker resource to a geofence collection using the Amazon Location APIs Use the AsssociateTrackerConsumer operation from the Amazon Location Trackers APIs. The following example uses an API request that associates ExampleTracker with a geofence collection using its Amazon Resource Name (ARN). POST /tracking/v0/trackers/ExampleTracker/consumers Content-type: application/json { "ConsumerArn": "arn:aws:geo:us-west-2:123456789012:geofence- collection/ExampleGeofenceCollection" } AWS CLI To link a tracker resource to a geofence collection using AWS CLI commands Use the associate-tracker-consumer command. Link to a geofence collection 802 Amazon Location Service Developer Guide The following example uses an AWS CLI to create a geofence collection called ExampleGeofenceCollection. aws location \ associate-tracker-consumer \ --consumer-arn "arn:aws:geo:us-west-2:123456789012:geofence- collection/ExampleGeofenceCollection" \ --tracker-name "ExampleTracker" Track using AWS IoT and MQTT with Amazon Location Service MQTT is a lightweight and widely adopted messaging protocol designed for constrained devices. AWS IoT Core supports device connections that use the MQTT protocol and MQTT over WebSocket Secure (WSS) protocol. AWS IoT Core connects devices to AWS and enables you to send and receive messages between them. The AWS IoT Core rules engine stores queries about your devices' message topics and enables you to define actions for sending messages to other AWS services, such as Amazon Location Service. Devices that are aware of their location as coordinates can have their locations forwarded to Amazon Location through the rules engine. Note Devices may know their
widely adopted messaging protocol designed for constrained devices. AWS IoT Core supports device connections that use the MQTT protocol and MQTT over WebSocket Secure (WSS) protocol. AWS IoT Core connects devices to AWS and enables you to send and receive messages between them. The AWS IoT Core rules engine stores queries about your devices' message topics and enables you to define actions for sending messages to other AWS services, such as Amazon Location Service. Devices that are aware of their location as coordinates can have their locations forwarded to Amazon Location through the rules engine. Note Devices may know their own position, for example via built-in GPS. AWS IoT also has support for third party device location tracking. For more information, see AWS IoT Core Device Location in the AWS IoT Core Developer Guide. The following walkthrough describes tracking using AWS IoT Core rules. You can also send the device information to your own AWS Lambda function, if you need to process it before sending to Amazon Location. For more details about using Lambda to process your device locations, see Use AWS Lambda with MQTT. Topics • Prerequisite • Create an AWS IoT Core rule • Test your AWS IoT Core rule in the console Track using AWS IoT and MQTT 803 Amazon Location Service • Use AWS Lambda with MQTT Prerequisite Developer Guide Before you can begin tracking, you must complete the following prerequisites: • Create a tracker resource that you will send the device location data to. • Create an IAM role for granting AWS IoT Core access to your tracker. When following those steps, use the following policy to give access to your tracker: { "Version": "2012-10-17", "Statement": [ { "Sid": "WriteDevicePosition", "Effect": "Allow", "Action": "geo:BatchUpdateDevicePosition", "Resource": "arn:aws:geo:*:*:tracker/*" } ] } Create an AWS IoT Core rule Next, create an AWS IoT Core rule to forward your devices' positional telemetry to Amazon Location Service. For more information about creating rules, see the following topics in the AWS IoT Core Developer Guide: • Creating an AWS IoT rule for information about creating a new rule. • Location action for information specific to creating a rule for publishing to Amazon Location Test your AWS IoT Core rule in the console If no devices are currently publishing telemetry that includes location, you can test your rule using the AWS IoT Core console. The console has a test client where you can publish a sample message to verify the results of the solution. 1. Sign in to the AWS IoT Core console at https://console.aws.amazon.com/iot/. Track using AWS IoT and MQTT 804 Amazon Location Service Developer Guide 2. In the left navigation, expand Test, and choose MQTT test client. 3. Under Publish to a topic, set the Topic name to iot/topic (or the name of the topic that you set up in your AWS IoT Core rule, if different), and provide the following for the Message payload. Replace the timestamp 1604940328 with a valid timestamp within the last 30 days (any timestamps older than 30 days are ignored by Amazon Location Service trackers). { "payload": { "deviceid": "thing123", "timestamp": 1604940328, "location": { "lat": 49.2819, "long": -123.1187 }, "accuracy": { "Horizontal": 20.5 }, "positionProperties": { "field1": "value1", "field2": "value2" } } } 4. Choose Publish to topic to send the test message. 5. To validate that the message was received by Amazon Location Service, use the following AWS CLI command. 
If you modified it during setup, replace the tracker name with the one that you used. aws location batch-get-device-position --tracker-name MyTracker --device-ids thing123 Use AWS Lambda with MQTT While using AWS Lambda is no longer required when sending device location data to Amazon Location for tracking, you may still want to use Lambda in some cases, for example if you want to process your device location data yourself before sending it on to Amazon Location; a brief illustration of this kind of pre-processing follows the topic list below. The following topics describe how to use Lambda to process messages before sending them to your tracker. For more information about this pattern, see the reference architecture. Topics • Prerequisite • Create a Lambda function • Create an AWS IoT Core rule • Test your AWS IoT Core rule in the console
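The following is a minimal sketch of the kind of pre-processing you might perform in Lambda before forwarding an update to your tracker. It is illustrative only: the accuracy threshold and the should_forward helper are assumptions, not part of this walkthrough, and the actual handler used in this section appears under "Create a Lambda function" below. The field names follow the sample payload shown earlier.

# Illustrative pre-filtering of device updates before they are written to a tracker.
# The 50-meter threshold and this helper's name are arbitrary assumptions.
MAX_HORIZONTAL_ACCURACY_M = 50.0

def should_forward(event: dict) -> bool:
    """Return True if the update is precise enough to send to Amazon Location."""
    horizontal = event.get("payload", {}).get("accuracy", {}).get("Horizontal")
    # Keep updates that report no accuracy at all, or whose accuracy is within the threshold.
    return horizontal is None or horizontal <= MAX_HORIZONTAL_ACCURACY_M

if __name__ == "__main__":
    sample = {"payload": {"deviceid": "thing123", "accuracy": {"Horizontal": 20.5}}}
    print(should_forward(sample))  # True for the sample payload used in this guide

A check like this would run at the top of your handler, so that updates failing the filter are dropped before the BatchUpdateDevicePosition call and do not incur Position Written charges.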
805 Amazon Location Service Prerequisite Developer Guide Before you can begin tracking, you must create a tracker resource. To create a tracker resource, you can use the Amazon Location console, the AWS CLI, or the Amazon Location APIs. The following example uses the Amazon Location Service console to create the tracker resource: 1. Open the Amazon Location Service console at https://console.aws.amazon.com/location/. 2. In the left navigation pane, choose Trackers. 3. Choose Create tracker. 4. Fill out the following boxes: • Name – Enter a unique name that has a maximum of 100 characters. Valid entries include alphanumeric characters, hyphens, and underscores. For example, MyTracker. • Description – Enter an optional description. For example, Tracker for storing AWS IoT Core device positions. • Position filtering – Select the filtering that you want to use for position updates. For example, Accuracy-based filtering. 5. Choose Create tracker. Create a Lambda function To create a connection between AWS IoT Core and Amazon Location Service, you need an AWS Lambda function to process messages forwarded by AWS IoT Core. This function will extract any positional data, format it for Amazon Location Service, and submit it through the Amazon Location Tracker API. You can create this function through the AWS Lambda console, or you can use the AWS Command Line Interface (AWS CLI) or the AWS Lambda APIs. To create a Lambda function that publishes position updates to Amazon Location using the console: 1. Open the AWS Lambda console at https://console.aws.amazon.com/lambda/. 2. From the left navigation, choose Functions. 3. Choose Create Function, and make sure that Author from scratch is selected. 4. Fill out the following boxes: Track using AWS IoT and MQTT 806 Amazon Location Service Developer Guide • Function name – Enter a unique name for your function. Valid entries include alphanumeric characters, hyphens, and underscores with no spaces. For example, MyLambda. • Runtime – Choose Python 3.8. 5. Choose Create function. 6. Choose the Code tab to open the editor. 7. Overwrite the placeholder code in lambda_function.py with the following, replacing the value assigned to TRACKER_NAME with the name of the tracker that you created as a prerequisite. from datetime import datetime import json import os import boto3 # Update this to match the name of your Tracker resource TRACKER_NAME = "MyTracker" """ This Lambda function receives a payload from AWS IoT Core and publishes device updates to Amazon Location Service via the BatchUpdateDevicePosition API. Parameter 'event' is the payload delivered from AWS IoT Core. In this sample, we assume that the payload has a single top-level key 'payload' and a nested key 'location' with keys 'lat' and 'long'. We also assume that the name of the device is nested in the payload as 'deviceid'. Finally, the timestamp of the payload is present as 'timestamp'. For example: >>> event { 'payload': { 'deviceid': 'thing123', 'timestamp': 1604940328, 'location': { 'lat': 49.2819, 'long': -123.1187 }, 'accuracy': {'Horizontal': 20.5 }, 'positionProperties': {'field1':'value1','field2':'value2'} } } Track using AWS IoT and MQTT 807 Amazon Location Service Developer Guide If your data doesn't match this schema, you can either use the AWS IoT Core rules engine to format the data before delivering it to this Lambda function, or you can modify the code below to match it. 
""" def lambda_handler(event, context): update = { "DeviceId": event["payload"]["deviceid"], "SampleTime": datetime.fromtimestamp(event["payload"] ["timestamp"]).strftime("%Y-%m-%dT%H:%M:%SZ"), "Position": [ event["payload"]["location"]["long"], event["payload"]["location"]["lat"] ] } if "accuracy" in event["payload"]: update["Accuracy"] = event["payload"]['accuracy'] if "positionProperties" in event["payload"]: update["PositionProperties"] = event["payload"]['positionProperties'] client = boto3.client("location") response = client.batch_update_device_position(TrackerName=TRACKER_NAME, Updates=[update]) return { "statusCode": 200, "body": json.dumps(response) } 8. Choose Deploy to save the updated function. 9. Choose the Configuration tab. 10. In the Permissions section, choose the hyperlinked Role name to grant Amazon Location Service permissions to your Lambda function. 11. From your role's Summary page, choose Add permissions, and then from the dropdown list, select Create inline policy. 12. Choose the JSON tab, and overwrite the policy with the following document. This allows your Lambda function to update device positions managed by all tracker resources across all Regions. { Track using AWS IoT and MQTT 808 Amazon Location Service Developer Guide "Version": "2012-10-17", "Statement": [ { "Sid": "WriteDevicePosition", "Effect": "Allow", "Action": "geo:BatchUpdateDevicePosition", "Resource": "arn:aws:geo:*:*:tracker/*" } ] } 13. Choose Review policy. 14. Enter a policy name. For example, AmazonLocationTrackerWriteOnly. 15. Choose Create policy. You can modify this function code, as necessary, to adapt to your own device message schema. Create an AWS IoT Core rule Next, create an AWS IoT Core rule to forward your devices' positional telemetry to the AWS Lambda function for transformation and publication to Amazon Location Service. The example rule provided assumes that any necessary transformation of device payloads is handled by your Lambda function. You can create this rule through the AWS IoT Core console, the AWS Command Line Interface (AWS CLI), or the AWS IoT Core
Enter a policy name. For example, AmazonLocationTrackerWriteOnly. 15. Choose Create policy. You can modify this function code, as necessary, to adapt to your own device message schema. Create an AWS IoT Core rule Next, create an AWS IoT Core rule to forward your devices' positional telemetry to the AWS Lambda function for transformation and publication to Amazon Location Service. The example rule provided assumes that any necessary transformation of device payloads is handled by your Lambda function. You can create this rule through the AWS IoT Core console, the AWS Command Line Interface (AWS CLI), or the AWS IoT Core APIs. Note While the AWS IoT console handles the permission necessary to allow AWS IoT Core to invoke your Lambda function, if you are creating your rule from the AWS CLI or SDK, you must configure a policy to grant permission to AWS IoT. To create an AWS IoT Core using the console 1. 2. Sign in to the AWS IoT Core console at https://console.aws.amazon.com/iot/. In the left navigation, expand Act, and choose Rules. 3. Choose Create a rule to start the new rule wizard. 4. Enter a name and description for your rule. Track using AWS IoT and MQTT 809 Amazon Location Service Developer Guide 5. For the Rule query statement, update the FROM attribute to refer to a topic where at least one device is publishing telemetry that includes location. If you are testing the solution, no modification is needed. SELECT * FROM 'iot/topic' 6. Under Set one or more actions , choose Add action. 7. Select Send a message to a lambda function. 8. Choose Configure action. 9. Find and select your Lambda function from the list. 10. Choose Add action. 11. Choose Create rule. Test your AWS IoT Core rule in the console If no devices are currently publishing telemetry that includes location, you can test your rule and this solution using the AWS IoT Core console. The console has a test client where you can publish a sample message to verify the results of the solution. 1. 2. Sign in to the AWS IoT Core console at https://console.aws.amazon.com/iot/. In the left navigation, expand Test, and choose MQTT test client. 3. Under Publish to a topic, set the Topic name to iot/topic (or the name of the topic that you set up in your AWS IoT Core rule, if different), and provide the following for the Message payload. Replace the timestamp 1604940328 with a valid timestamp within the last 30 days (any timestamps older than 30 days are ignored). { "payload": { "deviceid": "thing123", "timestamp": 1604940328, "location": { "lat": 49.2819, "long": -123.1187 }, "accuracy": { "Horizontal": 20.5 }, "positionProperties": { "field1": "value1", "field2": "value2" } } } 4. Choose Publish to topic to send the test message. Track using AWS IoT and MQTT 810 Amazon Location Service Developer Guide 5. To validate that the message was received by Amazon Location Service, use the following AWS CLI command. If you modified them during setup, replace the tracker name and device id with the ones that you used. aws location batch-get-device-position --tracker-name MyTracker --device-ids thing123 Manage your Amazon Location Service tracker You can manage your trackers using the Amazon Location console, the AWS CLI, or the Amazon Location APIs. List your trackers You can view your trackers list using the Amazon Location console, the AWS CLI, or the Amazon Location APIs: Console To view a list of existing trackers using the Amazon Location console 1. Open the Amazon Location console at https://console.aws.amazon.com/location/. 2. 
Choose Trackers from the left navigation. 3. View a list of your tracker resources under My trackers. API Use the ListTrackers operation from the Amazon Location Trackers APIs. The following example is an API request to get a list of trackers in your AWS account. POST /tracking/v0/list-trackers The following is an example response for ListTrackers: { "Entries": [ { "CreateTime": 2020-10-02T19:09:07.327Z, "Description": "string", "TrackerName": "ExampleTracker", "UpdateTime": 2020-10-02T19:10:07.327Z } ], "NextToken": "1234-5678-9012" } CLI Use the list-trackers command. The following example is an AWS CLI command to get a list of trackers in your AWS account. aws location list-trackers Disconnecting a tracker from a geofence collection You can disconnect
a tracker from a geofence collection using the Amazon Location console, the AWS CLI, or the Amazon Location APIs: Console To disassociate a tracker from an associated geofence collection using the Amazon Location console 1. Open the Amazon Location console at https://console.aws.amazon.com/location/. 2. Choose Trackers from the left navigation pane. 3. Under My trackers, select the name link of the target tracker. 4. Under Linked Geofence Collections, select a geofence collection with a Linked status. 5. Choose Unlink. API Use the DisassociateTrackerConsumer operation from the Amazon Location Trackers APIs. The following example is an API request to disassociate a tracker from an associated geofence collection. Manage trackers 812 Amazon Location Service Developer Guide DELETE /tracking/v0/trackers/ExampleTracker/consumers/arn:aws:geo:us- west-2:123456789012:geofence-collection/ExampleCollection The following is an example response for DisassociateTrackerConsumer: HTTP/1.1 200 CLI Use the disassociate-tracker-consumer command. The following example is an AWS CLI command to disassociate a tracker from an associated geofence collection. aws location disassociate-tracker-consumer \ --consumer-arn "arn:aws:geo:us-west-2:123456789012:geofence-collection/ ExampleCollection" \ --tracker-name "ExampleTracker" Get tracker details You can get details about any tracker in your AWS account by using the Amazon Location console, the AWS CLI, or the Amazon Location APIs. Console To view tracker details by using the Amazon Location console 1. Open the Amazon Location console at https://console.aws.amazon.com/location/. 2. Choose Trackers from the left navigation. 3. Under My trackers, select the name link of the target tracker. 4. View the tracker details under Information. API Use the DescribeTracker operation from the Amazon Location Tracker APIs. The following example is an API request to get the tracker details for ExampleTracker. Manage trackers 813 Amazon Location Service Developer Guide GET /tracking/v0/trackers/ExampleTracker The following is an example response for DescribeTracker: { "CreateTime": 2020-10-02T19:09:07.327Z, "Description": "string", "EventBridgeEnabled": false, "KmsKeyId": "1234abcd-12ab-34cd-56ef-1234567890ab", "PositionFiltering": "TimeBased", "Tags": { "Tag1" : "Value1" }, "TrackerArn": "arn:aws:geo:us-west-2:123456789012:tracker/ExampleTracker", "TrackerName": "ExampleTracker", "UpdateTime": 2020-10-02T19:10:07.327Z } CLI Use the describe-tracker command. The following example is an AWS CLI command to get tracker details for ExampleTracker. aws location describe-tracker \ --tracker-name "ExampleTracker" Delete a tracker You can delete a tracker from your AWS account using the Amazon Location console, the AWS CLI, or the Amazon Location APIs: Console To delete an existing map resource using the Amazon Location console Manage trackers 814 Amazon Location Service Developer Guide Warning This operation deletes the resource permanently. If the tracker resource is in use, you may encounter an error. Make sure that the target resource isn't a dependency for your applications. 1. Open the Amazon Location console at https://console.aws.amazon.com/location/. 2. Choose Trackers from the left navigation pane. 3. Under My trackers, select the target tracker. 4. Choose Delete tracker. API Use the DeleteTracker operation from the Amazon Location Tracker APIs. The following example is an API request to delete the tracker ExampleTracker. 
DELETE /tracking/v0/trackers/ExampleTracker The following is an example response for DeleteTracker: HTTP/1.1 200 CLI Use the delete-tracker command. The following example is an AWS CLI command to delete the tracker ExampleTracker. aws location delete-tracker \ --tracker-name "ExampleTracker" Manage costs and usage As you continue learning about Amazon Location trackers, it's important to understand how to manage service capacity, ensure you follow usage limits, and get the best results through quota Manage costs and usage 815 Amazon Location Service Developer Guide and API optimizations. By applying best practices for performance and accuracy, you can tailor your application to handle place-related queries efficiently and maximize your API requests. Topics • Trackers pricing • Trackers quota and usage Trackers pricing For pricing information for tracking and geofencing APIs, see the Amazon Location Service pricing page. Position Written You can use BatchUpdateDevicePosition to upload position update data for one or more devices to a tracker resource (up to ten devices per batch). Price is based on the number of device positions in your API request. Unit price per device position update is based on the total monthly usage volume. See the Amazon Location Service pricing page for details on unit price and volume tiers. You can optimize your Position Written cost by configuring the device position update frequency (also known as ping rate) from your tracking devices, and optionally use a local filter to only upload relevant device position updates to Amazon Location Service. Position Read You can use BatchGetDevicePosition to lists the latest device positions for requested devices, up to 100 devices per request. You can also use GetDevicePosition to retrieve a device's most recent position according to its sample time. Price is based on the number of API requests. Position Delete You can use BatchDeleteDevicePositionHistory to delete the position history of one or more devices from a tracker resource, up to 100 devices per request. Price is based on the number of devices in your API request. Position Integrity Evaluation Pricing 816 Amazon Location Service Developer Guide You can use VerifyDevicePosition to verify the integrity of the device's position by determining if it
positions for requested devices, up to 100 devices per request. You can also use GetDevicePosition to retrieve a device's most recent position according to its sample time. Price is based on the number of API requests. Position Delete You can use BatchDeleteDevicePositionHistory to delete the position history of one or more devices from a tracker resource, up to 100 devices per request. Price is based on the number of devices in your API request. Position Integrity Evaluation Pricing 816 Amazon Location Service Developer Guide You can use VerifyDevicePosition to verify the integrity of the device's position by determining if it was reported behind a proxy, and by comparing it to an inferred position estimated based on the device's state. Price is based on the number of API requests. Trackers quota and usage This topic provides a summary of rate limits and quotas for Amazon Location Service trackers. Note If you require a higher quota, you can use the Service Quotas console to request quota increases for adjustable quotas. When requesting a quota increase, select the Region you require the quota increase in, since most quotas are specific to the AWS Region. You can request up to twice the default limit for each API. For requests that exceed twice the default limit, your request will submit a support ticket. You can also connect to your premium support team. There are no direct charges for quota increase requests, but higher usage levels may lead to increased service costs based on the additional resources consumed. See the section called “Manage quotas” for more information. Service Quotas are maximum number of resources you can have per AWS account and AWS Region. Amazon Location Service denies additional requests that exceed the service quota. Resources API name Default Max adjustable limit Tracker resources per account 500 1000 Tracker consumers per tracker 5 If you need more than this, request quota increases or contact the support team. Max adjustable limit is not applicable. Quotas and usage 817 Amazon Location Service Developer Guide API name Default Max adjustable limit Contact the support team. CRUD API Note If you need a higher limit for any of these APIs, request quota increases or contact the support team. API name Default Max adjustable limit AssociateTrackerConsumer CreateTracker DeleteTracker DescribeTracker 10 10 10 10 DisassociateTrackerConsumer 10 ListTrackerConsumers ListTrackers UpdateTracker 10 10 10 Data API Note 20 20 20 20 20 20 20 20 If you need a higher limit for any of these APIs, request quota increases or contact the support team. Quotas and usage 818 Amazon Location Service Developer Guide API name Default Max adjustable limit BatchGetDevicePosition BatchUpdateDevicePosition GetDevicePosition GetDevicePositionHistory BatchDeleteDevicePositionHi story ListDevicePositions 50 50 50 50 50 50 Other usage limits 100 100 100 100 100 100 Quotas and usage 819 Amazon Location Service Developer Guide Developer tools for using Amazon Location Service AWS provides Software Development Kits (SDKs) for multiple programming languages, allowing you to easily integrate the Amazon Location Service into your applications. This page outlines the available SDKs, their installation procedures, and code examples to help you get started with the Amazon Location Service in your preferred development environment. There are several tools that will help you to use Amazon Location Service. 
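As a practical companion to the quota guidance above, you can inspect and request Amazon Location quotas from the AWS CLI as well as the Service Quotas console. The commands below are a sketch: the service code geo and the placeholder quota code are assumptions that you should confirm in your own account (for example with aws service-quotas list-services) before relying on them.

# List the quotas that apply to Amazon Location in a given Region (service code assumed to be "geo").
aws service-quotas list-service-quotas --service-code geo --region us-east-1

# Request an increase for an adjustable quota. Replace L-XXXXXXXX with the quota code
# returned by the command above, and choose a value within the documented maximum.
aws service-quotas request-service-quota-increase \
    --service-code geo \
    --quota-code L-XXXXXXXX \
    --desired-value 20 \
    --region us-east-1

Because most quotas are Regional, run these commands in each Region where your application operates.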
Topics • SDKs and frameworks for Amazon Location Service • Amazon Location Service API and CLI • Examples and Learning Resources SDKs and frameworks for Amazon Location Service AWS provides Software Development Kits (SDKs) for multiple programming languages, allowing you to easily integrate the Amazon Location Service into your applications. This page outlines the available SDKs, their installation procedures, and code examples to help you get started with the Amazon Location Service in your preferred development environment. There are several tools that will help you to use Amazon Location Service. • AWS SDKs – The AWS software development kits (SDKs) are available in many popular programming languages, providing an API, code examples, and documentation that makes it easier to build applications in your preferred language. The AWS SDKs include the core Amazon Location APIs and functionality, including access to Maps, Places, Routes, Geofencing, and Trackers. To learn more about the SDKs available to use with Amazon Location Service for different applications and languages, see the section called “SDKs by language”. • MapLibre – Amazon Location Service recommends rendering maps using the MapLibre rendering engine. MapLibre is an engine for displaying maps in web or mobile applications. MapLibre also has a plugin model, and supports user interface for searching and routes in some languages and platforms. To learn more about using MapLibre and the functionality it provides, see the section called “Use MapLibre tools”. • Amazon Location SDKs – The Amazon Location SDKs are a set of open source libraries that make it easier to develop applications with Amazon Location Service. The libraries provide SDKs and frameworks
called “SDKs by language”. • MapLibre – Amazon Location Service recommends rendering maps using the MapLibre rendering engine. MapLibre is an engine for displaying maps in web or mobile applications. MapLibre also has a plugin model, and supports user interface for searching and routes in some languages and platforms. To learn more about using MapLibre and the functionality it provides, see the section called “Use MapLibre tools”. • Amazon Location SDKs – The Amazon Location SDKs are a set of open source libraries that make it easier to develop applications with Amazon Location Service. The libraries provide SDKs and frameworks 820 Amazon Location Service Developer Guide functionality to support authentication for mobile and web applications, location tracking for mobile applications, conversion between Amazon Location data types and GeoJSON, as well as a hosted package of the Amazon Location client for the AWS SDK v3. To learn more about the Amazon Location SDKs, see the section called “SDKs by language”. • Amazon Location Migration SDK – The Amazon Location Migration SDK provides a bridge that allows you to migrate existing applications from Google Maps to Amazon Location. The Migration SDK provides an option for your application built using the Google Maps SDK for JavaScript to use Amazon Location Service without needing to rewrite any of the application or business logic if Amazon Location supports the capabilities used. The Migration SDK redirects all API calls to the Amazon Location instead of Google Map. To get started, see the Amazon Location Migration SDK on GitHub. Developer tutorials Use this section to learn how to use various aspects of the Amazon Location Service SDK. Topics • How to use authentication helpers • Use Amazon Location MapLibre Geocoder GL plugin • How to use Tracking SDKs • Use MapLibre tools and related libraries with Amazon Location How to use authentication helpers This section provides additional information about authentication helpers. Web The Amazon Location JavaScript authentication utilities assistw in authenticating when making Amazon Location Service API calls from JavaScript applications. These utilities specifically support authentication using API keys or Amazon Cognito. Installation • Install this library using NPM: npm install @aws/amazon-location-utilities-auth-helper Developer tutorials 821 Amazon Location Service Developer Guide • To use it directly in the browser, include the following in your HTML file: <script src="https://cdn.jsdelivr.net/npm/@aws/amazon-location-utilities-auth- helper@1"></script> Usage To use the authentication helpers, import the library and call the necessary utility functions. This library supports authenticating requests from the Amazon Location Service SDKs, including the Maps, Places, and Routes standalone SDKs, as well as rendering maps with MapLibre GL JS. Usage with Modules This example demonstrates the use of the standalone Places SDK to make a request authenticated with API keys: npm install @aws-sdk/geo-places-client import { GeoPlacesClient, GeocodeCommand } from "@aws-sdk/geo-places-client"; import { withAPIKey } from "@aws/amazon-location-utilities-auth-helper"; const authHelper = withAPIKey("<API Key>", "<Region>"); const client = new GeoPlacesClient(authHelper.getClientConfig()); const input = { ... 
}; const command = new GeocodeCommand(input); const response = await client.send(command); This example demonstrates the use of the standalone Routes SDK to make a request authenticated with API keys: npm install @aws-sdk/geo-routes-client import { GeoRoutesClient, CalculateRoutesCommand } from "@aws-sdk/geo-routes-client"; import { withAPIKey } from "@aws/amazon-location-utilities-auth-helper"; const authHelper = withAPIKey("<API Key>", "<Region>"); const client = new GeoRoutesClient(authHelper.getClientConfig()); const input = { ... }; const command = new CalculateRoutesCommand(input); Developer tutorials 822 Amazon Location Service Developer Guide const response = await client.send(command); This example uses the Location SDK with API key authentication: npm install @aws-sdk/client-location import { LocationClient, ListGeofencesCommand } from "@aws-sdk/client-location"; import { withAPIKey } from "@aws/amazon-location-utilities-auth-helper"; const authHelper = withAPIKey("<API Key>", "<Region>"); const client = new LocationClient(authHelper.getClientConfig()); const input = { ... }; const command = new ListGeofencesCommand(input); const response = await client.send(command); Usage with Browser Utility functions are accessible under the amazonLocationAuthHelper global object when used directly in a browser environment. This example demonstrates a request with the Amazon Location Client, authenticated using API keys: <script src="https://cdn.jsdelivr.net/npm/@aws/amazon-location-client@1"></script> const authHelper = amazonLocationClient.withAPIKey("<API Key>", "<Region>"); const client = new amazonLocationClient.GeoRoutesClient(authHelper.getClientConfig()); const input = { ... }; const command = new amazonLocationClient.routes.CalculateRoutesCommand(input); const response = await client.send(command); This example demonstrates rendering a map with MapLibre GL JS, authenticated with an API key: <script src="https://cdn.jsdelivr.net/npm/maplibre-gl@4"></script> const apiKey = "<API Key>"; const region = "<Region>"; const styleName = "Standard"; const map = new maplibregl.Map({ Developer tutorials 823 Amazon Location Service container: "map", center: [-123.115898, 49.295868], zoom: 10, Developer Guide style: `https://maps.geo.${region}.amazonaws.com/v2/styles/${styleName}/descriptor? key=${apiKey}`, }); This example demonstrates rendering a map with MapLibre GL JS using Amazon Cognito: <script src="https://cdn.jsdelivr.net/npm/maplibre-gl@4"></script> <script src="https://cdn.jsdelivr.net/npm/@aws/amazon-location-utilities-auth- helper@1"></script> const identityPoolId = "<Identity Pool ID>"; const authHelper = await amazonLocationAuthHelper.withIdentityPoolId(identityPoolId); const map = new maplibregl.Map({ container: "map", center: [-123.115898, 49.295868], zoom: 10, style: `https://maps.geo.${region}.amazonaws.com/v2/styles/${styleName}/descriptor`, ...authHelper.getMapAuthenticationOptions(), }); Alternative Usage with Authenticated Identities You can modify the withIdentityPoolId function to include custom parameters for authenticated identities: const userPoolId = "<User Pool ID>"; const authHelper = await amazonLocationAuthHelper.withIdentityPoolId(identityPoolId,
= "Standard"; const map = new maplibregl.Map({ Developer tutorials 823 Amazon Location Service container: "map", center: [-123.115898, 49.295868], zoom: 10, Developer Guide style: `https://maps.geo.${region}.amazonaws.com/v2/styles/${styleName}/descriptor? key=${apiKey}`, }); This example demonstrates rendering a map with MapLibre GL JS using Amazon Cognito: <script src="https://cdn.jsdelivr.net/npm/maplibre-gl@4"></script> <script src="https://cdn.jsdelivr.net/npm/@aws/amazon-location-utilities-auth- helper@1"></script> const identityPoolId = "<Identity Pool ID>"; const authHelper = await amazonLocationAuthHelper.withIdentityPoolId(identityPoolId); const map = new maplibregl.Map({ container: "map", center: [-123.115898, 49.295868], zoom: 10, style: `https://maps.geo.${region}.amazonaws.com/v2/styles/${styleName}/descriptor`, ...authHelper.getMapAuthenticationOptions(), }); Alternative Usage with Authenticated Identities You can modify the withIdentityPoolId function to include custom parameters for authenticated identities: const userPoolId = "<User Pool ID>"; const authHelper = await amazonLocationAuthHelper.withIdentityPoolId(identityPoolId, { logins: { [`cognito-idp.${region}.amazonaws.com/${userPoolId}`]: "cognito-id-token" } }); iOS The Amazon Location Service Mobile Authentication SDK for iOS helps authenticate requests to Amazon Location Service APIs from iOS applications. It specifically supports authentication via API keys or Amazon Cognito. Developer tutorials 824 Amazon Location Service Installation Developer Guide • Open Xcode and go to File > Add Package Dependencies. • Type the package URL (https://github.com/aws-geospatial/amazon-location-mobile-auth-sdk- ios/) into the search bar and press Enter. • Select the "amazon-location-mobile-auth-sdk-ios" package and click Add Package. • Choose the "AmazonLocationiOSAuthSDK" package product and click Add Package. Usage After installing the library, use the AuthHelper class to configure client settings for either API keys or Amazon Cognito. 
API Keys Here is an example using the standalone Places SDK with API key authentication: import AmazonLocationiOSAuthSDK import AWSGeoPlaces func geoPlacesExample() { let apiKey = "<API key>" let region = "<Region>" let authHelper = try await AuthHelper.withApiKey(apiKey: apiKey, region: region) let client: GeoPlacesClient = GeoPlacesClient(config: authHelper.getGeoPlacesClientConfig()) let input = AWSGeoPlaces.SearchTextInput( biasPosition: [-97.7457518, 30.268193], queryText: "tacos" ) let output = try await client.searchText(input: input) } Here is an example using the standalone Routes SDK with API key authentication: import AmazonLocationiOSAuthSDK import AWSGeoRoutes Developer tutorials 825 Amazon Location Service Developer Guide func geoRoutesExample() { let apiKey = "<API key>" let region = "<Region>" let authHelper = try await AuthHelper.withApiKey(apiKey: apiKey, region: region) let client: GeoRoutesClient = GeoRoutesClient(config: authHelper.getGeoRoutesClientConfig()) let input = AWSGeoRoutes.CalculateRoutesInput( destination: [-123.1651031, 49.2577281], origin: [-97.7457518, 30.268193] ) let output = try await client.calculateRoutes(input: input) } Here is an example using the Location SDK with API key authentication: import AmazonLocationiOSAuthSDK import AWSLocation func locationExample() { let apiKey = "<API key>" let region = "<Region>" let authHelper = try await AuthHelper.withApiKey(apiKey: apiKey, region: region) let client: LocationClient = LocationClient(config: authHelper.getLocationClientConfig()) let input = AWSLocation.ListGeofencesInput( collectionName: "<Collection name>" ) let output = try await client.listGeofences(input: input) } Here is an example using the standalone Places SDK with Amazon Cognito: import AmazonLocationiOSAuthSDK import AWSGeoPlaces func geoPlacesExample() { Developer tutorials 826 Amazon Location Service Developer Guide let identityPoolId = "<Identity Pool ID>" let authHelper = try await AuthHelper.withIdentityPoolId(identityPoolId: identityPoolId) let client: GeoPlacesClient = GeoPlacesClient(config: authHelper.getGeoPlacesClientConfig()) let input = AWSGeoPlaces.SearchTextInput( biasPosition: [-97.7457518, 30.268193], queryText: "tacos" ) let output = try await client.searchText(input: input) } Here is an example using the standalone Routes SDK with Amazon Cognito: import AmazonLocationiOSAuthSDK import AWSGeoRoutes func geoRoutesExample() { let identityPoolId = "<Identity Pool ID>" let authHelper = try await AuthHelper.withIdentityPoolId(identityPoolId: identityPoolId) let client: GeoRoutesClient = GeoRoutesClient(config: authHelper.getGeoRoutesClientConfig()) let input = AWSGeoRoutes.CalculateRoutesInput( destination: [-123.1651031, 49.2577281], origin: [-97.7457518, 30.268193] ) let output = try await client.calculateRoutes(input: input) } Here is an example using the Location SDK with Amazon Cognito: import AmazonLocationiOSAuthSDK import AWSLocation func locationExample() { let identityPoolId = "<Identity Pool ID>" Developer tutorials 827 Amazon Location Service Developer Guide let authHelper = try await AuthHelper.withIdentityPoolId(identityPoolId: identityPoolId) let client: LocationClient = LocationClient(config: authHelper.getLocationClientConfig()) let input = AWSLocation.ListGeofencesInput( collectionName: "<Collection name>" ) let output = try await client.listGeofences(input: input) } Android The Amazon Location Service Mobile Authentication SDK for Android helps you authenticate requests to Amazon Location Service APIs from 
Android applications, specifically supporting authentication using Amazon Cognito. Installation • This authentication SDK works with the overall AWS Kotlin SDK. Both SDKs are published to Maven Central. Check the latest version of the auth SDK on Maven Central. • Add the following lines to the dependencies section of your build.gradle file in Android Studio: implementation("software.amazon.location:auth:1.1.0") implementation("org.maplibre.gl:android-sdk:11.5.2") implementation("com.squareup.okhttp3:okhttp:4.12.0") • For the standalone Maps, Places, and Routes SDKs, add the following lines: implementation("aws.sdk.kotlin:geomaps:1.3.65") implementation("aws.sdk.kotlin:geoplaces:1.3.65") implementation("aws.sdk.kotlin:georoutes:1.3.65") • For the consolidated Location SDK that includes Geofencing and Tracking, add the following line: implementation("aws.sdk.kotlin:location:1.3.65") Developer tutorials 828 Amazon Location Service Usage Import the following classes in your code: Developer Guide // For the standalone Maps, Places, and Routes SDKs import aws.sdk.kotlin.services.geomaps.GeoMapsClient import aws.sdk.kotlin.services.geoplaces.GeoPlacesClient import aws.sdk.kotlin.services.georoutes.GeoRoutesClient // For the consolidated Location SDK import aws.sdk.kotlin.services.location.LocationClient import software.amazon.location.auth.AuthHelper import software.amazon.location.auth.LocationCredentialsProvider import software.amazon.location.auth.AwsSignerInterceptor import org.maplibre.android.module.http.HttpRequestUtil import okhttp3.OkHttpClient You can create an AuthHelper and use it with the AWS Kotlin SDK: Example: Credential Provider with Identity Pool ID private suspend fun exampleCognitoLogin() { val authHelper = AuthHelper.withCognitoIdentityPool("MY-COGNITO-IDENTITY-POOL-ID", applicationContext) var geoMapsClient = GeoMapsClient(authHelper?.getGeoMapsClientConfig()) var geoPlacesClient = GeoPlacesClient(authHelper?.getGeoPlacesClientConfig()) var geoRoutesClient = GeoRoutesClient(authHelper?.getGeoRoutesClientConfig()) var locationClient = LocationClient(authHelper?.getLocationClientConfig()) } Example: Credential Provider with Custom Credential Provider private
Location Service Usage Import the following classes in your code: Developer Guide // For the standalone Maps, Places, and Routes SDKs import aws.sdk.kotlin.services.geomaps.GeoMapsClient import aws.sdk.kotlin.services.geoplaces.GeoPlacesClient import aws.sdk.kotlin.services.georoutes.GeoRoutesClient // For the consolidated Location SDK import aws.sdk.kotlin.services.location.LocationClient import software.amazon.location.auth.AuthHelper import software.amazon.location.auth.LocationCredentialsProvider import software.amazon.location.auth.AwsSignerInterceptor import org.maplibre.android.module.http.HttpRequestUtil import okhttp3.OkHttpClient You can create an AuthHelper and use it with the AWS Kotlin SDK: Example: Credential Provider with Identity Pool ID private suspend fun exampleCognitoLogin() { val authHelper = AuthHelper.withCognitoIdentityPool("MY-COGNITO-IDENTITY-POOL-ID", applicationContext) var geoMapsClient = GeoMapsClient(authHelper?.getGeoMapsClientConfig()) var geoPlacesClient = GeoPlacesClient(authHelper?.getGeoPlacesClientConfig()) var geoRoutesClient = GeoRoutesClient(authHelper?.getGeoRoutesClientConfig()) var locationClient = LocationClient(authHelper?.getLocationClientConfig()) } Example: Credential Provider with Custom Credential Provider private suspend fun exampleCustomCredentialLogin() { var authHelper = AuthHelper.withCredentialsProvider(MY-CUSTOM-CREDENTIAL-PROVIDER, "MY-AWS-REGION", applicationContext) var geoMapsClient = GeoMapsClient(authHelper?.getGeoMapsClientConfig()) var geoPlacesClient = GeoPlacesClient(authHelper?.getGeoPlacesClientConfig()) var geoRoutesClient = GeoRoutesClient(authHelper?.getGeoRoutesClientConfig()) Developer tutorials 829 Amazon Location Service Developer Guide var locationClient = LocationClient(authHelper?.getLocationClientConfig()) } Example: Credential Provider with API Key private suspend fun exampleApiKeyLogin() { var authHelper = AuthHelper.withApiKey("MY-API-KEY", "MY-AWS-REGION", applicationContext) var geoMapsClient = GeoMapsClient(authHelper?.getGeoMapsClientConfig()) var geoPlacesClient = GeoPlacesClient(authHelper?.getGeoPlacesClientConfig()) var geoRoutesClient = GeoRoutesClient(authHelper?.getGeoRoutesClientConfig()) var locationClient = LocationClient(authHelper?.getLocationClientConfig()) } You can use LocationCredentialsProvider to load the MapLibre map. Here is an example: HttpRequestUtil.setOkHttpClient( OkHttpClient.Builder() .addInterceptor( AwsSignerInterceptor( "geo", "MY-AWS-REGION", locationCredentialsProvider, applicationContext ) ) .build() ) Use the created clients to make calls to Amazon Location Service. Here is an example that searches for places near a specified latitude and longitude: val suggestRequest = SuggestRequest { biasPosition = listOf(-97.718833, 30.405423) maxResults = MAX_RESULT language = "PREFERRED-LANGUAGE" } val nearbyPlaces = geoPlacesClient.suggest(suggestRequest) Developer tutorials 830 Amazon Location Service Developer Guide Use Amazon Location MapLibre Geocoder GL plugin The Amazon Location MapLibre geocoder plugin is designed to make it easier for you to incorporate Amazon Location functionality into your JavaScript applications, when working with map rendering and geocoding using the maplibre-gl-geocoder library. Installation Install Amazon Location MapLibre geocoder plugin from NPM for usage with modules. 
Type this command: npm install @aws/amazon-location-for-maplibre-gl-geocoder You can also import HTML and CSS files for usage directly in the browser with a script: <script src="https://cdn.jsdelivr.net/npm/@aws/amazon-location-for-maplibre-gl- geocoder@2"></script> <link href="https://cdn.jsdelivr.net/npm/@aws/amazon-location-for-maplibre-gl-geocoder@2/ dist/amazon-location-for-mlg-styles.css" rel="stylesheet" /> Usage with module - standalone GeoPlaces SDK This example uses the AWS SDK for JavaScript V3 to get a GeoPlacesClient to provide to the library and AuthHelper for authenticating the GeoPlacesClient. It enables all APIs for the geocoder. // Import MapLibre GL JS import maplibregl from "maplibre-gl"; // Import from the AWS JavaScript SDK V3 import { GeoPlacesClient } from "@aws-sdk/client-geo-places"; // Import the utility functions import { withAPIKey } from "@aws/amazon-location-utilities-auth-helper"; // Import the AmazonLocationMaplibreGeocoder import { buildAmazonLocationMaplibreGeocoder, AmazonLocationMaplibreGeocoder, } from "@aws/amazon-location-for-maplibre-gl-geocoder"; const apiKey = "<API Key>"; const mapName = "Standard"; Developer tutorials 831 Amazon Location Service Developer Guide const region = "<Region>"; // region containing Amazon Location API Key // Create an authentication helper instance using an API key and region const authHelper = await withAPIKey(apiKey, region); const client = new GeoPlacesClient(authHelper.getClientConfig()); // Render the map const map = new maplibregl.Map({ container: "map", center: [-123.115898, 49.295868], zoom: 10, style: `https://maps.geo.${region}.amazonaws.com/maps/v2/styles/${mapStyle}/ descriptor?key=${apiKey}`, }); // Gets an instance of the AmazonLocationMaplibreGeocoder Object. const amazonLocationMaplibreGeocoder = buildAmazonLocationMaplibreGeocoder(client, { enableAll: true }); // Now we can add the Geocoder to the map. map.addControl(amazonLocationMaplibreGeocoder.getPlacesGeocoder()); Usage with a browser - standalone GeoPlaces SDK This example uses the Amazon Location client to make a request that authenticates using an API key. Note Some of these example use the Amazon Location GeoPlacesClient. This client is based on the AWS SDK for JavaScript V3 and allows for making calls to Amazon Location through a script referenced in an HTML file. 
Include the following in an HTML file: <!-- Import the Amazon Location For Maplibre Geocoder --> <script src="https://cdn.jsdelivr.net/npm/@aws/amazon-location-for-maplibre-gl- geocoder@2"></script> <link Developer tutorials 832 Amazon Location Service Developer Guide href="https://cdn.jsdelivr.net/npm/@aws/amazon-location-for-maplibre-gl-geocoder@2/ dist/amazon-location-for-mlg-styles.css" rel="stylesheet" /> <!-- Import the Amazon GeoPlacesClient --> <script src="https://cdn.jsdelivr.net/npm/@aws/amazon-location-client@1"></script> Include the following in a JavaScript file: const apiKey = "<API Key>"; const mapStyle = "Standard"; const region = "<Region>"; // region containing Amazon Location API key // Create an authentication helper instance using an API key and region const authHelper = await amazonLocationClient.withAPIKey(apiKey, region); const client = new amazonLocationClient.GeoPlacesClient(authHelper.getClientConfig()); // Render the map const map = new maplibregl.Map({ container: "map", center: [-123.115898, 49.295868], zoom: 10, style: `https://maps.geo.${region}.amazonaws.com/maps/v2/styles/${mapStyle}/ descriptor?key=${apiKey}`, }); // Initialize the AmazonLocationMaplibreGeocoder object const amazonLocationMaplibreGeocoderObject = amazonLocationMaplibreGeocoder.buildAmazonLocationMaplibreGeocoder( client, { enableAll: true }, ); // Use the AmazonLocationWithMaplibreGeocoder object to add a geocoder to the map. map.addControl(amazonLocationMaplibreGeocoderObject.getPlacesGeocoder()); Functions Listed below are the functions used in the Amazon Location MapLibre geocoder plugin: • buildAmazonLocationMaplibreGeocoder Developer tutorials 833 Amazon Location Service Developer Guide This class creates an instance of the AmazonLocationMaplibreGeocder, which is the entry point to the other all other calls. Using standalone GeoPlacesClient API calls (client is instance of GeoPlacesClient): const amazonLocationMaplibreGeocoder = buildAmazonLocationMaplibreGeocoder(client, { enableAll: true }); Using consolidated LocationClient API calls (client is instance of LocationClient): const amazonLocationMaplibreGeocoder = buildAmazonLocationMaplibreGeocoder(client, { enableAll: true, placesIndex: placeIndex, }); • getPlacesGeocoder Returns a ready-to-use IControl
// Use the AmazonLocationWithMaplibreGeocoder object to add a geocoder to the map. map.addControl(amazonLocationMaplibreGeocoderObject.getPlacesGeocoder()); Functions Listed below are the functions used in the Amazon Location MapLibre geocoder plugin: • buildAmazonLocationMaplibreGeocoder Developer tutorials 833 Amazon Location Service Developer Guide This class creates an instance of the AmazonLocationMaplibreGeocder, which is the entry point to the other all other calls. Using standalone GeoPlacesClient API calls (client is instance of GeoPlacesClient): const amazonLocationMaplibreGeocoder = buildAmazonLocationMaplibreGeocoder(client, { enableAll: true }); Using consolidated LocationClient API calls (client is instance of LocationClient): const amazonLocationMaplibreGeocoder = buildAmazonLocationMaplibreGeocoder(client, { enableAll: true, placesIndex: placeIndex, }); • getPlacesGeocoder Returns a ready-to-use IControl object that can be added directly to a map. const geocoder = getPlacesGeocoder(); // Initialize map see: <insert link to initializing a map instance here> let map = await initializeMap(); // Add the geocoder to the map. map.addControl(geocoder); How to use Tracking SDKs This topic provides information about how to use Tracking SDKs. iOS The Amazon Location mobile tracking SDK provides utilities which help easily authenticate, capture device positions, and send position updates to Amazon Location Trackers. The SDK supports local filtering of location updates with configurable update intervals. This reduces data costs and optimizes intermittent connectivity for your iOS applications. The iOS tracking SDK is available on GitHub: Amazon Location Mobile Tracking SDK for iOS. Developer tutorials 834 Amazon Location Service Developer Guide This section covers the following topics for the Amazon Location mobile tracking iOS SDK: Topics • Installation • Usage • Filters • iOS Mobile SDK tracking functions • Examples Installation Use the following procedure to install the mobile tracking SDK for iOS: 1. 2. 3. 4. In your Xcode project, go to File and select Add Package Dependencies. Type the following URL: https://github.com/aws-geospatial/amazon-location-mobile- tracking-sdk-ios/ into the search bar and press the enter key. Select the amazon-location-mobile-tracking-sdk-ios package and click on Add Package. Select the AmazonLocationiOSTrackingSDK package product and click on Add Package. Usage The following procedure shows you how to create an authentication helper using credentials from Amazon Cognito. 1. After installing the library, you need to add one or both of the descriptions into your info.plist file: Privacy - Location When In Use Usage Description Privacy - Location Always and When In Use Usage Description 2. Next, import the AuthHelper in your class: import AmazonLocationiOSAuthSDKimport AmazonLocationiOSTrackingSDK 3. Then you will create an AuthHelper object and use it with the AWS SDK, by creating an authentication helper using credentials from Amazon Cognito. 
Developer tutorials 835 Amazon Location Service Developer Guide let authHelper = AuthHelper() let locationCredentialsProvider = authHelper.authenticateWithCognitoUserPool(identityPoolId: "My-Cognito-Identity- Pool-Id", region: "My-region") //example: us-east-1 let locationTracker = LocationTracker(provider: locationCredentialsProvider, trackerName: "My-tracker-name") // Optionally you can set ClientConfig with your own values in either initialize or in a separate function // let trackerConfig = LocationTrackerConfig(locationFilters: [TimeLocationFilter(), DistanceLocationFilter()], trackingDistanceInterval: 30, trackingTimeInterval: 30, logLevel: .debug) // locationTracker = LocationTracker(provider: credentialsProvider, trackerName: "My-tracker-name",config: trackerConfig) // locationTracker.setConfig(config: trackerConfig) Filters The Amazon Location mobile tracking iOS SDK has three inbuilt location filters. • TimeLocationFilter: Filters the current location to be uploaded based on a defined time interval. • DistanceLocationFilter: Filters location updates based on a specified distance threshold. • AccuracyLocationFilter: Filters location updates by comparing the distance moved since the last update with the current location's accuracy. This example adds filters in the LocationTracker at the creation time: val config = LocationTrackerConfig( trackerName = "MY-TRACKER-NAME", logLevel = TrackingSdkLogLevel.DEBUG, accuracy = Priority.PRIORITY_HIGH_ACCURACY, latency = 1000, frequency = 5000, waitForAccurateLocation = false, Developer tutorials 836 Amazon Location Service Developer Guide minUpdateIntervalMillis = 5000, locationFilters = mutableListOf(TimeLocationFilter(), DistanceLocationFilter(), AccuracyLocationFilter()) ) locationTracker = LocationTracker( applicationContext, locationCredentialsProvider, config, ) This example enables and disables filter at runtime with LocationTracker: // To enable the filter locationTracker?.enableFilter(TimeLocationFilter()) // To disable the filter locationTracker?.disableFilter(TimeLocationFilter()) iOS Mobile SDK tracking functions The Amazon Location mobile tracking SDK for iOS includes the following functions: • Class: LocationTracker init(provider: LocationCredentialsProvider, trackerName: String, config: LocationTrackerConfig? = nil) This is an initializer function to create a LocationTracker object. It requires instances of LocationCredentialsProvider , trackerName and optionally an instance of LocationTrackingConfig. If the config is not provided it will be initialized with default values. • Class: LocationTracker setTrackerConfig(config: LocationTrackerConfig) This sets Tracker's config to take effect at any point after initialization of location tracker. • Class: LocationTracker getTrackerConfig() Developer tutorials 837 Amazon Location Service Developer Guide This gets the location tracking config to use or modify in your app. Returns: LocationTrackerConfig • Class: LocationTracker getDeviceId() Gets the location tracker's generated device Id. Returns: String? • Class: LocationTracker startTracking() Starts the process of accessing the user's location and sending it to the AWS tracker. • Class: LocationTracker resumeTracking() Resumes the process of accessing the user's location and sending it to the AWS tracker. • Class: LocationTracker stopTracking() Stops the process of tracking the user's location. • Class: LocationTracker startBackgroundTracking(mode: BackgroundTrackingMode) Starts the process of accessing the user's
location and sending it to the AWS tracker while the application is in the background. BackgroundTrackingMode has the following options:
• Active: This option doesn't automatically pause location updates.
• BatterySaving: This option automatically pauses location updates.
• None: This option disables background location updates altogether.

• Class: LocationTracker

resumeBackgroundTracking(mode: BackgroundTrackingMode)

Resumes the process of accessing the user's location and sending it to the AWS tracker while the application is in the background.

• Class: LocationTracker

stopBackgroundTracking()

Stops the process of accessing the user's location and sending it to the AWS tracker while the application is in the background.

• Class: LocationTracker

getTrackerDeviceLocation(nextToken: String?, startTime: Date? = nil, endTime: Date? = nil, completion: @escaping (Result<GetLocationResponse, Error>) -> Void)

Retrieves the uploaded tracking locations for the user's device between the start and end date and time.

Returns: Void

• Class: LocationTrackerConfig

init()

This initializes the LocationTrackerConfig with default values.

• Class: LocationTrackerConfig

init(locationFilters: [LocationFilter]? = nil, trackingDistanceInterval: Double? = nil, trackingTimeInterval: Double? = nil, trackingAccuracyLevel: Double? = nil, uploadFrequency: Double? = nil, desiredAccuracy: CLLocationAccuracy? = nil, activityType: CLActivityType? = nil, logLevel: LogLevel? = nil)

This initializes the LocationTrackerConfig with user-defined parameter values. If a parameter value is not provided, it will be set to a default value.

• Class: LocationFilter

shouldUpload(currentLocation: LocationEntity, previousLocation: LocationEntity?, trackerConfig: LocationTrackerConfig)

The LocationFilter is a protocol that users can implement for their custom filter implementation. A user would need to implement the shouldUpload function to compare the previous and current locations and return whether the current location should be uploaded.

Examples

This section details examples of using the Amazon Location Mobile Tracking SDK for iOS.

Note
Ensure that the necessary permissions are set in the info.plist file. These are the same permissions listed in the section called “Usage”.
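In addition to the built-in filters, you can conform to the LocationFilter protocol described above to supply your own upload logic. The following is a minimal, hypothetical sketch, not an official example: it assumes that a class conformance is sufficient and that shouldUpload returns a Bool, so check the SDK source for the exact protocol requirements before using it.

import AmazonLocationiOSTrackingSDK

// Hypothetical custom filter that uploads the first position and then every second update.
// LocationEntity, LocationTrackerConfig, and LocationFilter come from the tracking SDK;
// the Bool return type is assumed from the protocol description above.
class EveryOtherUpdateFilter: LocationFilter {
    private var updateCount = 0

    func shouldUpload(currentLocation: LocationEntity,
                      previousLocation: LocationEntity?,
                      trackerConfig: LocationTrackerConfig) -> Bool {
        updateCount += 1
        // Always upload the first position, then every second one.
        return previousLocation == nil || updateCount % 2 == 0
    }
}

// A custom filter can be combined with the built-in filters in LocationTrackerConfig
// and passed to the LocationTracker initializer shown in the Usage section.
let customConfig = LocationTrackerConfig(locationFilters: [EveryOtherUpdateFilter(), DistanceLocationFilter()])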
The following example demonstrates functionality for tracking device location and retrieving tracked locations: Privacy - Location When In Use Usage Description Privacy - Location Always and When In Use Usage Description Start tracking the location: do { try locationTracker.startTracking() } catch TrackingLocationError.permissionDenied { // Handle permissionDenied by showing the alert message or opening the app settings } Resume tracking the location: do { try locationTracker.resumeTracking() } catch TrackingLocationError.permissionDenied { // Handle permissionDenied by showing the alert message or opening the app settings } Developer tutorials 840 Amazon Location Service Stop tracking the location: locationTracker.stopTracking() Start background tracking: do { Developer Guide locationTracker.startBackgroundTracking(mode: .Active) // .Active, .BatterySaving, .None } catch TrackingLocationError.permissionDenied { // Handle permissionDenied by showing the alert message or opening the app settings } Resume background tracking: do { locationTracker.resumeBackgroundTracking(mode: .Active) } catch TrackingLocationError.permissionDenied { // Handle permissionDenied by showing the alert message or opening the app settings } To stop background tracking: locationTracker.stopBackgroundTracking() Retrieve device's tracked locations from the tracker: func getTrackingPoints(nextToken: String? = nil) { let startTime: Date = Date().addingTimeInterval(-86400) // Yesterday's day date and time let endTime: Date = Date() locationTracker.getTrackerDeviceLocation(nextToken: nextToken, startTime: startTime, endTime: endTime, completion: { [weak self] result in switch result { case .success(let response): let positions = response.devicePositions // You can draw positions on map or use it further as per your requirement Developer tutorials 841 Amazon Location Service Developer Guide // If nextToken is available, recursively call to get more data if let nextToken = response.nextToken { self?.getTrackingPoints(nextToken: nextToken) } case .failure(let error): print(error) } }) } Android Mobile Tracking SDK The Amazon Location mobile tracking SDK provides utilities which help easily authenticate, capture device positions, and send position updates to Amazon Location Trackers. The SDK supports local filtering of location updates with configurable update intervals. This reduces data costs and optimizes intermittent connectivity for your Android applications. The Android tracking SDK is available on GitHub: Amazon Location Mobile Tracking SDK for Android. Additionally, both the mobile authentication SDK and the AWS SDK are available on the AWS Maven repository. The Android tracking SDK is designed to work with the general AWS SDK. This section covers the following topics for the Amazon Location mobile tracking Android SDK: Topics • Installation • Usage • Filters • Android Mobile SDK tracking functions • Examples Installation To install the SDK, add the following lines to the dependencies section of
your build.gradle file in Android Studio:

implementation("software.amazon.location:tracking:0.0.1")
implementation("software.amazon.location:auth:0.0.1")
implementation("com.amazonaws:aws-android-sdk-location:2.72.0")

Usage

This procedure shows you how to use the SDK to authenticate and create the LocationTracker object:

Note
This procedure assumes you have imported the library mentioned in the section called “Installation”.

1. Import the following classes in your code:

import software.amazon.location.tracking.LocationTracker
import software.amazon.location.tracking.config.LocationTrackerConfig
import software.amazon.location.tracking.util.TrackingSdkLogLevel
import com.amazonaws.services.geo.AmazonLocationClient
import software.amazon.location.auth.AuthHelper
import software.amazon.location.auth.LocationCredentialsProvider

2. Next, create an AuthHelper, since the LocationCredentialsProvider parameter is required for creating a LocationTracker object:

// Create an authentication helper using credentials from Amazon Cognito
val authHelper = AuthHelper(applicationContext)
val locationCredentialsProvider : LocationCredentialsProvider = authHelper.authenticateWithCognitoIdentityPool("My-Cognito-Identity-Pool-Id")

3. Now, use the LocationCredentialsProvider and LocationTrackerConfig to create a LocationTracker object:

val config = LocationTrackerConfig(
    trackerName = "MY-TRACKER-NAME",
    logLevel = TrackingSdkLogLevel.DEBUG,
    accuracy = Priority.PRIORITY_HIGH_ACCURACY,
    latency = 1000,
    frequency = 5000,
    waitForAccurateLocation = false,
    minUpdateIntervalMillis = 5000,
)
locationTracker = LocationTracker(
    applicationContext,
    locationCredentialsProvider,
    config,
)

Filters

The Amazon Location mobile tracking Android SDK has three inbuilt location filters.
• TimeLocationFilter: Filters the current location to be uploaded based on a defined time interval.
• DistanceLocationFilter: Filters location updates based on a specified distance threshold.
• AccuracyLocationFilter: Filters location updates by comparing the distance moved since the last update with the current location's accuracy.
This example adds filters in the LocationTracker at the creation time: val config = LocationTrackerConfig( trackerName = "MY-TRACKER-NAME", logLevel = TrackingSdkLogLevel.DEBUG, accuracy = Priority.PRIORITY_HIGH_ACCURACY, latency = 1000, frequency = 5000, waitForAccurateLocation = false, minUpdateIntervalMillis = 5000, locationFilters = mutableListOf(TimeLocationFilter(), DistanceLocationFilter(), AccuracyLocationFilter()) ) locationTracker = LocationTracker( applicationContext, locationCredentialsProvider, config, ) This example enables and disables filter at runtime with LocationTracker: // To enable the filter Developer tutorials 844 Amazon Location Service Developer Guide locationTracker?.enableFilter(TimeLocationFilter()) // To disable the filter locationTracker?.disableFilter(TimeLocationFilter()) Android Mobile SDK tracking functions The Amazon Location mobile tracking SDK for Android includes the following functions: • Class: LocationTracker constructor(context: Context,locationCredentialsProvider: LocationCredentialsProvider,trackerName: String), or constructor(context: Context,locationCredentialsProvider: LocationCredentialsProvider,clientConfig: LocationTrackerConfig) This is an initializer function to create a LocationTracker object. It requires instances of LocationCredentialsProvider , trackerName and optionally an instance of LocationTrackingConfig. If the config is not provided it will be initialized with default values. • Class: LocationTracker start(locationTrackingCallback: LocationTrackingCallback) Starts the process of accessing the user's location and sending it to an Amazon Location tracker. • Class: LocationTracker isTrackingInForeground() Checks if location tracking is currently in progress. • Class: LocationTracker stop() Stops the process of tracking the user's location. • Class: LocationTracker startTracking() Starts the process of accessing the user's location and sending it to the AWS tracker. Developer tutorials 845 Amazon Location Service • Class: LocationTracker Developer Guide startBackground(mode: BackgroundTrackingMode, serviceCallback: ServiceCallback) Starts the process of accessing the user's location and sending it to the AWS tracker while the application is in the background. BackgroundTrackingMode has the following options: • ACTIVE_TRACKING: This option actively tracks a user's location updates. • BATTERY_SAVER_TRACKING: This option tracks user's location updates every 15 minutes. • Class: LocationTracker stopBackgroundService() Stops the process of accessing the user's location and sending it to the AWS tracker while the application is in the background. • Class: LocationTracker getTrackerDeviceLocation() Retrieves the device location from Amazon Location services. • Class: LocationTracker getDeviceLocation(locationTrackingCallback: LocationTrackingCallback?) Retrieves the current device location from the fused location provider client and uploads it to Amazon Location tracker. • Class: LocationTracker uploadLocationUpdates(locationTrackingCallback: LocationTrackingCallback?) Uploads the device location to Amazon Location services after filtering based on the configured location filters. • Class: LocationTracker enableFilter(filter: LocationFilter) Enables a particular location filter. • Class: LocationTracker Developer tutorials 846 Amazon Location Service Developer Guide checkFilterIsExistsAndUpdateValue(filter: LocationFilter) Disable particular location filter. 
• Class: LocationTrackerConfig LocationTrackerConfig( // Required var trackerName: String, // Optional var locationFilters: MutableList = mutableListOf( TimeLocationFilter(), DistanceLocationFilter(), ), var logLevel: TrackingSdkLogLevel = TrackingSdkLogLevel.DEBUG, var accuracy: Int = Priority.PRIORITY_HIGH_ACCURACY, var latency: Long = 1000, var frequency: Long = 1500, var waitForAccurateLocation: Boolean = false, var minUpdateIntervalMillis: Long = 1000, var persistentNotificationConfig: NotificationConfig = NotificationConfig()) This initializes the LocationTrackerConfig with user-defined parameter values. If a parameter value is not provided, it will be set to a default value. • Class: LocationFilter shouldUpload(currentLocation: LocationEntry, previousLocation: LocationEntry?): Boolean The LocationFilter is a protocol that users can implement for their custom filter implementation. You need to
implement the shouldUpload function to compare the previous and current locations and return whether the current location should be uploaded.

Examples

The following code sample shows the mobile tracking SDK functionality. This example uses the LocationTracker to start and stop tracking in the background:

// For starting the location tracking
locationTracker?.startBackground(
    BackgroundTrackingMode.ACTIVE_TRACKING,
    object : ServiceCallback {
        override fun serviceStopped() {
            if (selectedTrackingMode == BackgroundTrackingMode.ACTIVE_TRACKING) {
                isLocationTrackingBackgroundActive = false
            } else {
                isLocationTrackingBatteryOptimizeActive = false
            }
        }
    },
)

// For stopping the location tracking
locationTracker?.stopBackgroundService()

Use MapLibre tools and related libraries with Amazon Location

MapLibre is primarily a rendering engine for displaying maps in a web or mobile application. However, it also includes support for plug-ins and provides functionality for working with other aspects of Amazon Location. The following describes tools that you can use, based on the area or location that you want to work with.

Note
To use any aspect of Amazon Location, install the AWS SDK for the language that you want to use.

• Maps

To display maps in your application, you need a map rendering engine that will use the data provided by Amazon Location, and draw to the screen. Map rendering engines also provide functionality to pan and zoom the map, or to add markers or pushpins and other annotations to the map.

Amazon Location Service recommends rendering maps using the MapLibre rendering engine. MapLibre GL JS is an engine for displaying maps in JavaScript, while MapLibre Native provides maps for either iOS or Android. MapLibre also has a plug-in ecosystem to extend the core functionality. For more information, visit https://maplibre.org/maplibre-gl-js-docs/plugins/.

• Places search

To make creating a search user interface simpler, you can use the MapLibre geocoder for web (Android applications can use the Android Places plug-in). Use the Amazon Location for MapLibre geocoder library to simplify the process of using Amazon Location with amazon-location-for-maplibre-gl-geocoder in JavaScript applications. For more information, see the section called “Use MapLibre Geocoder GL plugin”.

• Routes

• Geofences and Trackers

MapLibre doesn't have any specific rendering or tools for geofences and tracking, but you can use the rendering functionality and plug-ins to show the geofences and tracked devices on the map.
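For example, a geofence polygon and the latest tracked device position can be drawn with standard MapLibre GL JS sources and layers. The following is a minimal sketch, assuming a maplibregl.Map instance named map has already been created; the GeoJSON coordinates, source IDs, and colors are illustrative only.

// Runs after the map style has loaded; "map" is an existing maplibregl.Map instance.
map.on('load', () => {
  // Draw a geofence as a filled polygon from GeoJSON (coordinates are illustrative).
  map.addSource('geofence', {
    type: 'geojson',
    data: {
      type: 'Feature',
      geometry: {
        type: 'Polygon',
        coordinates: [[
          [-123.1194, 49.2797], [-123.1104, 49.2797],
          [-123.1104, 49.2864], [-123.1194, 49.2864],
          [-123.1194, 49.2797]
        ]]
      }
    }
  });
  map.addLayer({
    id: 'geofence-fill',
    type: 'fill',
    source: 'geofence',
    paint: { 'fill-color': '#0080ff', 'fill-opacity': 0.3 }
  });

  // Draw a tracked device position (for example, retrieved with GetDevicePositionHistory) as a point.
  map.addSource('device', {
    type: 'geojson',
    data: {
      type: 'Feature',
      geometry: { type: 'Point', coordinates: [-123.1158, 49.2824] }
    }
  });
  map.addLayer({
    id: 'device-point',
    type: 'circle',
    source: 'device',
    paint: { 'circle-radius': 6, 'circle-color': '#d00' }
  });
});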
The devices being tracked can use MQTT or manually send updates to Amazon Location Service. Geofence events can be responded to using AWS Lambda. Many open source libraries are available to provide additional functionality for Amazon Location Service, for example Turf which provide spatial analysis functionality. Many libraries use the open standard GeoJSON formatted data. Amazon Location Service provides a library to convert responses into GeoJSON for use in JavaScript applications. For more information, see the section called “SDKs and frameworks”. SDKs by language SDK Versions We recommend that you use the most recent build of the AWS SDK, and any other SDKs, that you use in your projects, and to keep the SDKs up to date. The AWS SDK provides you with the latest features and functionality, and also security updates. To find the latest build of the AWS SDK for JavaScript, for example, see the browser installation topic in the AWS SDK for JavaScript documentation. The following tables provide information about AWS SDKs and Map Rendering Framework versions for languages and frameworks, by application type: web, mobile, or backend application. SDKs by language 849 Amazon Location Service Web frontend Developer Guide The following AWS SDKs and Map Rendering Framework versions are available for web frontend application development. Language / Framework AWS SDK Map Rendering Framework Fully supported JavaScript ReactJS TypeScript Partially supported Flutter Node.js PHP https://aws.amazon.com/sd k-for-javascript/ https://maplibre.org/proj ects/maplibre-gl-js/ https://aws.amazon.com/sd k-for-javascript/ https://github.com/maplib re/maplibre-react-native https://aws.amazon.com/sd k-for-javascript/ https://maplibre.org/proj ects/maplibre-gl-js/ https://docs.amplify.aws/ start/q/integration/flutter/ https://github.com/maplib re/flutter-maplibre-gl Flutter is not yet fully The MapLibre Flutter library supported by AWS, but is considered experimental. limited support is offered via Amplify. https://aws.amazon.com/sd k-for-javascript/ https://github.com/maplib re/maplibre-native https://www.npmjs.com/pac kage/@maplibre/maplibre-g l-native https://aws.amazon.com/sd k-for-php/ There is no MapLibre support for PHP. SDKs by language 850 Amazon Location Service Mobile frontend Developer Guide The following AWS SDKs and Map Rendering Framework versions are available for mobile frontend application development. Language / Framework AWS SDK Map Rendering Framework Fully supported Java Kotlin https://aws.amazon.com/sd k-for-java/
Partially supported Flutter Node.js PHP https://aws.amazon.com/sd k-for-javascript/ https://maplibre.org/proj ects/maplibre-gl-js/ https://aws.amazon.com/sd k-for-javascript/ https://github.com/maplib re/maplibre-react-native https://aws.amazon.com/sd k-for-javascript/ https://maplibre.org/proj ects/maplibre-gl-js/ https://docs.amplify.aws/ start/q/integration/flutter/ https://github.com/maplib re/flutter-maplibre-gl Flutter is not yet fully The MapLibre Flutter library supported by AWS, but is considered experimental. limited support is offered via Amplify. https://aws.amazon.com/sd k-for-javascript/ https://github.com/maplib re/maplibre-native https://www.npmjs.com/pac kage/@maplibre/maplibre-g l-native https://aws.amazon.com/sd k-for-php/ There is no MapLibre support for PHP. SDKs by language 850 Amazon Location Service Mobile frontend Developer Guide The following AWS SDKs and Map Rendering Framework versions are available for mobile frontend application development. Language / Framework AWS SDK Map Rendering Framework Fully supported Java Kotlin https://aws.amazon.com/sd k-for-java/ https://maplibre.org/proj ects/maplibre-native/ https://aws.amazon.com/sd k-for-kotlin/ https://maplibre.org/proj ects/maplibre-native/ Amazon Location Service Requires custom bindings, as Mobile Authentication MapLibre is Java-based. SDK for Android: https:// github.com/aws-geospatial/ amazon-location-mobile-aut h-sdk-android Amazon Location Service Mobile Tracking SDK for Android: https://github.co m/aws-geospatial/amazon-l ocation-mobile-tracking-sdk -android ObjectiveC ReactNative Swift https://github.com/aws-am plify/aws-sdk-ios https://maplibre.org/proj ects/maplibre-native/ https://aws.amazon.com/sd k-for-javascript/ https://github.com/maplib re/maplibre-react-native https://aws.amazon.com/sd k-for-swift/ https://maplibre.org/proj ects/maplibre-native/ Amazon Location Service Mobile Authentication SDK SDKs by language 851 Amazon Location Service Developer Guide Language / Framework AWS SDK Map Rendering Framework for iOS: https://github.com/ aws-geospatial/amazon-l ocation-mobile-auth-sdk-ios Amazon Location Service Mobile Tracking SDK for iOS: https://github.com/aws-ge ospatial/amazon-location-m obile-tracking-sdk-ios Partially supported Flutter https://docs.amplify.aws/ start/q/integration/flutter/ https://github.com/maplib re/flutter-maplibre-gl Flutter is not yet fully The MapLibre Flutter library supported by AWS, but is considered experimental. limited support is offered via Amplify. Backend application The following AWS SDKs are available for backend application development. Map Rendering Frameworks are not listed here, because map rendering is not typically needed for backend applications. Language .NET C++ Go Java AWS SDK https://aws.amazon.com/sdk-for-net/ https://aws.amazon.com/sdk-for-cpp/ https://aws.amazon.com/sdk-for-go/ https://aws.amazon.com/sdk-for-java/ JavaScript https://aws.amazon.com/sdk-for-javascript/ SDKs by language 852 Amazon Location Service Developer Guide Language Node.js TypeScript Kotlin PHP Python Ruby Rust AWS SDK https://aws.amazon.com/sdk-for-javascript/ https://aws.amazon.com/sdk-for-javascript/ https://aws.amazon.com/sdk-for-kotlin/ https://aws.amazon.com/sdk-for-php/ https://aws.amazon.com/sdk-for-python/ https://aws.amazon.com/sdk-for-ruby/ https://aws.amazon.com/sdk-for-rust/ The AWS SDK for Rust is in developer preview. Map Rendering SDK by language We recommend rendering Amazon Location Service maps using the MapLibre rendering engine. 
MapLibre is an engine for displaying maps in web or mobile applications. MapLibre also has a plugin model and supports user interfaces for searching and routes in some languages and platforms. To learn more about using MapLibre and the functionality it provides, see the section called “Use MapLibre tools” and the section called “Dynamic maps”. The following tables provide information about Map Rendering SDKs versions for languages and frameworks, by application type: web or mobile application. Web frontend The following Map Rendering SDKs are available for web frontend application development. Map Rendering SDK by language 853 Amazon Location Service Developer Guide Map Rendering Framework Language / Framework Fully supported JavaScript https://maplibre.org/projects/maplibre-gl-js/ ReactJS https://github.com/maplibre/maplibre-react-native TypeScript https://maplibre.org/projects/maplibre-gl-js/ Partially supported Flutter https://github.com/maplibre/flutter-maplibre-gl The MapLibre Flutter library is considered experimental. Node.js There is no MapLibre support for Node.js. PHP There is no MapLibre support for PHP. Mobile frontend The following Map Rendering SDKs are available for mobile frontend application development. Language / Framework Fully supported Java Kotlin Map Rendering Framework https://maplibre.org/projects/maplibre-native/ https://maplibre.org/projects/maplibre-native/ Requires custom bindings, as MapLibre is Java-based. ObjectiveC https://maplibre.org/projects/maplibre-native/ ReactNative https://github.com/maplibre/maplibre-react-native Map Rendering SDK by language 854 Amazon Location Service Language / Framework Map Rendering Framework Developer Guide Swift https://maplibre.org/projects/maplibre-native/ Partially supported Flutter https://github.com/maplibre/flutter-maplibre-gl The MapLibre Flutter library is considered experimental. Amazon Location Service API and CLI Amazon Location Service provides API and CLI operations access to the location functionality. See the lists below for more information. 
Amazon Location Service API This includes the following APIs: • Places • Routes • Authentication • Maps • Geofences • Trackers • Tags Amazon Location Service CLI This includes the following CLIs: AWS CLI Operations for Amazon Location Service Amazon Location Service provides AWS CLI (Command-line interface) operations to access location functionality, including the following APIs: API and CLI reference 855 Developer Guide Amazon Location Service Places • autocomplete • geocode • get-place • reverse-geocode • search-nearby • search-text • Learn More Routes • calculate-isolines • calculate-route-matrix • calculate-routes • optimize-waypoints • snap-to-roads • Learn More Authentication • create-key • delete-key • describe-key • list-keys • update-key • Learn More Maps • get-glyphs • get-sprites • get-static-map • get-style-descriptor • get-tile • Learn More CLI 856 Developer Guide Amazon Location Service Geofences • batch-delete-geofence • batch-evaluate-geofences • batch-put-geofence • forecast-geofence-events • create-geofence-collection • delete-geofence-collection • describe-geofence-collection • list-geofence-collections • update-geofence-collection • get-geofence • list-geofences • put-geofence • Learn More Trackers • batch-get-device-position • batch-update-device-position • batch-delete-device-position-history • get-device-position • get-device-position-history • associate-tracker-consumer • disassociate-tracker-consumer • create-tracker • delete-tracker • describe-tracker • list-trackers • update-tracker • list-tracker-consumers • verify-device-position • Learn More CLI 857 Amazon Location Service Tags • list-tags-for-resource • tag-resource • untag-resource • Learn More Developer Guide Examples and Learning Resources This topic provides links and details about our available demos and sample projects. These resources are designed to help you quickly understand and implement key features of our tools and APIs. Additionally, you’ll find links to GitHub repositories containing
get-geofence • list-geofences • put-geofence • Learn More Trackers • batch-get-device-position • batch-update-device-position • batch-delete-device-position-history • get-device-position • get-device-position-history • associate-tracker-consumer • disassociate-tracker-consumer • create-tracker • delete-tracker • describe-tracker • list-trackers • update-tracker • list-tracker-consumers • verify-device-position • Learn More CLI 857 Amazon Location Service Tags • list-tags-for-resource • tag-resource • untag-resource • Learn More Developer Guide Examples and Learning Resources This topic provides links and details about our available demos and sample projects. These resources are designed to help you quickly understand and implement key features of our tools and APIs. Additionally, you’ll find links to GitHub repositories containing source code, and example solutions. Demos • Web • Visit: https://location.aws.com/demo • Explore source code: https://github.com/aws-geospatial/amazon-location-features-demo-web • Android • Install: https://play.google.com/store/apps/details? id=com.aws.amazonlocation&trk=devguide • Explore source code: https://github.com/aws-geospatial/amazon-location-features-demo- android • iOS • Do the following to use our app: • Install TestFlight • Join the Amazon Location beta • Explore source code: https://github.com/aws-geospatial/amazon-location-features-demo-ios Samples 1. Visit: https://location.aws.com/samples. 2. Explore source code: Examples and Learning Resources 858 Amazon Location Service Developer Guide • JS: https://github.com/aws-geospatial/amazon-location-samples-js/ • React: https://github.com/aws-geospatial/amazon-location-samples-react • Swift: https://github.com/aws-geospatial/amazon-location-samples-ios • Android: https://github.com/aws-geospatial/amazon-location-samples-android GitHub See https://github.com/aws-geospatial/. GitHub 859 Amazon Location Service Developer Guide AWS integration for Amazon Location Service Amazon Location Service is integrated with various AWS services for efficient authentication, monitoring, management and development. Monitor • Amazon CloudWatch – View metrics on service usage and health, including requests, latency, faults, and logs. For more information, see the section called “Monitor with Amazon CloudWatch”. • AWS CloudTrail – Log and monitor your API calls, which include actions taken by a user, role or an AWS service. For more information, see the section called “Monitor and log with AWS CloudTrail”. Manage • AWS CloudFormation – Amazon Location is integrated with AWS CloudFormation, a service that helps you to model and set up your AWS resources so that you can spend less time creating and managing your resources and infrastructure. For more information, see the section called “Create resources with AWS CloudFormation”. • Service Quotas – Use the Service Quotas console and AWS CLI to request changes to your adjustable quotas. For more information, see the section called “Manage quotas”. • Tags – Use resource tagging in Amazon Location to create tags to categorize your resources by purpose, owner, environment, or criteria. Tagging your resources helps you manage, identify, organize, search, and filter your resources. For more information, see the section called “Manage resources with Tags”. Authenticate • Amazon Cognito – You can use Amazon Cognito authentication as an alternative to directly using AWS Identity and Access Management (IAM) with both frontend SDKs and direct HTTPS requests. 
For more information, see the section called “Use Amazon Cognito”. • IAM – AWS Identity and Access Management (IAM) is an AWS service that helps an administrator securely control access to AWS resources. IAM administrators control who can be authenticated (signed in) and authorized (have permissions) to use Amazon Location Service resources. For more information, see the section called “Use IAM”. 860 Amazon Location Service Value added Developer Guide • Amazon EventBridge – Enable an event-driven application architecture so you can use AWS Lambda functions to activate other parts of your application and work flows. For more information, see the section called “React to events with EventBridge”. • AWS IoT – The AWS IoT Core rules engine stores queries about your devices' message topics and enables you to define actions for sending messages to other AWS services, such as Amazon Location Service. Devices that are aware of their location as coordinates can have their locations forwarded to Amazon Location through the rules engine. For more information, see the section called “Track using AWS IoT and MQTT”. Developer tool • SDKs – Amazon Location Service offers a variety of tools for developers to build location- enabled applications. These include the standard AWS SDKs, mobile and web SDKs. For more information, see the section called “SDKs and frameworks”. • AWS CLI – The AWS Command Line Interface (AWS CLI) is an open source tool that enables you to interact with AWS services using commands in your command-line shell. With minimal configuration. For more information, see AWS Command Line Interface or learn more about AWS CLI. • Sample code – Sample code uses AWS SDKs, mobile and web SDKs, MapLibre to demonstrate how you can use Amazon Location. For more information, see samples. • Amazon Location Service console – Use the Amazon Location console to learn about APIs, resources, and to get started with a visual and interactive learning tool. For more information, see the Amazon Location Service console. Cost and billing • AWS Billing and Cost Management – Service provides helps to you pay your bills and optimize your costs. Amazon Web Services bills your account for usage, which ensures that you pay only for what you use.
For more information, see the section called “Pricing model” or the section called “Manage billing and costs”.

Resource Management

Resource management provides tools and processes to manage quotas, organize resources with tags, control costs, and automate resource creation using AWS CloudFormation. These capabilities enable you to efficiently allocate, monitor, and manage your resources within Amazon Location.

Use these tools to maintain operational efficiency by setting service limits, tagging resources for better organization, monitoring your expenses, and using infrastructure as code with CloudFormation to create and manage resources programmatically.

Topics
• Manage quotas with Service Quotas
• Manage resources with Tags
• Manage billing and costs with AWS Billing and Cost Management
• Create resources with AWS CloudFormation

Manage quotas with Service Quotas

Note
If you require a higher quota, you can use the Service Quotas console to request quota increases for adjustable quotas. When requesting a quota increase, select the Region you require the quota increase in, since most quotas are specific to the AWS Region.

The Service Quotas console allows you to request quota increases or decreases for adjustable quotas. Service quotas are the maximum number of API calls or resources you can have per AWS account and AWS Region. Amazon Location Service denies additional requests that exceed the service quota. Rate limits (quotas that start with Rate of...) are the maximum number of requests per second, with a burst rate of 80 percent of the limit within any part of the second, defined for each API operation. Amazon Location Service throttles requests that exceed the operation's rate limit.

Managing your Amazon Location service quotas

Amazon Location Service is integrated with Service Quotas, an AWS service that enables you to view and manage your quotas from a central location. For more information, see What Is Service Quotas? in the Service Quotas User Guide. Service Quotas makes it easy to look up the value of your Amazon Location service quotas.

AWS Management Console

1. Open the Service Quotas console.
2. In the navigation pane, choose AWS services.
3. From the AWS services list, search for and select Amazon Location.
4. In the Service quotas list, you can see the service quota name, applied value (if it is available), AWS default quota, and whether the quota value is adjustable.
5. To view additional information about a service quota, such as the description, choose the quota name.
6. (Optional) To request a quota increase, select the quota that you want to increase, select Request quota increase, enter or select the required information, and select Request.

To work more with service quotas using the console, see the Service Quotas User Guide.
To request a quota increase, see Requesting a quota increase in the Service Quotas User Guide. AWS CLI Run the following command to view the default Amazon Location quotas. aws service-quotas list-aws-default-service-quotas \ --query 'Quotas[*]. {Adjustable:Adjustable,Name:QuotaName,Value:Value,Code:QuotaCode}' \ --service-code geo \ --output table To work more with service quotas using the AWS CLI, see the Service Quotas section in the AWS CLI Command Reference. To request a quota increase, see request-service-quota- increase in the AWS CLI Command Reference. FAQ What are the default quotas for Amazon Location Service resources? Amazon Location Service sets default quotas for APIs to help manage service capacity, which can be viewed in the AWS Service Quotas Management Console. You can find links to all of these in the the section called “Monitoring your Amazon Location service quotas” section below. How can I request an increase in quotas for Amazon Location Service? Manage quotas 863 Amazon Location Service Developer Guide You can request an increase in Amazon Location Service quotas through the self-service console, for up to 2x the default limit for each API. For quota limits exceeding 2x the default limit, request in the self service console and it will submit a support ticket. Alternately, you can connect your premium support team Are there additional costs associated with requesting higher quotas for Amazon Location Service? There are no direct charges for quota increase requests, but higher usage levels may lead to increased service costs based on the additional resources consumed. Monitoring your
quotas 863 Amazon Location Service Developer Guide You can request an increase in Amazon Location Service quotas through the self-service console, for up to 2x the default limit for each API. For quota limits exceeding 2x the default limit, request in the self service console and it will submit a support ticket. Alternately, you can connect your premium support team Are there additional costs associated with requesting higher quotas for Amazon Location Service? There are no direct charges for quota increase requests, but higher usage levels may lead to increased service costs based on the additional resources consumed. Monitoring your Amazon Location service quotas You can monitor your usage against your quotas with Amazon CloudWatch. For more information, see the section called “Monitor with Amazon CloudWatch”. Name Default Description Adjustabl e API Key resources per account Each supported Region: 500 No Geofence Collection resources per account Each supported Region: 1,500 Yes Geofences per Geofence Collection Each supported Region: 50,000 No The maximum number of API key resources (active or expired) that you can have per account. The maximum number of Geofence Collection resources that you can create per account. The maximum number of Geofences that you can create per Geofence Collection. Map resources per account Each supported Region: 40 Yes The maximum number of Map resources that you can create per account. Place Index resources per account Each supported Region: 40 Yes The maximum number of Place Index resources Manage quotas 864 Amazon Location Service Developer Guide Name Default Description Adjustabl e that you can create per account. Rate of AssociateTrackerConsumer API requests Each supported Region: 10 per Yes The maximum number of AssociateTrackerCo second Rate of BatchDeleteDevicePositionHi story API requests Each supported Region: 50 per Yes second nsumer requests that you can make per second. Additional requests are throttled. The maximum number of BatchDeleteDeviceP ositionHistory requests that you can make per second. Additional requests are throttled. Rate of BatchDeleteGeofence API requests Each supported Region: 50 per Yes The maximum number of BatchDeleteGeofenc second Rate of BatchEvaluateGeofences API requests Each supported Region: 50 per second Yes e requests that you can make per second. Additional requests are throttled. The maximum number of BatchEvaluateGeofe nces requests that you can make per second. Additional requests are throttled. Manage quotas 865 Amazon Location Service Developer Guide Name Default Description Adjustabl e Rate of BatchGetDevicePosition API requests Each supported Region: 50 per Yes second The maximum number of BatchGetDevicePosi tion requests that you can make per second. Additional requests are throttled. Rate of BatchPutGeofence API requests Each supported Region: 50 per Yes The maximum number of BatchPutGeofence second requests that you can make per second. Additional requests are throttled. Rate of BatchUpdateDevicePosition API requests Each supported Region: 50 per Yes The maximum number of BatchUpdateDeviceP second osition requests that you can make per second. Additional requests are throttled. Rate of CalculateRoute API requests Each supported Region: 10 per Yes The maximum number of CalculateRoute requests second Rate of CalculateRouteMatrix API requests Each supported Region: 5 per second Yes that you can make per second. Additional requests are throttled. 
The maximum number of CalculateRouteMatr ix requests that you can make per second. Additional requests are throttled. Manage quotas 866 Amazon Location Service Developer Guide Name Default Description Adjustabl e Rate of CreateGeofenceCollection API requests Each supported Region: 10 per Yes The maximum number of CreateGeofenceColl Rate of CreateKey API requests Rate of CreateMap API requests second Each supported Region: 10 per second Each supported Region: 10 per second Yes Yes ection requests that you can make per second. Additional requests are throttled. The maximum number of CreateKey requests that you can make per second. Additional requests are throttled. The maximum number of CreateMap requests that you can make per second. Additional requests are throttled. Rate of CreatePlaceIndex API requests Each supported Region: 10 per Yes The maximum number of CreatePlaceIndex second Rate of CreateRouteCalculator API requests Each supported Region: 10 per second Yes requests that you can make per second. Additional requests are throttled. The maximum number of CreateRouteCalcula tor requests that you can make per second. Additional requests are throttled. Manage quotas 867 Amazon Location Service Developer Guide Name Default Description Adjustabl e Rate of CreateTracker API requests Each supported Region: 10 per Yes The maximum number of CreateTracker requests second that you can make per second. Additional requests are throttled. Rate of DeleteGeofenceCollection API requests Each supported Region: 10 per Yes The maximum number of DeleteGeofenceColl Rate of DeleteKey API requests Rate of DeleteMap API requests Rate of DeletePlaceIndex API requests second Each supported Region: 10 per second Each supported Region: 10 per second Each supported Region: 10 per second Yes Yes Yes ection requests that you can make per second. Additional requests are throttled. The maximum number of DeleteKey requests that you
of CreateTracker API requests Each supported Region: 10 per Yes The maximum number of CreateTracker requests second that you can make per second. Additional requests are throttled. Rate of DeleteGeofenceCollection API requests Each supported Region: 10 per Yes The maximum number of DeleteGeofenceColl Rate of DeleteKey API requests Rate of DeleteMap API requests Rate of DeletePlaceIndex API requests second Each supported Region: 10 per second Each supported Region: 10 per second Each supported Region: 10 per second Yes Yes Yes ection requests that you can make per second. Additional requests are throttled. The maximum number of DeleteKey requests that you can make per second. Additional requests are throttled. The maximum number of DeleteMap requests that you can make per second. Additional requests are throttled. The maximum number of DeletePlaceIndex requests that you can make per second. Additional requests are throttled. Manage quotas 868 Amazon Location Service Developer Guide Name Default Description Adjustabl e Rate of DeleteRouteCalculator API requests Each supported Region: 10 per Yes The maximum number of DeleteRouteCalcula second tor requests that you can make per second. Additional requests are throttled. Rate of DeleteTracker API requests Each supported Region: 10 per Yes The maximum number of DeleteTracker requests second that you can make per second. Additional requests are throttled. Rate of DescribeGeofenceCollection API requests Each supported Region: 10 per Yes The maximum number of DescribeGeofenceCo Rate of DescribeKey API requests Rate of DescribeMap API requests second Each supported Region: 10 per second Each supported Region: 10 per second Yes Yes llection requests that you can make per second. Additional requests are throttled. The maximum number of DescribeKey requests that you can make per second. Additional requests are throttled. The maximum number of DescribeMap requests that you can make per second. Additional requests are throttled. Manage quotas 869 Amazon Location Service Developer Guide Name Default Description Adjustabl e Rate of DescribePlaceIndex API requests Each supported Region: 10 per Yes The maximum number of DescribePlaceIndex second Rate of DescribeRouteCalculator API requests Each supported Region: 10 per Yes second requests that you can make per second. Additional requests are throttled. The maximum number of DescribeRouteCalcu lator requests that you can make per second. Additional requests are throttled. Rate of DescribeTracker API requests Each supported Region: 10 per Yes The maximum number of DescribeTracker requests second Rate of DisassociateTrackerConsumer API requests Each supported Region: 10 per Yes second Rate of ForecastGeofenceEvents API requests Each supported Region: 50 per second Yes that you can make per second. Additional requests are throttled. The maximum number of DisassociateTracke rConsumer requests that you can make per second. Additional requests are throttled. The maximum number of ForecastGeofenceEv ents requests that you can make per second. Additional requests are throttled. Manage quotas 870 Amazon Location Service Developer Guide Name Default Description Adjustabl e Rate of GetDevicePosition API requests Each supported Region: 50 per Yes The maximum number of GetDevicePosition second requests that you can make per second. Additional requests are throttled. 
Rate of GetDevicePositionHistory API requests Each supported Region: 50 per Yes The maximum number of GetDevicePositionH second istory requests that you can make per second. Additional requests are throttled. Rate of GetGeofence API requests Each supported Region: 50 per Yes The maximum number of GetGeofence requests second that you can make per second. Additional requests are throttled. Rate of GetMapGlyphs API requests Each supported Region: 50 per Yes The maximum number of GetMapGlyphs requests Rate of GetMapSprites API requests second Each supported Region: 50 per second Yes that you can make per second. Additional requests are throttled. The maximum number of GetMapSprites requests that you can make per second. Additional requests are throttled. Manage quotas 871 Amazon Location Service Developer Guide Name Default Description Adjustabl e Rate of GetMapStyleDescriptor API requests Each supported Region: 50 per Yes The maximum number of GetMapStyleDescrip Rate of GetMapTile API requests Rate of GetPlace API requests second Each supported Region: 500 per Yes second Each supported Region: 50 per second Yes tor requests that you can make per second. Additional requests are throttled. The maximum number of GetMapTile requests that you can make per second. Additional requests are throttled. The maximum number of GetPlace requests that you can make per second. Additional requests are throttled. Rate of ListDevicePositions API requests Each supported Region: 50 per Yes The maximum number of ListDevicePosition second Rate of ListGeofenceCollections API requests Each supported Region: 10 per second Yes s requests that you can make per second. Additional requests are throttled. The maximum number of ListGeofenceCollec tions requests that you can make per second. Additional requests are throttled. Manage quotas 872 Amazon Location Service Developer Guide Name Default Description Adjustabl e Rate of ListGeofences API requests Each supported Region: 50 per Yes The maximum number of ListGeofences requests Rate of ListKeys API requests Rate of ListMaps API
requests are throttled. Rate of ListDevicePositions API requests Each supported Region: 50 per Yes The maximum number of ListDevicePosition second Rate of ListGeofenceCollections API requests Each supported Region: 10 per second Yes s requests that you can make per second. Additional requests are throttled. The maximum number of ListGeofenceCollec tions requests that you can make per second. Additional requests are throttled. Manage quotas 872 Amazon Location Service Developer Guide Name Default Description Adjustabl e Rate of ListGeofences API requests Each supported Region: 50 per Yes The maximum number of ListGeofences requests Rate of ListKeys API requests Rate of ListMaps API requests second Each supported Region: 10 per second Each supported Region: 10 per second Yes Yes that you can make per second. Additional requests are throttled. The maximum number of ListKeys requests that you can make per second. Additional requests are throttled. The maximum number of ListMaps requests that you can make per second. Additional requests are throttled. Rate of ListPlaceIndexes API requests Each supported Region: 10 per Yes The maximum number of ListPlaceIndexes requests second Rate of ListRouteCalculators API requests Each supported Region: 10 per second Yes that you can make per second. Additional requests are throttled. The maximum number of ListRouteCalculato rs requests that you can make per second. Additional requests are throttled. Manage quotas 873 Amazon Location Service Developer Guide Name Default Description Adjustabl e Rate of ListTagsForResource API requests Each supported Region: 10 per Yes The maximum number of ListTagsForResourc second e requests that you can make per second. Additional requests are throttled. Rate of ListTrackerConsumers API requests Each supported Region: 10 per Yes The maximum number of ListTrackerConsume second Rate of ListTrackers API requests Each supported Region: 10 per second Yes rs requests that you can make per second. Additional requests are throttled. The maximum number of ListTrackers requests that you can make per second. Additional requests are throttled. Rate of PutGeofence API requests Each supported Region: 50 per Yes The maximum number of PutGeofence requests second Rate of SearchPlaceIndexForPosition API requests Each supported Region: 50 per second Yes that you can make per second. Additional requests are throttled. The maximum number of SearchPlaceIndexFo rPosition requests that you can make per second. Additional requests are throttled. Manage quotas 874 Amazon Location Service Developer Guide Name Default Description Adjustabl e Rate of SearchPlaceIndexForSuggesti ons API requests Each supported Region: 50 per Yes second Rate of SearchPlaceIndexForText API requests Each supported Region: 50 per Yes second The maximum number of SearchPlaceIndexFo rSuggestions requests that you can make per second. Additional requests are throttled. The maximum number of SearchPlaceIndexFo rText requests that you can make per second. Additional requests are throttled. Rate of TagResource API requests Each supported Region: 10 per Yes The maximum number of TagResource requests second that you can make per second. Additional requests are throttled. Rate of UntagResource API requests Each supported Region: 10 per Yes The maximum number of UntagResource requests second Rate of UpdateGeofenceCollection API requests Each supported Region: 10 per second Yes that you can make per second. Additional requests are throttled. 
The maximum number of UpdateGeofenceColl ection requests that you can make per second. Additional requests are throttled. Manage quotas 875 Amazon Location Service Developer Guide Name Default Description Adjustabl e Rate of UpdateKey API requests Rate of UpdateMap API requests Each supported Region: 10 per second Each supported Region: 10 per second Yes Yes The maximum number of UpdateKey requests that you can make per second. Additional requests are throttled. The maximum number of UpdateMap requests that you can make per second. Additional requests are throttled. Rate of UpdatePlaceIndex API requests Each supported Region: 10 per Yes The maximum number of UpdatePlaceIndex second requests that you can make per second. Additional requests are throttled. Rate of UpdateRouteCalculator API requests Each supported Region: 10 per Yes The maximum number of UpdateRouteCalcula second Rate of UpdateTracker API requests Each supported Region: 10 per second Yes tor requests that you can make per second. Additional requests are throttled. The maximum number of UpdateTracker requests that you can make per second. Additional requests are throttled. Manage quotas 876 Amazon Location Service Developer Guide Name Default Description Adjustabl e Rate of VerifyDevicePosition API requests Each supported Region: 50 per Yes The maximum number of VerifyDevicePositi second Rate of geo-maps:GetStaticMap API requests Each supported Region: 50 per Yes second on requests that you can make per second. Additional requests are throttled. The maximum number of geo-maps:GetStatic Map requests that you can make per second. Additional requests are throttled. Rate of geo-maps:GetTile API requests Each supported Region: 2,000 per Yes The maximum number of geo-maps:GetTile second Rate of geo-places:Autocomplete API requests Each supported Region: 100 per Yes second Rate of geo-places:Geocode API requests Each supported Region: 100 per second Yes requests that you can make per
Each row lists the quota name, its default value, whether it is adjustable, and a description.

• Rate of VerifyDevicePosition API requests. Default: Each supported Region: 50 per second. Adjustable: Yes. The maximum number of VerifyDevicePosition requests that you can make per second. Additional requests are throttled.
• Rate of geo-maps:GetStaticMap API requests. Default: Each supported Region: 50 per second. Adjustable: Yes. The maximum number of geo-maps:GetStaticMap requests that you can make per second. Additional requests are throttled.
• Rate of geo-maps:GetTile API requests. Default: Each supported Region: 2,000 per second. Adjustable: Yes. The maximum number of geo-maps:GetTile requests that you can make per second. Additional requests are throttled.
• Rate of geo-places:Autocomplete API requests. Default: Each supported Region: 100 per second. Adjustable: Yes. The maximum number of geo-places:Autocomplete requests that you can make per second. Additional requests are throttled.
• Rate of geo-places:Geocode API requests. Default: Each supported Region: 100 per second. Adjustable: Yes. The maximum number of geo-places:Geocode requests that you can make per second. Additional requests are throttled.
• Rate of geo-places:GetPlace API requests. Default: Each supported Region: 100 per second. Adjustable: Yes. The maximum number of geo-places:GetPlace requests that you can make per second. Additional requests are throttled.
• Rate of geo-places:ReverseGeocode API requests. Default: Each supported Region: 100 per second. Adjustable: Yes. The maximum number of geo-places:ReverseGeocode requests that you can make per second. Additional requests are throttled.
• Rate of geo-places:SearchNearby API requests. Default: Each supported Region: 100 per second. Adjustable: Yes. The maximum number of geo-places:SearchNearby requests that you can make per second. Additional requests are throttled.
• Rate of geo-places:SearchText API requests. Default: Each supported Region: 100 per second. Adjustable: Yes. The maximum number of geo-places:SearchText requests that you can make per second. Additional requests are throttled.
• Rate of geo-places:Suggest API requests. Default: Each supported Region: 100 per second. Adjustable: Yes. The maximum number of geo-places:Suggest requests that you can make per second. Additional requests are throttled.
• Rate of geo-routes:CalculateIsolines API requests. Default: Each supported Region: 20 per second. Adjustable: Yes. The maximum number of geo-routes:CalculateIsolines requests that you can make per second. Additional requests are throttled.
• Rate of geo-routes:CalculateRouteMatrix API requests. Default: Each supported Region: 5 per second. Adjustable: Yes. The maximum number of geo-routes:CalculateRouteMatrix requests that you can make per second. Additional requests are throttled.
• Rate of geo-routes:CalculateRoutes API requests. Default: Each supported Region: 20 per second. Adjustable: Yes. The maximum number of geo-routes:CalculateRoutes requests that you can make per second. Additional requests are throttled.
• Rate of geo-routes:OptimizeWaypoints API requests. Default: Each supported Region: 5 per second. Adjustable: Yes. The maximum number of geo-routes:OptimizeWaypoints requests that you can make per second. Additional requests are throttled.
• Rate of geo-routes:SnapToRoads API requests. Default: Each supported Region: 20 per second. Adjustable: Yes. The maximum number of geo-routes:SnapToRoads requests that you can make per second. Additional requests are throttled.
• Route Calculator resources per account. Default: Each supported Region: 40. Adjustable: Yes. The maximum number of Route Calculator resources that you can create per account.
• Tracker consumers per tracker. Default: Each supported Region: 5. Adjustable: No. The maximum number of Geofence Collections that a Tracker resource can be associated with.
• Tracker resources per account. Default: Each supported Region: 500. Adjustable: Yes. The maximum number of Tracker resources that you can create per account.

Learn more

To learn more about service quotas, see the Service Quotas documentation.

Manage resources with Tags

Use resource tagging in Amazon Location to create tags that categorize your resources by purpose, owner, environment, or other criteria. Tagging your resources helps you manage, identify, organize, search, and filter your resources. For example, with AWS Resource Groups, you can create groups of AWS resources based on one or more tags or portions of tags. You can also create groups based on their occurrence in an AWS CloudFormation stack. Using Resource Groups and Tag Editor, you can consolidate and view data for applications that consist of multiple services, resources, and Regions in one place. For more information on Common Tagging Strategies, see the AWS General Reference.

Each tag is a label consisting of a key and value that you define:

• Tag key – A general label that categorizes the tag values. For example, CostCenter.
• Tag value – An optional description for the tag key category. For example, MobileAssetTrackingResourcesProd.
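To make the key and value concrete, the following sketch uses the AWS SDK for Python (Boto3) to attach a CostCenter tag while creating a tracker. The tracker name, description, and tag values are illustrative placeholders rather than values defined elsewhere in this guide.

import boto3

# Client for Amazon Location tracker and geofence operations.
location = boto3.client("location")

# Create a tracker and attach tags in the same call so the resource is
# categorized from the moment it exists. All names and values below are
# placeholders.
response = location.create_tracker(
    TrackerName="ExampleTracker",
    Description="Asset tracking for the mobile fleet",
    Tags={
        "CostCenter": "MobileAssetTrackingResourcesProd",
        "Environment": "Production",
    },
)

print(response["TrackerArn"])

Tagging at creation time keeps later cost allocation reports complete, because the resource never exists in an untagged state.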
This topic helps you get started with tagging by reviewing tagging restrictions. It also shows you how to create tags and use tags to track your AWS cost for each active tag by using cost allocation reports.

For more information about:

• Tagging best practices, see Tagging AWS resources in the AWS General Reference.
• Using tags to control access to AWS resources, see Controlling access to AWS resources using tags in the AWS Identity and Access Management User Guide.

Restrictions

Note
If you add a new tag with the same tag key as an existing tag, the new tag overwrites the existing tag.

Tagging allows you to organize and manage your resources more effectively. This page outlines the specific rules and constraints that govern the use of tags within Amazon Location Service. By understanding these tagging restrictions, you can ensure compliance with best practices and avoid potential issues when implementing tagging strategies for your location-based resources and applications.

The following basic restrictions apply to tags:

• Maximum tags per resource: 50
• For each resource, each tag key must be unique, and each tag key can have only one value.
• Maximum key length: 128 Unicode characters in UTF-8
• Maximum value length: 256 Unicode characters in UTF-8
• The allowed characters across services are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
• Tag keys and values are case-sensitive.
• The aws: prefix is reserved for AWS use. If a tag has a tag key with this prefix, then you can't edit or delete the tag's key or value. Tags with the aws: prefix don't count against your tags per resource limit.

Grant permission to tag resources

You can use IAM policies to control access to your Amazon Location resources and grant permission to tag a resource on creation. In addition to granting permission to create resources, the policy can include Action permissions to allow tagging operations:

• geo:TagResource – Allows a user to assign one or more tags to a specified Amazon Location resource.
• geo:UntagResource – Allows a user to remove one or more tags from a specified Amazon Location resource.
• geo:ListTagsForResource – Allows a user to list all the tags assigned to an Amazon Location resource.

The following is a policy example to allow a user to create a geofence collection and tag resources:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTaggingForGeofenceCollectionOnCreation",
      "Effect": "Allow",
      "Action": [
        "geo:CreateGeofenceCollection",
        "geo:TagResource"
      ],
      "Resource": "arn:aws:geo:region:accountID:geofence-collection/*"
    }
  ]
}

Add a tag to a resource

You can add tags when creating your resources using the Amazon Location console, the AWS CLI, or the Amazon Location APIs:

• the section called “Manage resources”
• the section called “Create a tracker”

To tag existing resources, edit or delete tags

1. Open the Amazon Location console.
2. In the left navigation pane, choose the resource you want to tag. For example, Maps.
3. Choose a resource from the list.
4. Choose Manage tags to add, edit, or delete your tags.
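If you prefer to script tag management instead of using the console, the tagging actions described above map directly to API operations. The following Boto3 sketch assumes an existing tracker (the ARN shown is a placeholder) and adds, lists, and then removes a tag.

import boto3

location = boto3.client("location")

# Placeholder ARN for an existing Amazon Location resource.
resource_arn = "arn:aws:geo:us-west-2:111122223333:tracker/ExampleTracker"

# geo:TagResource - assign one or more tags to the resource.
location.tag_resource(
    ResourceArn=resource_arn,
    Tags={"CostCenter": "MobileAssetTrackingResourcesProd"},
)

# geo:ListTagsForResource - list all tags assigned to the resource.
tags = location.list_tags_for_resource(ResourceArn=resource_arn)
print(tags["Tags"])

# geo:UntagResource - remove tags by key.
location.untag_resource(
    ResourceArn=resource_arn,
    TagKeys=["CostCenter"],
)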
How to use tags

You can use tags for cost allocation to track your AWS cost in detail. After you activate the cost allocation tags, AWS uses them to organize your resource billing on your cost allocation report. This helps you categorize and track your usage costs.

Amazon Location supports user-defined tags. These are custom tags that you create. User-defined tags use the user: prefix, for example, user:CostCenter. You must activate each tag type individually. After your tags are activated, you can enable AWS Cost Explorer or view your monthly cost allocation report.

To activate user-defined tags

1. Open the Billing and Cost Management console.
2. In the left navigation pane, choose Cost Allocation Tags.
3. Under the User-Defined Cost Allocation Tags tab, select the tag keys you want to activate.
4. Choose Activate.

After you activate your tags, AWS generates a monthly Cost Allocation Report for your resource usage and cost. This cost allocation report
includes all of your AWS costs for each billing period, including tagged and untagged resources. For more information, see Organizing and tracking costs using AWS cost allocation tags in the AWS Billing User Guide. Control access to resources using tags AWS Identity and Access Management (IAM) policies support tag-based conditions, which enables you to manage authorization for your resources based on specific tags key and values. For example, an IAM role policy can include conditions to limit access to specific environments, such as development, test, or production, based on tags. For more information, see the topic on control resource access based on tags. Manage resources with Tags 883 Amazon Location Service Developer Guide Manage billing and costs with AWS Billing and Cost Management AWS Billing and Cost Management is a web service that provides features that helps you pay your bills and optimize your costs. Amazon Web Services bills your account for usage, which ensures that you pay only for what you use. How to see bills and manage cost 1. Open Billing and Cost Management in the AWS Management Console. 2. Search for location service in Amazon Web Services, Inc. charges by service 3. Choose [+] Location service 4. Choose [+] Region Name To learn more, see Billing and Cost Management in the AWS Management Console. Create resources with AWS CloudFormation Amazon Location Service is integrated with AWS CloudFormation, a service that helps you to model and set up your AWS resources so that you can spend less time creating and managing your resources and infrastructure. You create a template that describes all the AWS resources that you want (such as Amazon Location resources), and AWS CloudFormation provisions and configures those resources for you. When you use AWS CloudFormation, you can reuse your template to set up your Amazon Location resources consistently and repeatedly. Describe your resources once, and then provision the same resources over and over in multiple AWS accounts and Regions. Related AWS CloudFormation templates To provision and configure resources for Amazon Location and related services, you must understand AWS CloudFormation templates. Templates are formatted text files in JSON or YAML. These templates describe the resources that you want to provision in your AWS CloudFormation stacks. If you're unfamiliar with JSON or YAML, you can use AWS CloudFormation Designer to help you get started with AWS CloudFormation templates. For more information, see What is AWS CloudFormation Designer? in the AWS CloudFormation User Guide. Amazon Location supports creating the following resource types in AWS CloudFormation: • AWS::Location::Tracker Manage billing and costs 884 Amazon Location Service Developer Guide • AWS::Location::TrackerConsumer • AWS::Location::GeofenceCollection For more information, including examples of JSON and YAML templates for Amazon Location resources, see the Amazon Location Service resource type reference in the AWS CloudFormation User Guide. Learn more about AWS CloudFormation To learn more about AWS CloudFormation, see the following resources: • AWS CloudFormation • AWS CloudFormation User Guide • AWS CloudFormation API Reference • AWS CloudFormation Command Line Interface User Guide Monitoring and Auditing Monitoring and Auditing provides capabilities to track, monitor, and log activities in your Amazon Location Services environment. 
With Amazon CloudWatch and AWS CloudTrail, you can ensure the reliability, security, and compliance of your applications. These tools help you observe resource performance metrics, detect anomalies, and log API activity for auditing and troubleshooting. Use them to enhance operational insights, diagnose issues, and ensure adherence to compliance standards. Topics • Monitor with Amazon CloudWatch • Monitor and log with AWS CloudTrail Monitor with Amazon CloudWatch Amazon CloudWatch monitors your AWS resources and the applications that you run on AWS in near-real time. You can monitor Amazon Location resources using CloudWatch, which collects raw data and processes metrics into meaningful statistics in near-real time. You can view historical information for up to 15 months, or search metrics to view in the Amazon CloudWatch console Monitoring and Auditing 885 Amazon Location Service Developer Guide for more perspective on how your application or service is performing. You can also set alarms by defining thresholds, and send notifications or take actions when those thresholds are met. For more information, see the Amazon CloudWatch User Guide Topics • Amazon Location Service metrics and dimensions • View Amazon Location Service metrics • Create CloudWatch alarms for Amazon Location Service metrics • Use CloudWatch to monitor usage against quotas • CloudWatch metric examples for Amazon Location Service Amazon Location Service metrics and dimensions Metrics are time-ordered data points that are exported to CloudWatch. A dimension is a name/ value pair that identifies the metric. For more information, see Using CloudWatch metrics and CloudWatch dimensions in the Amazon CloudWatch User Guide. Note The result is approximate because of the distributed architecture of Amazon Location Service. In most cases, the count should be close to the actual number of API operations
Location Service metrics • Create CloudWatch alarms for Amazon Location Service metrics • Use CloudWatch to monitor usage against quotas • CloudWatch metric examples for Amazon Location Service Amazon Location Service metrics and dimensions Metrics are time-ordered data points that are exported to CloudWatch. A dimension is a name/ value pair that identifies the metric. For more information, see Using CloudWatch metrics and CloudWatch dimensions in the Amazon CloudWatch User Guide. Note The result is approximate because of the distributed architecture of Amazon Location Service. In most cases, the count should be close to the actual number of API operations being sent. Amazon Location Service metrics The following are metrics that Amazon Location Service exports to CloudWatch in the AWS/ Location namespace. Metric Description Dimensions CallCount The number of calls made to a given API endpoint. Valid Statistic: Sum OperationName OperationName, ResourceName ApiKeyName, OperationName Monitor with Amazon CloudWatch 886 Amazon Location Service Developer Guide Metric Description Dimensions Units: Count ApiKeyName, OperationName, ResourceName OperationName, OperationVersion OperationName, OperationVersion, ResourceN ame ApiKeyName, OperationName, Operation Version ApiKeyName, OperationName, Operation Version, ResourceName OperationName OperationName, ResourceName ApiKeyName, OperationName ApiKeyName, OperationName, ResourceName OperationName OperationName, ResourceName ApiKeyName, OperationName ApiKeyName, OperationName, ResourceName ErrorCount SuccessCount The number of error responses from calls made to a given API endpoint. Valid Statistic: Sum Units: Count The number of successful calls made to a given API endpoint. Valid Statistic: Sum Units: Count Monitor with Amazon CloudWatch 887 Amazon Location Service Developer Guide Metric Description Dimensions CallLatency The amount of time the operation takes to process and return a response when a call is made to a OperationName OperationName, ResourceName ApiKeyName, OperationName given API endpoint. ApiKeyName, OperationName, ResourceName Valid Statistic: Average Units: Milliseconds Amazon Location Service dimensions for metrics You can use the dimensions in the following table to filter Amazon Location Service metrics. Dimension Description OperationName Filters Amazon Location metrics for API operation with the specified operation name. OperationName, ResourceName Filter Amazon Location metrics for API operation with the specified operation name and resource name. ApiKeyName, OperationName Filter Amazon Location metrics for API operation with the specified operation name and using given API key name. ApiKeyName, OperationName, ResourceName Filter Amazon Location metrics for API operation with the specified operation name, resource name and using given API key name. OperationName, OperationVersion Filters Amazon Location metrics for API operation with the specified operation name. Amazon Location Service standalone Maps, Places, and Routes will be export metric to this dimension. Monitor with Amazon CloudWatch 888 Amazon Location Service Developer Guide Dimension Description OperationName, OperationVersion, ResourceName ApiKeyName, OperationName, OperationVersion ApiKeyName, OperationName, OperationVersion, ResourceName Filter Amazon Location metrics for API operation with the specified operation name, version, and Amazon Location resource name. Amazon Location standalone Maps, Places, and Routes will be export metric to this dimension. 
Filter Amazon Location metrics for API operation with the specified operation name, version, and using given API key name. Amazon Location standalone Maps, Places, and Routes will be export metric to this dimension. Filter Amazon Location metrics for API operation with the specified operation name, version, resource name and using given API key name. Amazon Location standalone Maps, Places, and Routes will be export metric to this dimension. View Amazon Location Service metrics You can view metrics for Amazon Location Service on the Amazon CloudWatch console or by using the Amazon CloudWatch API. To view metrics using the CloudWatch console Example 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation pane, choose Metrics. 3. On the All metrics tab, choose the Location namespace. 4. Select the type of metric to view. 5. Select the metric and add it to the chart. For more information, see View Available Metrics in the Amazon CloudWatch User Guide. Monitor with Amazon CloudWatch 889 Amazon Location Service Developer Guide Create CloudWatch alarms for Amazon Location Service metrics You can use CloudWatch to set alarms on your Amazon Location Service metrics. For example, you can create an alarm in CloudWatch to send an email whenever an error count spike occurs. The following topics give you a high-level overview of how to set alarms using CloudWatch. For detailed instructions, see Using Alarms in the Amazon CloudWatch User Guide. To set alarms using the CloudWatch console Example 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation pane, choose Alarm. 3. Choose Create Alarm. 4. Choose Select metric. 5. On the All metrics tab, select the Location namespace. 6. Select a metric category. 7. Find the row with the metric you want to create an alarm for, then select the check box next to this row. 8. Choose Select metric. 9. Under Metric, fill in the values. 10.Specify the alarm Conditions. 11.Choose Next. 12.If you want to send a notification when the alarm conditions are met: • Under Alarm state trigger, select the alarm state to prompt
Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation pane, choose Alarm. 3. Choose Create Alarm. 4. Choose Select metric. 5. On the All metrics tab, select the Location namespace. 6. Select a metric category. 7. Find the row with the metric you want to create an alarm for, then select the check box next to this row. 8. Choose Select metric. 9. Under Metric, fill in the values. 10.Specify the alarm Conditions. 11.Choose Next. 12.If you want to send a notification when the alarm conditions are met: • Under Alarm state trigger, select the alarm state to prompt a notification to be sent. • Under Select an SNS topic, choose Create new topic to create a new Amazon Simple Notification Service (Amazon SNS) topic. Enter the topic name and the email to send the notification to. • Under Send a notification to enter additional email addresses to send the notification to. • Choose Add notification. This list is saved and appears in the field for future alarms. 13.When done, choose Next. 14.Enter a name and description for the alarm, then choose Next. 15.Confirm the alarm details, then choose Next. Monitor with Amazon CloudWatch 890 Amazon Location Service Developer Guide Note When creating a new Amazon SNS topic, you must verify the email address before a notification can be sent. If the email is not verified, the notification will not be received when an alarm is initiated by a state change. For more information about how to set alarms using the CloudWatch console, see Create an Alarm that Sends Email in the Amazon CloudWatch User Guide. Use CloudWatch to monitor usage against quotas You can create Amazon CloudWatch alarms to notify you when your utilization of a given quota exceeds a configurable threshold. This enables you to recognize when you are close to your quota limits, and either adapt your utilization to avoid cost overruns, or request a quota increase, if needed. For information about how to use CloudWatch to monitor quotas, see Visualizing your service quotas and setting alarms in the Amazon CloudWatch User Guide. CloudWatch metric examples for Amazon Location Service You can use the GetMetricData API to retrieve metrics for Amazon Location. • For example, you can monitor CallCount and set an alarm for when a drop in number occurs. Monitoring the CallCount metrics for SendDeviceLocation can help give you perspective on tracked assets. If the CallCount drops, it means that tracked assets, such as a fleet of trucks, have stopped sending their current locations. Setting an alarm for this can help notify you an issue has occurred. • For another example, you can monitor ErrorCount and set an alarm for when a spike in number occurs. Trackers must be associated with geofence collections in order for device locations to be evaluated against geofences. If you have a device fleet that requires continuous location updates, seeing the CallCount for BatchEvaluateGeofence or BatchPutDevicePosition drop to zero indicates that updates are no longer flowing. Monitor with Amazon CloudWatch 891 Amazon Location Service Developer Guide The following is an example output for GetMetricData with the metrics for CallCount and ErrorCount for creating map resources. 
{ "StartTime": 1518867432, "EndTime": 1518868032, "MetricDataQueries": [ { "Id": "m1", "MetricStat": { "Metric": { "Namespace": "AWS/Location", "MetricName": "CallCount", "Dimensions": [ { "Name": "SendDeviceLocation", "Value": "100" } ] }, "Period": 300, "Stat": "SampleCount", "Unit": "Count" } }, { "Id": "m2", "MetricStat": { "Metric": { "Namespace": "AWS/Location", "MetricName": "ErrorCount", "Dimensions": [ { "Name": "AssociateTrackerConsumer", "Value": "0" } ] }, "Period": 1, "Stat": "SampleCount", "Unit": "Count" } } Monitor with Amazon CloudWatch 892 Amazon Location Service ] } Developer Guide Monitor and log with AWS CloudTrail AWS CloudTrail is a service that provides a record of actions taken by a user, role, or an AWS service. CloudTrail records all API calls as events. You can use Amazon Location Service with CloudTrail to monitor your API calls, which include calls from the Amazon Location Service console and AWS SDK calls to the Amazon Location Service API operations. CloudTrail is automatically enabled when you create your AWS account. When activity occurs in Amazon Location Service, that activity is recorded in a CloudTrail event along with other AWS service events in Event history. You can view, search, and download event history for the past 90 days per AWS Region. For more information about CloudTrail, see the AWS CloudTrail User Guide. There are no CloudTrail charges for viewing the Event history. For an ongoing records of events in your AWS account past 90 days, including events from Amazon Location Service, create a trail or a CloudTrail Lake data store. CloudTrail trails A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. When you create a trail in AWS Management Console, the trail applies to all AWS Regions. The trail
Event history. You can view, search, and download event history for the past 90 days per AWS Region. For more information about CloudTrail, see the AWS CloudTrail User Guide. There are no CloudTrail charges for viewing the Event history. For an ongoing records of events in your AWS account past 90 days, including events from Amazon Location Service, create a trail or a CloudTrail Lake data store. CloudTrail trails A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. When you create a trail in AWS Management Console, the trail applies to all AWS Regions. The trail logs events from all regions in the AWS Partition and delivers the log files to the S3 bucket that you specify. Additionally, you can configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs. For more information on how to create a trail, see Overview for Creating a Trail. For a list of CloudTrail supported services and integrations, see CloudTrail Supported Services and Integrations. You can deliver one copy of your ongoing management events to your Amazon S3 bucket at no charge from CloudTrail by creating a trail. However, there are Amazon S3 storage charges. For more information about CloudTrail pricing, see AWS CloudTrail pricing. For information about Amazon S3 pricing, see Amazon S3 pricing. CloudTrail Lake event data stores Monitor and log with AWS CloudTrail 893 Amazon Location Service Developer Guide CloudTrail Lake lets you run SQL-based queries on your events. Events are aggregated into event data stores, which are immutable collections of events based on criteria that you select by applying advanced event selectors. The selectors that you apply to an event data store control which events persist and are available for you to query. For more information about CloudTrail Lake, see Working with AWS CloudTrail Lake. CloudTrail Lake event data stores and queries incur costs. When you create an event data store, you choose the pricing option you want to use for the event data store. The pricing option determines the cost for ingesting and storing events, and the default and maximum retention period for the event data store. For more information about CloudTrail pricing, see AWS CloudTrail pricing. Topics • Amazon Location management events in CloudTrail • Amazon Location data events in CloudTrail • Learn about Amazon Location Service log file entries • Example: CloudTrail log file entry for an Amazon Location management event • Example: CloudTrail log file entry for an Amazon Location data event • CalculateRouteMatrix examples • CalculateRouteMatrix with a geometry-based routing boundary Amazon Location management events in CloudTrail You can view Amazon Location management events in your CloudTrail event history. These events include all API calls that manage Amazon Location resources and configurations. For a complete list of supported actions, refer to the Amazon Location Service API references. Amazon Location data events in CloudTrail Data events provide information about operations performed directly on a resource. These events, also known as data plane operations, can be high-volume. By default, CloudTrail does not log data events, and the CloudTrail Event History does not record them. You incur additional charges when you enable data events. For more information about CloudTrail pricing, see AWS CloudTrail Pricing. You can choose which Amazon Location resource types log data events by using the CloudTrail console, AWS CLI, or CloudTrail API operations. 
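As a sketch of the API route, the following Boto3 call applies an advanced event selector to an existing trail so that only Amazon Location Places data events are logged. The trail name is a placeholder, and the resource type value corresponds to one of the supported types listed in the table that follows.

import boto3

cloudtrail = boto3.client("cloudtrail")

# Log only Amazon Location Places data events on an existing trail.
# "my-trail" is a placeholder trail name.
cloudtrail.put_event_selectors(
    TrailName="my-trail",
    AdvancedEventSelectors=[
        {
            "Name": "Log Amazon Location Places data events",
            "FieldSelectors": [
                {"Field": "eventCategory", "Equals": ["Data"]},
                {"Field": "resources.type", "Equals": ["AWS::GeoPlaces::Provider"]},
            ],
        }
    ],
)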
For instructions on how to enable and manage data Monitor and log with AWS CloudTrail 894 Amazon Location Service Developer Guide events, see Logging data events with the AWS Management Console and Logging data events with the AWS Command Line Interface . The following table lists the Amazon Location resource types for which you can log data events: Supported Amazon Location Data Events Data event type (console) resources.type value Geo Maps AWS::GeoMaps::Provider Geo Places AWS::GeoPlaces::Provider Geo Routes AWS::GeoRoutes::Provider Data APIs logged to CloudTrail See the Amazon GeoMaps API reference See the Amazon GeoPlaces API reference See the Amazon GeoRoutes API reference Note Amazon Location does not publish CloudTrail events for the following GeoMaps APIs: GetStyleDescriptor, GetGlyphs, and GetSprites. These APIs are free of charge and do not require authentication. You can configure advanced event selectors to filter events by eventName, readOnly, and resources.ARN. This helps you log only those events that matter to you. For more information, see AdvancedFieldSelector . Learn about Amazon Location Service log file entries When you configure a trail, CloudTrail delivers events as log files to an S3 bucket that you specify, or to Amazon CloudWatch Logs. For more information, see Working with CloudTrail log files in the AWS CloudTrail User Guide. CloudTrail log files can contain one or more log entries. Each event entry represents a single request from any source and includes details such as the requested operation, the date and time of the operation,
and resources.ARN. This helps you log only those events that matter to you. For more information, see AdvancedFieldSelector . Learn about Amazon Location Service log file entries When you configure a trail, CloudTrail delivers events as log files to an S3 bucket that you specify, or to Amazon CloudWatch Logs. For more information, see Working with CloudTrail log files in the AWS CloudTrail User Guide. CloudTrail log files can contain one or more log entries. Each event entry represents a single request from any source and includes details such as the requested operation, the date and time of the operation, request parameters, and more. Monitor and log with AWS CloudTrail 895 Amazon Location Service Developer Guide Note CloudTrail log files are not an ordered stack trace of API calls. They do not appear in chronological order. To determine the order of operations, use eventTime. Every event or log entry contains information about who made the request. This identity information helps you determine: • Whether the request was made with root or user credentials. • Whether the request was made with temporary security credentials for a role or a federated user. • Whether the request was made by another AWS service. Example: CloudTrail log file entry for an Amazon Location management event The following example shows a CloudTrail log entry for the CreateTracker operation, which creates a tracker resource. { "eventVersion": "1.05", "userIdentity": { "type": "AssumedRole", "principalId": "111122223333", "arn": "arn:aws:geo:us-east-1:111122223333:tracker/ExampleTracker", "accountId": "111122223333", "accessKeyId": "AKIAIOSFODNN7EXAMPLE", "sessionContext": { "sessionIssuer": { "type": "Role", "principalId": "111122223333", "arn": "arn:aws:geo:us-east-1:111122223333:tracker/ExampleTracker", "accountId": "111122223333", "userName": "exampleUser" }, "webIdFederationData": {}, "attributes": { "mfaAuthenticated": "false", "creationDate": "2020-10-22T16:36:07Z" } } Monitor and log with AWS CloudTrail 896 Amazon Location Service }, "eventTime": "2020-10-22T17:43:30Z", "eventSource": "geo.amazonaws.com", "eventName": "CreateTracker", "awsRegion": "us-east-1", "sourceIPAddress": "SAMPLE_IP_ADDRESS", Developer Guide "userAgent": "aws-internal/3 aws-sdk-java/1.11.864 Linux/4.14.193-110.317.amzn2.x86_64 OpenJDK_64-Bit_Server_VM/11.0.8+10-LTS java/11.0.8 kotlin/1.3.72 vendor/Amazon.com_Inc. exec-env/AWS_Lambda_java11", "requestParameters": { "TrackerName": "ExampleTracker", "Description": "Resource description" }, "responseElements": { "TrackerName": "ExampleTracker", "Description": "Resource description", "TrackerArn": "arn:partition:service:region:account-id:resource-id", "CreateTime": "2020-10-22T17:43:30.521Z" }, "requestID": "557ec619-0674-429d-8e2c-eba0d3f34413", "eventID": "3192bc9c-3d3d-4976-bbef-ac590fa34f2c", "readOnly": false, "eventType": "AwsApiCall", "recipientAccountId": "111122223333" } Example: CloudTrail log file entry for an Amazon Location data event The following example shows a CloudTrail log entry for the Geocode operation, which retrieves coordinates, addresses, and other details about a place. 
{ "eventVersion": "1.09", "userIdentity": { "type": "AssumedRole", "principalId": "AROA6ODU7M35SFGUCGXHMSAMPLE", "arn": "arn:aws:sts::111122223333:assumed-role/Admin/vingu-Isengard", "accountId": "111122223333", "accessKeyId": "ASIA6ODU7M352GLR5CFMSAMPLE", "sessionContext": { "sessionIssuer": { "type": "Role", "principalId": "AROA6ODU7M35SFGUCGXHMSAMPLE", Monitor and log with AWS CloudTrail 897 Amazon Location Service Developer Guide "arn": "arn:aws:iam::111122223333:role/Admin", "accountId": "111122223333", "userName": "Admin" }, "attributes": { "creationDate": "2024-09-16T14:41:33Z", "mfaAuthenticated": "false" } } }, "eventTime": "2024-09-16T14:42:16Z", "eventSource": "geo-places.amazonaws.com", "eventName": "Geocode", "awsRegion": "us-west-2", "sourceIPAddress": "52.94.133.129", "userAgent": "Amazon CloudFront", "requestParameters": { "Query": "***", "Filter": { "IncludeCountries": [ "USA" ] } }, "responseElements": null, "requestID": "1ef7e0b8-c9fc-4a20-80c3-b5340d634c4e", "eventID": "913d256c-3a9d-40d0-9bdf-705f12c7659f", "readOnly": true, "resources": [ { "accountId": "111122223333", "type": "AWS::GeoPlaces::Provider", "ARN": "arn:aws:geoplaces:us-west-2:111122223333:provider" } ], "eventType": "AwsApiCall", "managementEvent": false, "recipientAccountId": "111122223333", "eventCategory": "Data" } Monitor and log with AWS CloudTrail 898 Amazon Location Service Developer Guide CalculateRouteMatrix examples Use the following examples to understand how you can call the CalculateRouteMatrix operation with an unbounded routing boundary. Sample request { "Origins": [ { "Position": [-123.11679620827039, 49.28147612192166] }, { "Position": [-123.11179620827039, 49.3014761219] } ], "Destinations": [ { "Position": [-123.112317039, 49.28897192166] } ], "DepartureTime": "2024-05-28T21:27:56Z", "RoutingBoundary": { "Unbounded": true } } Sample response { "ErrorCount": 0, "RouteMatrix": [ [ { "Distance": 1907, "Duration": 343 } ], [ { "Distance": 5629, "Duration": 954 } Monitor and log with AWS CloudTrail 899 Amazon Location Service ] ], "RoutingBoundary": { "Unbounded": true } } cURL Developer Guide curl --request POST \ --url 'https://routes.geo.eu-central-1.amazonaws.com/v2/route-matrix?key=Your_key' \ --header 'Content-Type: application/json' \ --data '{ "Origins": [ { "Position": [-123.11679620827039, 49.28147612192166] }, { "Position": [-123.11179620827039, 49.3014761219] } ], "Destinations": [ { "Position": [-123.112317039, 49.28897192166] } ], "DepartureTime": "2024-05-28T21:27:56Z", "RoutingBoundary": { "Unbounded": true } }' AWS CLI aws geo-routes calculate-route-matrix --key ${YourKey} \ --origins '[{"Position": [-123.11679620827039, 49.28147612192166]}, {"Position": [-123.11179620827039, 49.3014761219]}]' \ --destinations '[{"Position": [-123.11179620827039, 49.28897192166]}]' \ --departure-time "2024-05-28T21:27:56Z" \ --routing-boundary '{"Unbounded": true}' Monitor and log with AWS CloudTrail 900 Amazon Location Service Developer Guide CalculateRouteMatrix with a geometry-based routing boundary This example shows how you can specify a geometry-based routing boundary when you call CalculateRouteMatrix. 
Sample request { "Origins": [ { "Position": [-123.11679620827039, 49.28147612192166] }, { "Position": [-123.11179620827039, 49.3014761219] } ], "Destinations": [ { "Position": [-123.112317039, 49.28897192166] } ], "DepartureTime": "2024-05-28T21:27:56Z", "RoutingBoundary": { "Geometry": { "AutoCircle": { "Margin": 10000, "MaxRadius": 30000 } } } } Sample response { "ErrorCount": 0, "RouteMatrix": [ [ { "Distance": 1907, "Duration": 344 } ], Monitor and log with AWS CloudTrail 901 Developer Guide Amazon Location Service [ { "Distance": 5629, "Duration": 950 } ] ], "RoutingBoundary": { "Geometry": { "Circle": { "Center": [ -123.1142962082704, 49.29147612191083 ], "Radius": 11127 } }, "Unbounded": false } } cURL curl --request POST \ --url 'https://routes.geo.eu-central-1.amazonaws.com/v2/route-matrix?key=Your_key' \ --header 'Content-Type: application/json' \ --data '{ "Origins": [ { "Position": [-123.11679620827039, 49.28147612192166] }, { "Position": [-123.11179620827039, 49.3014761219] } ], "Destinations":
], "DepartureTime": "2024-05-28T21:27:56Z", "RoutingBoundary": { "Geometry": { "AutoCircle": { "Margin": 10000, "MaxRadius": 30000 } } } } Sample response { "ErrorCount": 0, "RouteMatrix": [ [ { "Distance": 1907, "Duration": 344 } ], Monitor and log with AWS CloudTrail 901 Developer Guide Amazon Location Service [ { "Distance": 5629, "Duration": 950 } ] ], "RoutingBoundary": { "Geometry": { "Circle": { "Center": [ -123.1142962082704, 49.29147612191083 ], "Radius": 11127 } }, "Unbounded": false } } cURL curl --request POST \ --url 'https://routes.geo.eu-central-1.amazonaws.com/v2/route-matrix?key=Your_key' \ --header 'Content-Type: application/json' \ --data '{ "Origins": [ { "Position": [-123.11679620827039, 49.28147612192166] }, { "Position": [-123.11179620827039, 49.3014761219] } ], "Destinations": [ { "Position": [-123.112317039, 49.28897192166] } ], "DepartureTime": "2024-05-28T21:27:56Z", "RoutingBoundary": { Monitor and log with AWS CloudTrail 902 Amazon Location Service Developer Guide "Geometry": { "AutoCircle": { "Margin": 10000, "MaxRadius": 30000 } } } }' AWS CLI aws geo-routes calculate-route-matrix --key ${YourKey} \ --origins '[{"Position": [-123.11679620827039, 49.28147612192166]}, {"Position": [-123.11179620827039, 49.3014761219]}]' \ --destinations '[{"Position": [-123.11179620827039, 49.28897192166]}]' \ --departure-time "2024-05-28T21:27:56Z" \ --routing-boundary '{"Geometry": {"AutoCircle": {"Margin": 10000, "MaxRadius": 30000}}}' Best practices The following are a few best practices for integrating with Amazon Location Service. Resource management To help effectively manage your location resources in Amazon Location Service, consider the following best practices: • Use regional endpoints that are central to your expected user base to improve their experience. For information about region endpoints, see the section called “Supported regions”. • For resources that use data providers, such as map resources and place index resources, make sure to follow the terms of use agreement of the specific data provider. For more information, see the section called “Terms of use and data attribution”. • Minimize the creation of resources by having one resource for each configuration of map, place index, or routes. Within a region, you typically need only one resource per data provider or map style. Most applications use existing resources, and do not create resources at run time. • When using different resources in a single application, such as a map resource and a route calculator, use the same data provider in each resource to ensure that the data matches. For Best practices 903 Amazon Location Service Developer Guide example, that a route geometry you create with your route calculator aligns with the streets on the map drawn using the map resource. Billing and cost management To help manage your costs and billing, consider the following best practice: • Use monitoring tools, such as Amazon CloudWatch, to track your resource usage. You can set alerts that notify you when usage is about to exceed your specified limits. For more information, see Creating a Billing Alarm to Monitor Your Estimated AWS Charges in the Amazon CloudWatch User Guide. Quotas and usage You AWS account includes quotas that set a default limit your usage amount. You can set up alarms to alert you when your usage is getting close to your limit, and you can request a raise to a quota, when you need it. For information about how to work with quotas, see the following topics. 
• the section called “Manage quotas” • the section called “Create CloudWatch alarms” • Visualizing your service quotas and setting alarms in the Amazon CloudWatch User Guide. You can create alarms to give you advance warning when you are close to exceeding your limits. We recommend setting alarms for each quota in each AWS Region where you use Amazon Location. For example, you can monitor your use of the SearchPlaceIndexForText operation, and create an alarm when you exceed 80 percent of your current quota. When you get an alarm warning about your quota, you must decide what to do. You might be using additional resources because your customer base has grown. In that case you may want to request an increase to your quota, such as a 50 percent increase in the quota for an API call in that Region. Or, maybe there's an error in your service that causes you to make additional unnecessary calls to Amazon Location. In that case you'd want to solve the problem in your service. Billing and cost management 904 Amazon Location Service Developer Guide Security in Amazon Location Service Cloud security at AWS is the highest priority. As an AWS customer, you benefit from data centers and network architectures that are built to meet the requirements of the most security-sensitive organizations. Security is a shared responsibility between AWS and you. The shared responsibility model describes this as security of the cloud and security in the cloud: • Security of the cloud – AWS is responsible for protecting the infrastructure that runs AWS services in the AWS Cloud. AWS also provides you with services that you can use securely. Third- party auditors regularly test and verify the effectiveness of our security as part of the AWS Compliance Programs. To learn about the
you benefit from data centers and network architectures that are built to meet the requirements of the most security-sensitive organizations. Security is a shared responsibility between AWS and you. The shared responsibility model describes this as security of the cloud and security in the cloud: • Security of the cloud – AWS is responsible for protecting the infrastructure that runs AWS services in the AWS Cloud. AWS also provides you with services that you can use securely. Third- party auditors regularly test and verify the effectiveness of our security as part of the AWS Compliance Programs. To learn about the compliance programs that apply to Amazon Location Service, see AWS Services in Scope by Compliance Program. • Security in the cloud – Your responsibility is determined by the AWS service that you use. You are also responsible for other factors including the sensitivity of your data, your company’s requirements, and applicable laws and regulations. This documentation helps you understand how to apply the shared responsibility model when using Amazon Location. The following topics show you how to configure Amazon Location to meet your security and compliance objectives. You also learn how to use other AWS services that help you to monitor and secure your Amazon Location resources. Topics • Data protection in Amazon Location Service • Incident Response in Amazon Location Service • Compliance validation for Amazon Location Service • Resilience in Amazon Location Service • Infrastructure security in Amazon Location Service • AWS PrivateLink for Amazon Location • Configuration and vulnerability analysis in Amazon Location • Cross-service confused deputy prevention • Best practices for Amazon Location Service 905 Amazon Location Service Developer Guide Data protection in Amazon Location Service The AWS shared responsibility model applies to data protection in Amazon Location Service. As described in this model, AWS is responsible for protecting the global infrastructure that runs all of the AWS Cloud. You are responsible for maintaining control over your content that is hosted on this infrastructure. You are also responsible for the security configuration and management tasks for the AWS services that you use. For more information about data privacy, see the Data Privacy FAQ. For information about data protection in Europe, see the AWS Shared Responsibility Model and GDPR blog post on the AWS Security Blog. For data protection purposes, we recommend that you protect AWS account credentials and set up individual users with AWS IAM Identity Center or AWS Identity and Access Management (IAM). That way, each user is given only the permissions necessary to fulfill their job duties. We also recommend that you secure your data in the following ways: • Use multi-factor authentication (MFA) with each account. • Use SSL/TLS to communicate with AWS resources. We require TLS 1.2 and recommend TLS 1.3. • Set up API and user activity logging with AWS CloudTrail. For information about using CloudTrail trails to capture AWS activities, see Working with CloudTrail trails in the AWS CloudTrail User Guide. • Use AWS encryption solutions, along with all default security controls within AWS services. • Use advanced managed security services such as Amazon Macie, which assists in discovering and securing sensitive data that is stored in Amazon S3. • If you require FIPS 140-3 validated cryptographic modules when accessing AWS through a command line interface or an API, use a FIPS endpoint. 
For more information about the available FIPS endpoints, see Federal Information Processing Standard (FIPS) 140-3. We strongly recommend that you never put confidential or sensitive information, such as your customers' email addresses, into tags or free-form text fields such as a Name field. This includes when you work with Amazon Location or other AWS services using the console, API, AWS CLI, or AWS SDKs. Any data that you enter into tags or free-form text fields used for names may be used for billing or diagnostic logs. If you provide a URL to an external server, we strongly recommend that you do not include credentials information in the URL to validate your request to that server. Data protection 906 Amazon Location Service Data privacy Developer Guide With Amazon Location Service, you retain control of your organization’s data. Amazon Location anonymizes all queries sent to data providers by removing customer metadata and account information. Amazon Location doesn't use data providers for tracking and geofencing. This means your sensitive data remains in your AWS account. This helps shield sensitive location information, such as facility, asset, and personnel location, from third parties, protect user privacy, and reduce your application's security risk. For additional information, see the AWS Data Privacy FAQ. Data retention in Amazon Location The following characteristics relate to how Amazon Location collects and stores data for the service: • Amazon Location Service Trackers – When you use the Trackers APIs to track the location of entities, their coordinates can
customer metadata and account information. Amazon Location doesn't use data providers for tracking and geofencing. This means your sensitive data remains in your AWS account. This helps shield sensitive location information, such as facility, asset, and personnel location, from third parties, protect user privacy, and reduce your application's security risk. For additional information, see the AWS Data Privacy FAQ. Data retention in Amazon Location The following characteristics relate to how Amazon Location collects and stores data for the service: • Amazon Location Service Trackers – When you use the Trackers APIs to track the location of entities, their coordinates can be stored. Device locations are stored for 30 days before being deleted by the service. • Amazon Location Service Geofences – When you use the Geofences APIs to define areas of interest, the service stores the geometries you provided. They must be explicitly deleted. Note Deleting your AWS account delete all resources within it. For additional information, see the AWS Data Privacy FAQ. Data encryption at rest for Amazon Location Service Amazon Location Service provides encryption by default to protect sensitive customer data at rest using AWS owned encryption keys. • AWS owned keys — Amazon Location uses these keys by default to automatically encrypt personally identifiable data. You can't view, manage, or use AWS owned keys, or audit their use. However, you don't have to take any action or change any programs to protect the keys that Data privacy 907 Amazon Location Service Developer Guide encrypt your data. For more information, see AWS owned keys in the AWS Key Management Service Developer Guide. Encryption of data at rest by default helps reduce the operational overhead and complexity involved in protecting sensitive data. At the same time, it enables you to build secure applications that meet strict encryption compliance and regulatory requirements. While you can't disable this layer of encryption or select an alternate encryption type, you can add a second layer of encryption over the existing AWS owned encryption keys by choosing a customer managed key when you create your tracker and geofence collection resources: • Customer managed keys — Amazon Location supports the use of a symmetric customer managed key that you create, own, and manage to add a second layer of encryption over the existing AWS owned encryption. Because you have full control of this layer of encryption, you can perform such tasks as: • Establishing and maintaining key policies • Establishing and maintaining IAM policies and grants • Enabling and disabling key policies • Rotating key cryptographic material • Adding tags • Creating key aliases • Scheduling keys for deletion For more information, see customer managed key in the AWS Key Management Service Developer Guide. The following table summarizes how Amazon Location encrypts personally identifiable data. Data type AWS owned key encryption Customer managed key encryption (Optional) Position Enabled Enabled A point geometry containing the device position details. Data at rest encryption 908 Amazon Location Service Developer Guide Data type AWS owned key encryption Customer managed key encryption (Optional) PositionProperties Enabled Enabled A set of key-value pairs associated with the position update. GeofenceGeometry Enabled Enabled A polygon geofence geometry representing the geofenced area. DeviceId The device identifier specified when uploading a device position update to a tracker resource. 
Enabled Not supported GeofenceId Enabled Not supported An identifier specified when storing a geofence geometry, or a batch of geofences in a given geofence collection. Note Amazon Location automatically enables encryption at rest using AWS owned keys to protect personally identifiable data at no charge. However, AWS KMS charges apply for using a customer managed key. For more information about pricing, see the AWS Key Management Service pricing. For more information on AWS KMS, see What is AWS Key Management Service? Data at rest encryption 909 Amazon Location Service Developer Guide How Amazon Location Service uses grants in AWS KMS Amazon Location requires a grant to use your customer managed key. When you create a tracker resource or geofence collection encrypted with a customer managed key, Amazon Location creates a grant on your behalf by sending a CreateGrant request to AWS KMS. Grants in AWS KMS are used to give Amazon Location access to a KMS key in a customer account. Amazon Location requires the grant to use your customer managed key for the following internal operations: • Send DescribeKey requests to AWS KMS to verify that the symmetric customer managed KMS key ID entered when creating a tracker or geofence collection is valid. • Send GenerateDataKeyWithoutPlaintext requests to AWS KMS to generate data keys encrypted by your customer managed key. • Send Decrypt requests to AWS KMS to decrypt the encrypted data keys so that they can be used to encrypt your data. You can revoke access to the grant, or remove the service's access to the
a customer account. Amazon Location requires the grant to use your customer managed key for the following internal operations: • Send DescribeKey requests to AWS KMS to verify that the symmetric customer managed KMS key ID entered when creating a tracker or geofence collection is valid. • Send GenerateDataKeyWithoutPlaintext requests to AWS KMS to generate data keys encrypted by your customer managed key. • Send Decrypt requests to AWS KMS to decrypt the encrypted data keys so that they can be used to encrypt your data. You can revoke access to the grant, or remove the service's access to the customer managed key at any time. If you do, Amazon Location won't be able to access any of the data encrypted by the customer managed key, which affects operations that are dependent on that data. For example, if you attempt to get device positions from an encrypted tracker that Amazon Location can't access, then the operation would return an AccessDeniedException error. Create a customer managed key You can create a symmetric customer managed key by using the AWS Management Console, or the AWS KMS APIs. To create a symmetric customer managed key Follow the steps for Creating symmetric customer managed key in the AWS Key Management Service Developer Guide. Key policy Key policies control access to your customer managed key. Every customer managed key must have exactly one key policy, which contains statements that determine who can use the key and how Data at rest encryption 910 Amazon Location Service Developer Guide they can use it. When you create your customer managed key, you can specify a key policy. For more information, see Managing access to customer managed keys in the AWS Key Management Service Developer Guide. To use your customer managed key with your Amazon Location resources, the following API operations must be permitted in the key policy: • kms:CreateGrant – Adds a grant to a customer managed key. Grants control access to a specified KMS key, which allows access to grant operations Amazon Location requires. For more information about Using Grants, see the AWS Key Management Service Developer Guide. This allows Amazon Location to do the following: • Call GenerateDataKeyWithoutPlainText to generate an encrypted data key and store it, because the data key isn't immediately used to encrypt. • Call Decrypt to use the stored encrypted data key to access encrypted data. • Set up a retiring principal to allow the service to RetireGrant. • kms:DescribeKey – Provides the customer managed key details to allow Amazon Location to validate the key. 
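Once the key policy permits these operations, you can reference the key when creating a resource. The following Boto3 sketch is a minimal example, assuming default key settings and a placeholder tracker name; in practice you would also attach a key policy such as the examples that follow.

import boto3

kms = boto3.client("kms")
location = boto3.client("location")

# Create a symmetric customer managed key (the default key spec
# supports encrypt and decrypt operations).
key = kms.create_key(
    Description="Customer managed key for Amazon Location tracker data",
)
key_id = key["KeyMetadata"]["KeyId"]

# Create a tracker that uses the customer managed key as a second
# layer of encryption. The tracker name is a placeholder.
location.create_tracker(
    TrackerName="ExampleTracker",
    KmsKeyId=key_id,
)

When this call succeeds, Amazon Location sends the CreateGrant request described earlier on your behalf so that it can use the key for the tracker's data.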
The following are policy statement examples you can add for Amazon Location: "Statement" : [ { "Sid" : "Allow access to principals authorized to use Amazon Location", "Effect" : "Allow", "Principal" : { "AWS" : "*" }, "Action" : [ "kms:DescribeKey", "kms:CreateGrant" ], "Resource" : "*", "Condition" : { "StringEquals" : { "kms:ViaService" : "geo.region.amazonaws.com", "kms:CallerAccount" : "111122223333" } }, { Data at rest encryption 911 Amazon Location Service Developer Guide "Sid": "Allow access for key administrators", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::111122223333:root" }, "Action" : [ "kms:*" ], "Resource": "arn:aws:kms:region:111122223333:key/key_ID" }, { "Sid" : "Allow read-only access to key metadata to the account", "Effect" : "Allow", "Principal" : { "AWS" : "arn:aws:iam::111122223333:root" }, "Action" : [ "kms:Describe*", "kms:Get*", "kms:List*", "kms:RevokeGrant" ], "Resource" : "*" } ] For more information about specifying permissions in a policy, see the AWS Key Management Service Developer Guide. For more information about troubleshooting key access, see the AWS Key Management Service Developer Guide. Specifying a customer managed key for Amazon Location You can specify a customer managed key as a second layer encryption for the following resources: • the section called “Create a tracker” • the section called “Get started” When you create a resource, you can specify the data key by entering a KMS ID, which Amazon Location uses to encrypt the identifiable personal data stored by the resource. Data at rest encryption 912 Amazon Location Service Developer Guide • KMS ID — A key identifier for an AWS KMS customer managed key. Enter a key ID, key ARN, alias name, or alias ARN. Amazon Location Service encryption context An encryption context is an optional set of key-value pairs that contain additional contextual information about the data. AWS KMS uses the encryption context as additional authenticated data to support authenticated encryption. When you include an encryption context in a request to encrypt data, AWS KMS binds the encryption context to the encrypted data. To decrypt data, you include the same encryption context in the request. Amazon Location Service encryption context Amazon Location uses the same encryption context in all AWS KMS cryptographic operations, where the key is aws:geo:arn and the value is the resource Amazon Resource Name (ARN). Example "encryptionContext": { "aws:geo:arn": "arn:aws:geo:us-west-2:111122223333:geofence-collection/SAMPLE- GeofenceCollection" } Using encryption context for monitoring
Using encryption context for monitoring
When you use a symmetric customer managed key to encrypt your tracker or geofence collection, you can also use the encryption context in audit records and logs to identify how the customer managed key is being used. The encryption context also appears in logs generated by AWS CloudTrail or Amazon CloudWatch Logs.
Using encryption context to control access to your customer managed key
You can use the encryption context in key policies and IAM policies as conditions to control access to your symmetric customer managed key. You can also use encryption context constraints in a grant. Amazon Location uses an encryption context constraint in grants to control access to the customer managed key in your account or Region. The grant constraint requires that the operations that the grant allows use the specified encryption context.
Example
The following are example key policy statements that grant access to a customer managed key for a specific encryption context. The condition in this policy statement requires that the grants have an encryption context constraint that specifies the encryption context.

{
    "Sid": "Enable DescribeKey",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/ExampleReadOnlyRole"
    },
    "Action": "kms:DescribeKey",
    "Resource": "*"
},
{
    "Sid": "Enable CreateGrant",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/ExampleReadOnlyRole"
    },
    "Action": "kms:CreateGrant",
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "kms:EncryptionContext:aws:geo:arn": "arn:aws:geo:us-west-2:111122223333:tracker/SAMPLE-Tracker"
        }
    }
}

Monitoring your encryption keys for Amazon Location Service
When you use an AWS KMS customer managed key with your Amazon Location Service resources, you can use AWS CloudTrail or Amazon CloudWatch Logs to track requests that Amazon Location sends to AWS KMS. The following examples are AWS CloudTrail events for CreateGrant, GenerateDataKeyWithoutPlaintext, Decrypt, and DescribeKey that you can use to monitor KMS operations called by Amazon Location to access data encrypted by your customer managed key:
CreateGrant
When you use an AWS KMS customer managed key to encrypt your tracker or geofence collection resources, Amazon Location sends a CreateGrant request on your behalf to access the KMS key in your AWS account. The grants that Amazon Location creates are specific to the resource associated with the AWS KMS customer managed key. In addition, Amazon Location uses the RetireGrant operation to remove a grant when you delete a resource.
The following example event records the CreateGrant operation:

{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "AROAIGDTESTANDEXAMPLE:Sampleuser01",
        "arn": "arn:aws:sts::111122223333:assumed-role/Admin/Sampleuser01",
        "accountId": "111122223333",
        "accessKeyId": "AKIAIOSFODNN7EXAMPLE3",
        "sessionContext": {
            "sessionIssuer": {
                "type": "Role",
                "principalId": "AROAIGDTESTANDEXAMPLE:Sampleuser01",
                "arn": "arn:aws:sts::111122223333:assumed-role/Admin/Sampleuser01",
                "accountId": "111122223333",
                "userName": "Admin"
            },
            "webIdFederationData": {},
            "attributes": {
                "mfaAuthenticated": "false",
                "creationDate": "2021-04-22T17:02:00Z"
            }
        },
        "invokedBy": "geo.amazonaws.com"
    },
    "eventTime": "2021-04-22T17:07:02Z",
    "eventSource": "kms.amazonaws.com",
    "eventName": "CreateGrant",
    "awsRegion": "us-west-2",
    "sourceIPAddress": "172.12.34.56",
    "userAgent": "ExampleDesktop/1.0 (V1; OS)",
    "requestParameters": {
        "retiringPrincipal": "geo.region.amazonaws.com",
        "operations": [
            "GenerateDataKeyWithoutPlaintext",
            "Decrypt",
            "DescribeKey"
        ],
        "keyId": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-123456SAMPLE",
        "granteePrincipal": "geo.region.amazonaws.com"
    },
    "responseElements": {
        "grantId": "0ab0ac0d0b000f00ea00cc0a0e00fc00bce000c000f0000000c0bc0a0000aaafSAMPLE"
    },
    "requestID": "ff000af-00eb-00ce-0e00-ea000fb0fba0SAMPLE",
    "eventID": "ff000af-00eb-00ce-0e00-ea000fb0fba0SAMPLE",
    "readOnly": false,
    "resources": [
        {
            "accountId": "111122223333",
            "type": "AWS::KMS::Key",
            "ARN": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-123456SAMPLE"
        }
    ],
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "eventCategory": "Management",
    "recipientAccountId": "111122223333"
}

GenerateDataKeyWithoutPlaintext
When you enable an AWS KMS customer managed key for your tracker or geofence collection resource, Amazon Location creates a unique table key. It sends a GenerateDataKeyWithoutPlaintext request to AWS KMS that specifies the AWS KMS customer managed key for the resource.
The following example event records the GenerateDataKeyWithoutPlaintext operation:

{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "AWSService",
        "invokedBy": "geo.amazonaws.com"
    },
    "eventTime": "2021-04-22T17:07:02Z",
    "eventSource": "kms.amazonaws.com",
    "eventName": "GenerateDataKeyWithoutPlaintext",
    "awsRegion": "us-west-2",
    "sourceIPAddress": "172.12.34.56",
    "userAgent": "ExampleDesktop/1.0 (V1; OS)",
    "requestParameters": {
        "encryptionContext": {
            "aws:geo:arn": "arn:aws:geo:us-west-2:111122223333:geofence-collection/SAMPLE-GeofenceCollection"
        },
        "keySpec": "AES_256",
        "keyId": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-123456SAMPLE"
    },
    "responseElements": null,
    "requestID": "ff000af-00eb-00ce-0e00-ea000fb0fba0SAMPLE",
    "eventID": "ff000af-00eb-00ce-0e00-ea000fb0fba0SAMPLE",
    "readOnly": true,
    "resources": [
        {
            "accountId": "111122223333",
            "type": "AWS::KMS::Key",
            "ARN": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-123456SAMPLE"
        }
    ],
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "eventCategory": "Management",
    "recipientAccountId": "111122223333",
    "sharedEventID": "57f5dbee-16da-413e-979f-2c4c6663475e"
}

Decrypt
When you access an encrypted tracker or geofence collection, Amazon Location calls the Decrypt operation to use the stored encrypted data key to access the encrypted data.
The following example event records the Decrypt operation:

{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "AWSService",
        "invokedBy": "geo.amazonaws.com"
    },
    "eventTime": "2021-04-22T17:10:51Z",
    "eventSource": "kms.amazonaws.com",
    "eventName": "Decrypt",
    "awsRegion": "us-west-2",
    "sourceIPAddress": "172.12.34.56",
    "userAgent": "ExampleDesktop/1.0 (V1; OS)",
    "requestParameters": {
        "encryptionContext": {
            "aws:geo:arn": "arn:aws:geo:us-west-2:111122223333:geofence-collection/SAMPLE-GeofenceCollection"
        },
        "keyId": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-123456SAMPLE",
        "encryptionAlgorithm": "SYMMETRIC_DEFAULT"
    },
    "responseElements": null,
    "requestID": "ff000af-00eb-00ce-0e00-ea000fb0fba0SAMPLE",
    "eventID": "ff000af-00eb-00ce-0e00-ea000fb0fba0SAMPLE",
    "readOnly": true,
    "resources": [
        {
            "accountId": "111122223333",
            "type": "AWS::KMS::Key",
            "ARN": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-123456SAMPLE"
        }
    ],
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "eventCategory": "Management",
    "recipientAccountId": "111122223333",
    "sharedEventID": "dc129381-1d94-49bd-b522-f56a3482d088"
}

DescribeKey
Amazon Location uses the DescribeKey operation to verify that the AWS KMS customer managed key associated with your tracker or geofence collection exists in the account and Region.
The following example event records the DescribeKey operation:

{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "AROAIGDTESTANDEXAMPLE:Sampleuser01",
        "arn": "arn:aws:sts::111122223333:assumed-role/Admin/Sampleuser01",
        "accountId": "111122223333",
        "accessKeyId": "AKIAIOSFODNN7EXAMPLE3",
        "sessionContext": {
            "sessionIssuer": {
                "type": "Role",
                "principalId": "AROAIGDTESTANDEXAMPLE:Sampleuser01",
                "arn": "arn:aws:sts::111122223333:assumed-role/Admin/Sampleuser01",
                "accountId": "111122223333",
                "userName": "Admin"
            },
            "webIdFederationData": {},
            "attributes": {
                "mfaAuthenticated": "false",
                "creationDate": "2021-04-22T17:02:00Z"
            }
        },
        "invokedBy": "geo.amazonaws.com"
    },
    "eventTime": "2021-04-22T17:07:02Z",
    "eventSource": "kms.amazonaws.com",
    "eventName": "DescribeKey",
    "awsRegion": "us-west-2",
    "sourceIPAddress": "172.12.34.56",
    "userAgent": "ExampleDesktop/1.0 (V1; OS)",
    "requestParameters": {
        "keyId": "00dd0db0-0000-0000-ac00-b0c000SAMPLE"
    },
    "responseElements": null,
    "requestID": "ff000af-00eb-00ce-0e00-ea000fb0fba0SAMPLE",
    "eventID": "ff000af-00eb-00ce-0e00-ea000fb0fba0SAMPLE",
    "readOnly": true,
    "resources": [
        {
            "accountId": "111122223333",
            "type": "AWS::KMS::Key",
            "ARN": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-123456SAMPLE"
        }
    ],
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "eventCategory": "Management",
    "recipientAccountId": "111122223333"
}

Learn more
The following resources provide more information about data encryption at rest.
• For more information about AWS Key Management Service basic concepts, see the AWS Key Management Service Developer Guide.
• For more information about Security best practices for AWS Key Management Service, see the AWS Key Management Service Developer Guide.
Data in transit encryption for Amazon Location Service
Amazon Location protects data in transit, as it travels to and from the service, by automatically encrypting all inter-network data using the Transport Layer Security (TLS) 1.2 encryption protocol. Direct HTTPS requests sent to the Amazon Location Service APIs are signed by using the AWS Signature Version 4 Algorithm to establish a secure connection.
Incident Response in Amazon Location Service
Security is the highest priority at AWS. As part of the AWS Cloud shared responsibility model, AWS manages a data center and network architecture that meets the requirements of the most security-sensitive organizations. As an AWS customer, you share a responsibility for maintaining security in the cloud. This means you control the security you choose to implement from the AWS tools and features you have access to. By establishing a security baseline that meets the objectives for your applications running in the cloud, you're able to detect deviations that you can respond to. Since security incident response can be a complex topic, we encourage you to review the following resources so that you are better able to understand the impact that incident response (IR) and your choices have on your corporate goals: AWS Security Incident Response Guide, AWS Security Best Practices whitepaper, and the AWS Cloud Adoption Framework (AWS CAF).
Logging and Monitoring in Amazon Location Service
Logging and monitoring are an important part of incident response.
It lets you establish a security baseline to detect deviations that you can investigate and respond to. By implementing logging and monitoring for Amazon Location Service, you're able to maintain the reliability, availability, and performance of your projects and resources. AWS provides several tools that can help you log and collect data for incident response:
AWS CloudTrail
Amazon Location Service integrates with AWS CloudTrail, which is a service that provides a record of actions taken by a user, role, or AWS service. This includes actions from the Amazon Location Service console and programmatic calls to Amazon Location API operations. These records of action are called events. For more information, see Logging and monitoring Amazon Location Service with AWS CloudTrail.
Amazon CloudWatch
You can use Amazon CloudWatch to collect and analyze metrics related to your Amazon Location Service account. You can enable CloudWatch alarms to notify you if a metric meets certain conditions and has reached a specified threshold. When you create an alarm, CloudWatch sends a notification to an Amazon Simple Notification Service (Amazon SNS) topic that you define. For more information, see Monitoring Amazon Location Service with Amazon CloudWatch.
AWS Health Dashboards
Using AWS Health Dashboards, you can verify the status of Amazon Location Service.
You can also monitor and view historical data about any events or issues that might affect your AWS environment. For more information, see the AWS Health User Guide.
Compliance validation for Amazon Location Service
To learn whether an AWS service is within the scope of specific compliance programs, see AWS services in Scope by Compliance Program and choose the compliance program that you are interested in. For general information, see AWS Compliance Programs.
You can download third-party audit reports using AWS Artifact. For more information, see Downloading Reports in AWS Artifact.
Your compliance responsibility when using AWS services is determined by the sensitivity of your data, your company's compliance objectives, and applicable laws and regulations. AWS provides the following resources to help with compliance:
• Security Compliance & Governance – These solution implementation guides discuss architectural considerations and provide steps for deploying security and compliance features.
• HIPAA Eligible Services Reference – Lists HIPAA eligible services. Not all AWS services are HIPAA eligible.
• AWS Compliance Resources – This collection of workbooks and guides might apply to your industry and location.
• AWS Customer Compliance Guides – Understand the shared responsibility model through the lens of compliance. The guides summarize the best practices for securing AWS services and map the guidance to security controls across multiple frameworks (including National Institute of Standards and Technology (NIST), Payment Card Industry Security Standards Council (PCI), and International Organization for Standardization (ISO)).
• Evaluating Resources with Rules in the AWS Config Developer Guide – The AWS Config service assesses how well your resource configurations comply with internal practices, industry guidelines, and regulations.
• AWS Security Hub – This AWS service provides a comprehensive view of your security state within AWS. Security Hub uses security controls to evaluate your AWS resources and to check your compliance against security industry standards and best practices. For a list of supported services and controls, see Security Hub controls reference.
• Amazon GuardDuty – This AWS service detects potential threats to your AWS accounts, workloads, containers, and data by monitoring your environment for suspicious and malicious activities. GuardDuty can help you address various compliance requirements, like PCI DSS, by meeting intrusion detection requirements mandated by certain compliance frameworks.
• AWS Audit Manager – This AWS service helps you continuously audit your AWS usage to simplify how you manage risk and compliance with regulations and industry standards.
Resilience in Amazon Location Service
The AWS global infrastructure is built around AWS Regions and Availability Zones. AWS Regions provide multiple physically separated and isolated Availability Zones, which are connected with low-latency, high-throughput, and highly redundant networking. With Availability Zones, you can design and operate applications and databases that automatically fail over between zones without interruption. Availability Zones are more highly available, fault tolerant, and scalable than traditional single or multiple data center infrastructures. For more information about AWS Regions and Availability Zones, see AWS Global Infrastructure. In addition to the AWS global infrastructure, Amazon Location offers several features to help support your data resiliency and backup needs.
Infrastructure security in Amazon Location Service
As a managed service, Amazon Location Service is protected by AWS global network security. For information about AWS security services and how AWS protects infrastructure, see AWS Cloud Security. To design your AWS environment using the best practices for infrastructure security, see Infrastructure Protection in Security Pillar AWS Well-Architected Framework.
You use AWS published API calls to access Amazon Location through the network. Clients must support the following:
• Transport Layer Security (TLS). We require TLS 1.2 and recommend TLS 1.3.
• Cipher suites with perfect forward secrecy (PFS) such as DHE (Ephemeral Diffie-Hellman) or ECDHE (Elliptic Curve Ephemeral Diffie-Hellman). Most modern systems such as Java 7 and later support these modes.
Additionally, requests must be signed by using an access key ID and a secret access key that is associated with an IAM principal. Or you can use the AWS Security Token Service (AWS STS) to generate temporary security credentials to sign requests.
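The AWS CLI and the AWS SDKs apply Signature Version 4 signing automatically. If you want to sign requests with temporary credentials rather than long-term access keys, the following is a hedged sketch using AWS STS; the role ARN and session name are placeholders.

aws sts assume-role \
    --role-arn arn:aws:iam::111122223333:role/ExampleLocationRole \
    --role-session-name example-location-session

Export the returned AccessKeyId, SecretAccessKey, and SessionToken as AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN, and subsequent calls such as aws location list-trackers are then signed with the temporary credentials.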
AWS PrivateLink for Amazon Location
With AWS PrivateLink for Amazon Location, you can provision interface Amazon VPC endpoints (interface endpoints) in your virtual private cloud (Amazon VPC). These endpoints are directly accessible from applications that are on premises over VPN and AWS Direct Connect, or in a different AWS Region over Amazon VPC peering. Using AWS PrivateLink and interface endpoints, you can simplify private network connectivity from your applications to Amazon Location.
Applications in your VPC don't need public IP addresses to call Amazon Location operations through interface VPC endpoints. Interface endpoints are represented by one or more elastic network interfaces (ENIs) that are assigned private IP addresses from subnets in your Amazon VPC. Requests to Amazon Location over interface endpoints stay on the Amazon network. You can also access interface endpoints in your Amazon VPC from on-premises applications through AWS Direct Connect or AWS Virtual Private Network (AWS VPN). For more information about how to connect your Amazon VPC with your on-premises network, see the AWS Direct Connect User Guide and the AWS Site-to-Site VPN User Guide. For general information about interface endpoints, see Interface Amazon VPC endpoints (AWS PrivateLink) in the AWS PrivateLink Guide.
Topics
• Types of Amazon VPC endpoints for Amazon Location Service
• Considerations when using AWS PrivateLink for Amazon Location Service
• Create an interface endpoint for Amazon Location Service
• Access Amazon Location API operations from Amazon Location interface endpoints
• Update an on-premises DNS configuration
• Create an Amazon VPC endpoint policy for Amazon Location
Types of Amazon VPC endpoints for Amazon Location Service
You can use one type of Amazon VPC endpoint to access Amazon Location Service: interface endpoints (by using AWS PrivateLink). Interface endpoints use private IP addresses to route requests to Amazon Location from within your Amazon VPC, on premises, or from an Amazon VPC in another AWS Region by using Amazon VPC peering. For more information, see What is Amazon VPC peering? and Transit Gateway vs Amazon VPC peering. Interface endpoints are compatible with gateway endpoints. If you have an existing gateway endpoint in the Amazon VPC, you can use both types of endpoints in the same Amazon VPC.
Interface endpoints for Amazon Location have the following properties:
• Your network traffic remains on the AWS network
• Use private IP addresses from your Amazon VPC to access Amazon Location Service
• Allows access from on premises
• Allows access from an Amazon VPC endpoint in another AWS Region by using Amazon VPC peering or AWS Transit Gateway
• Interface endpoints are billed
Considerations when using AWS PrivateLink for Amazon Location Service
Amazon VPC considerations apply to AWS PrivateLink for Amazon Location Service. For more information, see Interface endpoint considerations and AWS PrivateLink quotas in the AWS PrivateLink Guide. In addition, the following restrictions apply.
AWS PrivateLink for Amazon Location Service doesn't support the following:
• Transport Layer Security (TLS) 1.1
• Private and Hybrid Domain Name System (DNS) services
Amazon VPC endpoints:
• Don't support Amazon Location Service Maps API operations, including: GetGlyphs, GetSprites, and GetStyleDescriptor
• Don't support cross-Region requests. Ensure that you create your endpoint in the same Region where you plan to issue your API calls to Amazon Location Service.
• Only support Amazon-provided DNS through Amazon Route 53. If you want to use your own DNS, use conditional DNS forwarding. For more information, see DHCP Options Sets in the Amazon VPC User Guide.
• Must allow incoming connections on port 443 from the private subnet of the VPC through the security group attached to the VPC endpoint.
You can submit up to 50,000 requests per second for each AWS PrivateLink endpoint that you enable.
Note
Network connectivity timeouts to AWS PrivateLink endpoints are not within the scope of Amazon Location error responses and need to be appropriately handled by your applications connecting to the AWS PrivateLink endpoints.
Create an interface endpoint for Amazon Location Service
You can create an interface endpoint for Amazon Location Service using either the Amazon VPC Console or the AWS Command Line Interface (AWS CLI). For more information, see Create an interface endpoint in the AWS PrivateLink Guide.
There are six different VPC endpoints, one for each feature offered by Amazon Location Service:
• Maps: com.amazonaws.region.geo.maps
• Places: com.amazonaws.region.geo.places
• Routes: com.amazonaws.region.geo.routes
• Geofences: com.amazonaws.region.geo.geofencing
• Trackers: com.amazonaws.region.geo.tracking
• Metadata: com.amazonaws.region.geo.metadata
For example: com.amazonaws.us-east-2.geo.maps
After you create the endpoint, you have the option to enable a private DNS hostname. To enable it, select Enable Private DNS Name in the Amazon VPC Console when you create the VPC endpoint. If you enable private DNS for the interface endpoint, you can make API requests to Amazon Location Service using its default Regional DNS name. The following examples show the format of the default Regional DNS names.
• maps.geo.region.amazonaws.com
• places.geo.region.amazonaws.com
• routes.geo.region.amazonaws.com
• tracking.geo.region.amazonaws.com
• geofencing.geo.region.amazonaws.com
• metadata.geo.region.amazonaws.com
The previous DNS names are for IPv4 domains. The following IPv6 DNS names can also be used for interface endpoints.
• maps.geo.region.api.aws
• places.geo.region.api.aws
• routes.geo.region.api.aws
• tracking.geo.region.api.aws
• geofencing.geo.region.api.aws
• metadata.geo.region.api.aws
Access Amazon Location API operations from Amazon Location interface endpoints
You can use the AWS CLI or AWS SDKs to access Amazon Location API operations through Amazon Location interface endpoints.
Example: Create a VPC endpoint
aws ec2 create-vpc-endpoint \
    --region us-east-1 \
    --service-name location-service-name \
    --vpc-id client-vpc-id \
    --subnet-ids client-subnet-id \
    --vpc-endpoint-type Interface \
    --security-group-ids client-sg-id
Example: Modify a VPC endpoint
aws ec2 modify-vpc-endpoint \
    --region us-east-1 \
    --vpc-endpoint-id client-vpc-endpoint-id \
    --policy-document policy-document \ #example optional parameter
    --add-security-group-ids security-group-ids \ #example optional parameter
    # any additional parameters needed, see PrivateLink documentation for more details
Update an on-premises DNS configuration
When using endpoint-specific DNS names to access the interface endpoints for Amazon Location, you don't have to update your on-premises DNS resolver. You can resolve the endpoint-specific DNS name with the private IP address of the interface endpoint from the public Amazon Location DNS domain.
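As a quick, hedged check of the DNS setup, you can resolve one of the Regional DNS names from an instance inside the VPC (or from your on-premises network once DNS forwarding is in place); the Region below is only an example. With private DNS enabled on the interface endpoint, the name should resolve to private IP addresses from your subnets rather than public addresses.

dig +short routes.geo.us-east-1.amazonaws.com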
Use interface endpoints to access Amazon Location without a gateway endpoint or an internet gateway in the Amazon VPC
Interface endpoints in your Amazon VPC can route both in-Amazon VPC applications and on-premises applications to Amazon Location over the Amazon network.
Create an Amazon VPC endpoint policy for Amazon Location
You can attach an endpoint policy to your Amazon VPC endpoint that controls access to Amazon Location. The policy specifies the following information:
• The AWS Identity and Access Management (IAM) principal that can perform actions
• The actions that can be performed
• The resources on which actions can be performed
Example: Sample VPC endpoint policy for accessing Amazon Location Service Places APIs:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Allow-access-to-location-service-places-operations",
            "Effect": "Allow",
            "Action": [
                "geo-places:*",
                "geo:*"
            ],
            "Resource": [
                "arn:aws:geo-places:us-east-1::provider/default",
                "arn:aws:geo:us-east-1:*:place-index/*"
            ]
        }
    ]
}

Configuration and vulnerability analysis in Amazon Location
Configuration and IT controls are a shared responsibility between AWS and you, our customer. For more information, see the AWS shared responsibility model.
Cross-service confused deputy prevention
The confused deputy problem is a security issue where an entity that doesn't have permission to perform an action can coerce a more-privileged entity to perform the action. In AWS, cross-service impersonation can result in the confused deputy problem. Cross-service impersonation can occur when one service (the calling service) calls another service (the called service). The calling service can be manipulated to use its permissions to act on another customer's resources in a way it should not otherwise have permission to access. To prevent this, AWS provides tools that help you protect your data for all services with service principals that have been given access to resources in your account. Amazon Location Service does not act as a calling service on your behalf to other AWS services, so you do not need to add these protections in this case. To learn more, see The confused deputy problem in the AWS Identity and Access Management User Guide.
Best practices for Amazon Location Service
This topic provides best practices to help you use Amazon Location Service. While these best practices can help you take full advantage of Amazon Location Service, they do not represent a complete solution. You should follow only the recommendations that are applicable for your environment.
Topics
• Security
Security
To help manage or even avoid security risks, consider the following best practices:
• Use identity federation and IAM roles to manage, control, or limit access to your Amazon Location resources. For more information, see IAM Best Practices in the IAM User Guide.
• Follow the principle of least privilege to grant only the minimum required access to your Amazon Location Service resources.
• For Amazon Location Service resources used in web applications, restrict access by using an aws:referer IAM condition so that only sites on your allow list can use them.
• Use monitoring and logging tools to track resource access and usage. For more information, see the section called “Logging and Monitoring” and Logging Data Events for Trails in the AWS CloudTrail User Guide.
• Use secure connections, such as those that begin with https://, to add security and protect users against attacks while data is being transmitted between the server and browser.
Detective security best practices for Amazon Location Service
The following best practices for Amazon Location Service can help detect security incidents:
Implement AWS monitoring tools
Monitoring is critical to incident response and maintains the reliability and security of Amazon Location Service resources and your solutions. You can choose from the several monitoring tools and services available through AWS to monitor your resources and your other AWS services. For example, Amazon CloudWatch allows you to monitor metrics for Amazon Location Service and enables you to set up alarms that notify you if a metric meets the conditions you've set and has reached a threshold you've defined. When you create an alarm, you can set CloudWatch to send a notification through Amazon Simple Notification Service (Amazon SNS). For more information, see the section called “Logging and Monitoring”.
Enable AWS logging tools
Logging provides a record of actions taken by a user, role, or AWS service in Amazon Location Service. You can implement logging tools such as AWS CloudTrail to collect data on actions and detect unusual API activity. When you create a trail, you can configure CloudTrail to log events. Events are records of resource operations performed on or within a resource, such as the request made to Amazon Location, the IP address from which the request was made, who made the request, and when the request was made, along with additional data. For more information, see Logging Data Events for Trails in the AWS CloudTrail User Guide.
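As one concrete way to act on the logging guidance above, the following is a hedged AWS CLI sketch that creates and starts a trail; the trail name and bucket name are placeholders, and the bucket must already exist with a bucket policy that allows CloudTrail to write to it.

aws cloudtrail create-trail \
    --name example-location-audit-trail \
    --s3-bucket-name amzn-s3-demo-bucket

aws cloudtrail start-logging \
    --name example-location-audit-trail

Management events are logged by default once the trail is running; see the CloudTrail references above for which Amazon Location actions are recorded as management events and which require data event logging.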
Preventive security best practices for Amazon Location Service
The following best practices for Amazon Location Service can help prevent security incidents:
Use secure connections
Always use encrypted connections, such as those that begin with https://, to keep sensitive information secure in transit.
Implement least privilege access to resources
When you create custom policies for Amazon Location resources, grant only the permissions required to perform a task. Start with a minimum set of permissions and grant additional permissions as needed. Implementing least privilege access is essential to reducing the risk and impact that could result from errors or malicious attacks. For more information, see the section called “Use IAM”.
Use globally unique IDs as device IDs
Use the following conventions for device IDs.
• Device IDs must be unique.
• Device IDs should not be secret, because they can be used as foreign keys to other systems.
• Device IDs should not contain personally identifiable information (PII), such as phone device IDs or email addresses.
• Device IDs should not be predictable. Opaque identifiers such as UUIDs are recommended.
Do not include PII in device position properties
When sending device updates (for example, using DevicePositionUpdate), do not include personally identifiable information (PII) such as a phone number or email address in the PositionProperties.
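The following is a hedged AWS CLI sketch that follows both of the preceding guidelines: the device ID is an opaque UUID rather than a phone number or email address, and the position properties carry only non-identifying metadata. The tracker name, UUID, and property values are placeholders.

aws location batch-update-device-position \
    --tracker-name ExampleTracker \
    --updates '[
      {
        "DeviceId": "3f7c1b2e-9d4a-4c6f-8b21-5e9a0d7c4f10",
        "SampleTime": "2024-05-28T21:27:56Z",
        "Position": [-123.1167, 49.2814],
        "PositionProperties": {"fleet": "delivery-van", "firmware": "1.4.2"}
      }
    ]'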
Document history
The following describes the documentation updates for Amazon Location Service. For notification about updates, you can subscribe to an RSS feed.
Change: Amazon Location Service releases enhanced version to general availability
Description: Amazon Location Service now offers enhanced Places, Routes, and Maps functionality, enabling developers to add advanced location capabilities into their applications more easily. These improvements introduce new capabilities and a new streamlined developer experience to support location-based use cases across industries such as healthcare, transportation & logistics, and retail. For more information, see the release.
Date: October 31, 2024
AWS Glossary
For the latest AWS terminology, see the AWS glossary in the AWS Glossary Reference.
361 UntagResource .................................................................................................................................... 364 UpdateActiveModelVersion ............................................................................................................... 367 UpdateInferenceScheduler ................................................................................................................ 372 UpdateLabelGroup ............................................................................................................................. 376 UpdateModel ....................................................................................................................................... 379 vii Amazon Lookout for Equipment User Guide UpdateRetrainingScheduler .............................................................................................................. 383 Data Types ................................................................................................................................................. 386 CategoricalValues ................................................................................................................................ 388 CountPercent ....................................................................................................................................... 389 DataIngestionJobSummary ............................................................................................................... 390 DataPreProcessingConfiguration ..................................................................................................... 392 DataQualitySummary ........................................................................................................................ 394 DatasetSchema .................................................................................................................................... 396 DatasetSummary ................................................................................................................................ 397 DuplicateTimestamps ......................................................................................................................... 399 InferenceEventSummary ................................................................................................................... 400 InferenceExecutionSummary ............................................................................................................ 402 InferenceInputConfiguration ............................................................................................................ 406 InferenceInputNameConfiguration .................................................................................................. 408 InferenceOutputConfiguration ......................................................................................................... 409 InferenceS3InputConfiguration ........................................................................................................ 410 InferenceS3OutputConfiguration .................................................................................................... 411 InferenceSchedulerSummary ........................................................................................................... 412 IngestedFilesSummary ....................................................................................................................... 
415 IngestionInputConfiguration ............................................................................................................ 417 IngestionS3InputConfiguration ........................................................................................................ 418 InsufficientSensorData ....................................................................................................................... 420 InvalidSensorData ............................................................................................................................... 421 LabelGroupSummary ......................................................................................................................... 422 LabelsInputConfiguration .................................................................................................................. 424 LabelsS3InputConfiguration ............................................................................................................. 425 LabelSummary .................................................................................................................................... 426 LargeTimestampGaps ........................................................................................................................ 429 MissingCompleteSensorData ............................................................................................................ 430 MissingSensorData ............................................................................................................................. 431 ModelDiagnosticsOutputConfiguration .......................................................................................... 432 ModelDiagnosticsS3OutputConfiguration ..................................................................................... 433 ModelSummary ................................................................................................................................... 435 ModelVersionSummary ...................................................................................................................... 440 MonotonicValues ................................................................................................................................. 443 MultipleOperatingModes ................................................................................................................... 444 viii Amazon Lookout for Equipment User Guide RetrainingSchedulerSummary .......................................................................................................... 445 S3Object ............................................................................................................................................... 447 SensorStatisticsSummary .................................................................................................................. 448 SensorsWithShortDateRange ........................................................................................................... 452 Tag ......................................................................................................................................................... 453 UnsupportedTimestamps .................................................................................................................. 454 Common Errors ........................................................................................................................................ 
454 Common Parameters ............................................................................................................................... 456 Document history ........................................................................................................................ 459 ix Amazon Lookout for Equipment User Guide Amazon Lookout for Equipment is no longer open to new customers. Existing customers can continue to use the service as normal. For capabilities similar to Amazon Lookout for Equipment see our blog post. x Amazon Lookout for Equipment User Guide What is Amazon Lookout for Equipment? Amazon Lookout for Equipment is a machine learning (ML) service for monitoring industrial equipment that detects abnormal equipment behavior and identifies potential failures. With Lookout for Equipment, you can implement predictive maintenance programs and identify suboptimal equipment processes. Amazon Lookout for Equipment doesn't require extensive ML knowledge or experience. You upload historical data generated by your industrial equipment to train a custom ML model that finds potential failures by leveraging up to 300 sensors into a single model. Lookout for Equipment automatically creates the best model to learn your equipment's normal operating conditions. The model is optimized to find abnormal equipment behavior that occurred in the historical data. Using either the AWS console or the AWS SDK, you run the model to process new sensor data in real time. To use Amazon Lookout for Equipment, you do the following: 1. Format and upload your historical data to an Amazon Simple Storage Service (Amazon S3) bucket. You can use data from process historians, Supervisory Control and Data Acquisition (SCADA) systems, or another condition monitoring system. Format and upload data showing the periods of failures or abnormal behavior in your historical data, if you have it. 2. Create a dataset from the data that you've uploaded. 3. Choose the data in the dataset that is relevant to the asset whose behavior you want to analyze. 4. Add the periods of historical failures shown in the data, if you have it. 5. Train your ML model using Lookout for Equipment. 6. After fine-tuning the model, deploy it to monitor data
in real time.

Lookout for Equipment is designed to monitor fixed and stationary industrial equipment that operates with limited variability in operating conditions. This includes rotating equipment such as pumps, compressors, motors, computer numerical control (CNC) machines, and turbines. It also targets process industries, with applications such as heat exchangers, boilers, and inverters. Lookout for Equipment is a back-end analytics service, meant to supplement, and plug into, existing maintenance systems.

Topics
• Are you a first-time user of Lookout for Equipment?
• Pricing for Amazon Lookout for Equipment

Are you a first-time user of Lookout for Equipment?

If you are a first-time user of Lookout for Equipment, we recommend that you read the following sections in the listed order:
1. How Amazon Lookout for Equipment works – Explains how Lookout for Equipment works and shows you how you can build a predictive maintenance system that meets your specific needs.
2. Best practices with Lookout for Equipment – Explains some basic Lookout for Equipment concepts and shows you how to get started with analyzing your data.

Pricing for Amazon Lookout for Equipment

For information, see Amazon Lookout for Equipment Pricing.

How Amazon Lookout for Equipment works

Amazon Lookout for Equipment uses machine learning to detect abnormal behavior in your equipment and identify potential failures. Each piece of industrial equipment is referred to as an industrial asset, or asset. To use Lookout for Equipment to monitor your asset, you do the following:
1. Provide Lookout for Equipment with your asset's data. The data comes from sensors that measure different features of your asset. For example, you could have one sensor that measures temperature and another that measures pressure.
2. Train an anomaly detection model on the data.
3. Monitor your asset with the model that you've trained.

You need to train a model for each of your assets because they each have their own data signature. A data signature indicates the distinct behavior and characteristics of an individual asset. This signature depends on the age of the equipment, its operating environment, what sensors are installed (including process data), who operates it, and many other factors. You use Amazon Lookout for Equipment to build a custom ML model for each asset. For example, you would build a custom model for each of two assets of the same asset type, Pump 1 and Pump 2. The model is trained to use data to establish a baseline for the asset. It's trained to know what constitutes normal behavior. As it monitors your equipment, it can identify abnormal behavior that might indicate a precursor to an asset failure.
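The same workflow is also available programmatically through the AWS SDK for Python (Boto3), which later chapters of this guide use in the Python SDK examples. As a minimal sketch only (it assumes your AWS credentials and default Region are already configured; it does not create anything), this is what connecting to the service and listing existing datasets looks like:

# Minimal sketch: connect to Lookout for Equipment and list existing datasets.
# Assumes credentials and a Region are already configured for Boto3.
import boto3

lookout = boto3.client("lookoutequipment")

for summary in lookout.list_datasets().get("DatasetSummaries", []):
    # Each summary describes one dataset in your account.
    print(summary.get("DatasetName"), summary.get("Status"))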
Amazon Lookout for Equipment uses machine learning to interpret the relationships between sensors, and to detect deviations from normal behavior because asset failures are rare and even the same failure type might have its own unique data pattern. Detected failures are preceded by behavior or conditions that fall out of the normal behavior of the equipment, and Lookout for Equipment is designed to look for those behaviors or conditions. Additionally, if available, you can highlight abnormal equipment behavior using labels. The trained model can use the anomalous behavior in the dataset to improve its performance. When you train a model, Amazon Lookout for Equipment evaluates how different types of ML models perform with your asset's data. It chooses the model that performs the best on the dataset to monitor your equipment. 3 Amazon Lookout for Equipment User Guide You can now use the model to monitor your asset. You can also schedule the frequency with which Amazon Lookout for Equipment monitors the asset. Amazon Lookout for Equipment step by step 1. Set up your account. 2. Create your project. 3. Format your data. 4. Add your dataset to your project. 5. Review the dataset ingestion. 6. Train your model. 7. Evaluate your model. 8. Schedule inference. 9. Review your inference results. You repeat the preceding steps for each asset that you want to monitor. Step by step 4 Amazon Lookout for Equipment User Guide Setting up your AWS account Before you can start
model to monitor your asset. You can also schedule the frequency with which Amazon Lookout for Equipment monitors the asset. Amazon Lookout for Equipment step by step 1. Set up your account. 2. Create your project. 3. Format your data. 4. Add your dataset to your project. 5. Review the dataset ingestion. 6. Train your model. 7. Evaluate your model. 8. Schedule inference. 9. Review your inference results. You repeat the preceding steps for each asset that you want to monitor. Step by step 4 Amazon Lookout for Equipment User Guide Setting up your AWS account Before you can start with Lookout for Equipment, you must sign up for an AWS account if you don't already have one. When you sign up for Amazon Web Services (AWS), your AWS account is automatically signed up for all AWS services, including Lookout for Equipment. If you already have an AWS account, skip to the next topic. Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Sign up for an AWS account 5 Amazon Lookout for Equipment User Guide Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying least- privilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 
Create a user with administrative access 6 Amazon Lookout for Equipment User Guide 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create a user with administrative access 7 Amazon Lookout for Equipment User Guide Creating your project After you set up your account, the next step is to create a project. A project is a collection of resources associated with a single industrial asset that you want to monitor. Each project contains a dataset: a collection of historical data that you ingest into Lookout for Equipment. To create a project 1. Open Lookout for Equipment console. 2. Choose Create project. Tagging your project Optionally, you can assign tags to your project. Each tag is a label consisting of a user-defined key and value. Tags can help you associate your project with other resources in your AWS account. To learn more, see Tagging AWS resources. Naming your project 8 Amazon Lookout for Equipment User Guide Now that you've created your project, you'll need to check the formatting of your data. Then you'll
need to organize your files before you upload them to Amazon S3.

Formatting your data

You've set up your account and created your project. Soon, you'll organize your data to help Lookout for Equipment determine an appropriate schema. But first, you must ensure that your data is formatted properly. To monitor your equipment, you must provide Amazon Lookout for Equipment with time-series data from the sensors on your equipment. The data that you're providing to Lookout for Equipment is a series of numerical measurements from the sensors. You provide this data from either a data historian or Amazon Simple Storage Service (Amazon S3). A data historian is a software program that records and retrieves sensor data from your equipment.

To provide Amazon Lookout for Equipment with time-series data from the sensors, you must use properly formatted .csv files to create a dataset. Creating a dataset aggregates the data in a format that is suitable for analysis. You create a dataset for a single piece of equipment, or asset. You train an ML model on the dataset that you create. You then use that model to monitor your asset. You don't have to use all the data from the sensors to train a model. You train a model using data from some of the sensors in the dataset.

You can store the data for your asset in one of the following ways:
• Storing all of the sensor data in one .csv file (recommended)
• Using one .csv file for each sensor

Each .csv file must have at least two columns. The first column of the file is a timestamp that indicates the date and time. You must have at least one additional column containing the data from a sensor. Each subsequent column can have data from a different sensor.

Formatting all the data for an asset in one .csv file

To store the data for your asset in one .csv file, you arrange the data in the following format.

AssetData.csv

Timestamp        Sensor 1    Sensor 2
2020/1/1 0:00    2           12
2020/1/1 0:05    3           11
2020/1/1 0:10    5           10
2020/1/1 0:15    3           9
2020/1/1 0:20    4           12

The following example shows the information from the preceding table as a .csv file.

Timestamp,Sensor 1,Sensor 2
2020/1/1 0:00,2,12
2020/1/1 0:05,3,11
2020/1/1 0:10,5,10
2020/1/1 0:15,3,9
2020/1/1 0:20,4,12

You can choose your column names. We recommend using "Timestamp" as the name for the column with the time-series data. For the names of the columns with data from your sensors, we recommend using names that distinguish one sensor from another.
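If you are producing these files from existing data, a short script can save manual editing. The following is a minimal sketch, not part of the guide's official examples: it uses pandas, and the sensor names, values, and output path are placeholders. It uses underscores rather than spaces in the sensor column names because, as described later in this guide, spaces are not valid characters in column headers during ingestion.

# Minimal sketch: write a single-asset .csv in the layout shown above.
# Sensor names, values, and the output file name are placeholders.
import pandas as pd

readings = pd.DataFrame(
    {
        "Timestamp": pd.date_range("2020-01-01 00:00", periods=5, freq="5min"),
        "Sensor_1": [2, 3, 5, 3, 4],
        "Sensor_2": [12, 11, 10, 9, 12],
    }
)

# The timestamp column comes first, followed by one numeric column per sensor.
# This strftime pattern matches one of the accepted timestamp formats listed below.
readings["Timestamp"] = readings["Timestamp"].dt.strftime("%Y-%m-%d %H:%M:%S")
readings.to_csv("AssetData.csv", index=False)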
Formatting your data with one .csv file for each sensor

If you are storing the data from each sensor in one .csv file, use the following table to see how to format the data.

SensorData.csv

Timestamp        Sensor 3
2020/1/1 0:00    34
2020/1/1 0:05    33
2020/1/1 0:10    35
2020/1/1 0:15    33
2020/1/1 0:20    34

The following example shows the information from the preceding table as a .csv file.

Timestamp,Sensor 3
2020/1/1 0:00,34
2020/1/1 0:05,33
2020/1/1 0:10,35
2020/1/1 0:15,33
2020/1/1 0:20,34

We recommend using "Timestamp" as the name for the column with the time-series data. For the column with data from the sensor, we recommend using a name that distinguishes it from other sensors. You must have a double (numerical) as the data type for your sensor data. You can only train your model on numeric data.

When you are preparing your data, keep the following limits in mind:

Category                                                    Limit
Minimum date range                                          14 days
Maximum sensors per dataset                                 3,000
Maximum sensors per model                                   300
Maximum length of a sensor name                             200 characters
Maximum size of each .csv file                              5 GB
Maximum historical dataset size (combined .csv files)       50 GB
Maximum files per historical dataset                        1,000

• You can use the following delimiters for the data in the timestamp column: _ (underscore), - (hyphen), and space
• The timestamp column can use the following formats:
  • yyyy-MM-dd-HH-mm-ss
  • yyyy-MM-dd'T'HH:mm:ss
  • yyyy-MM-dd HH:mm:ss
  • yyyy-MM-dd-HH:mm:ss
  • yyyy-MM-dd'T'HH:mm
  • yyyy-MM-dd HH:mm
  • yyyy-MM-dd-HH:mm
  • yyyy/MM/dd'T'HH:mm:ss
  • yyyy/MM/dd HH:mm:ss
  • yyyy/MM/dd'T'HH:mm
  • yyyy/MM/dd HH:mm
  • yyyyMMdd'T'HH:mm:ss
  • yyyyMMdd HH:mm:ss
  • yyyyMMddHHmmss
  • yyyyMMdd'T'HH:mm
  • yyyyMMdd HH:mm
  • yyyyMMddHHmm
  • yyyy MM dd'T'HH:mm:ss
  • yyyy MM dd HH:mm:ss
  • yyyy MM dd'T'HH:mm
  • yyyy MM dd HH:mm
• The valid characters that you can use in the column names of the dataset are A-Z, a-z, 0-9, . (period), _ (underscore), and - (hyphen)

To learn more about the formats listed above, see the ISO 8601 standard. Now that your data is formatted properly, it's time to organize your files.

Understanding the minimum date range

The minimum date range for a dataset in Lookout for Equipment is 14 days. However, there are situations in which you should include more than 14 days' worth of data. The dataset that you submit should cover a period of time during which your asset functioned in all of its normal operating modes. This is necessary for Lookout for Equipment to recognize the difference between normal operation and anomalies. If your dataset does not include examples of all of your asset's normal operating modes, then Lookout for Equipment may find more false positives. In other words, it may identify some of your operating modes, with which it is not familiar, as anomalies. In such cases, you can help Lookout for Equipment accurately identify anomalies by labeling your data. For more information, see Understanding labeling.
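Before uploading, it can be worth checking a file against the 14-day minimum and the other limits above. The following is a minimal sketch only (the file name is a placeholder, and it covers just a few of the checks described in this chapter), using pandas:

# Minimal sketch: sanity-check a formatted .csv before uploading it.
import pandas as pd

df = pd.read_csv("AssetData.csv", parse_dates=["Timestamp"])

# Minimum date range: at least 14 days of data.
span = df["Timestamp"].max() - df["Timestamp"].min()
if span < pd.Timedelta(days=14):
    print(f"Only {span} of data; Lookout for Equipment needs at least 14 days.")

# A single model supports at most 300 sensors.
sensor_columns = [c for c in df.columns if c != "Timestamp"]
if len(sensor_columns) > 300:
    print("More than 300 sensors; a single model supports at most 300.")

# Models train on numeric data only.
non_numeric = [c for c in sensor_columns
               if not pd.api.types.is_numeric_dtype(df[c])]
if non_numeric:
    print("Non-numeric sensor columns:", non_numeric)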
Adding your dataset

Note: You can also add a dataset to your project or manage your dataset using the SDK.

You've created a project and you've uploaded your properly formatted data to Amazon S3. Now it's time to add the data to the project.

Setting your permissions

Lookout for Equipment requires permissions to access your data in Amazon S3, and to publish information about ingestion validation to CloudWatch Logs. On the Ingest dataset page, under Data source details, under IAM role, select your preferred method of giving Lookout for Equipment the appropriate permissions.
• Create an IAM role is the default. If you select this option, Lookout for Equipment will create a role for you with the appropriate permissions.
• Use an existing role. If you have previously created an IAM role that you want to use with this dataset, you can select it here.
• Enter a custom IAM role ARN. This is another way to choose an existing role.

Logging your ingestion data

If you are creating a new role, you can check the box indicating that you want Lookout for Equipment to store log data in Amazon CloudWatch Logs.
You can also enable logging by modifying an existing role. When you enable logging, Lookout for Equipment will record information about errors in the files submitted for ingestion. For example, the logs may help you identify duplicate timestamps, missing or invalid data, or rejected files. For more information, see Viewing your ingestion history. Setting your permissions 15 Amazon Lookout for Equipment User Guide Choosing your schema You have multiple options in how to structure your data in Amazon S3. Your choice of those options should be guided by one of two approaches. Approach A: Your data is already organized in a particular way, and you prefer to keep it that way. In this case, choose the option below that best matches the way your data is currently organized. Approach B: You haven’t yet organized your data. In that case, examine the options below, and choose one that looks easier to implement. Then organize your data according to that option. Before you proceed, be sure your data is formatted correctly. Note The following options assume that your files and folders have been organized by asset, which is what we recommend. However, organizing them by sensor, according to the same pattern, is also possible. • Option 1 (by filename): • The name of
In this case, choose the option below that best matches the way your data is currently organized. Approach B: You haven’t yet organized your data. In that case, examine the options below, and choose one that looks easier to implement. Then organize your data according to that option. Before you proceed, be sure your data is formatted correctly. Note The following options assume that your files and folders have been organized by asset, which is what we recommend. However, organizing them by sensor, according to the same pattern, is also possible. • Option 1 (by filename): • The name of the asset is the complete name of the CSV file. • All sensors from that asset are represented in that one CSV file. • The rest of the hierarchy of your Amazon S3 bucket doesn’t affect the ingestion of data for this asset. • You can place multiple asset files into one folder. • There is one CSV file per asset. This is a good option if you have a small set of files, each named after a specific asset. • Option 2 (by part of filename): • The name of the asset is part of the name of the CSV file. (Specifically, it's the part of the filename that precedes the delimiter.) • The rest of the hierarchy of your Amazon S3 bucket doesn’t affect the ingestion of data for this asset. • There are multiple CSV files per asset. Choosing your schema 16 Amazon Lookout for Equipment User Guide This is a good option if you have to break up large files and give the smaller files similar names, such as pump1_january.csv and pump1_february.csv. If you choose this option, then you must choose a delimiter. The delimiter indicates which character you are using, within the filename, to separate the name of the asset from the name of the sensor. If applicable, select your delimiter from the dropdown menu at the bottom of the console window. • Option 3 (by folder name): • The name of the asset is the complete name of the folder containing one or more CSV files. • The hierarchy in Amazon S3 is as follows: • Inside the Amazon S3 bucket is the folder you select when you specify the Amazon S3 location of your data source. Within that folder is a folder named after the asset. • Inside that folder are all the CSV files for that asset. • There can be multiple CSV files per asset. This is a good option if you have many files with long or inconsistent names, or a custom folder heirarchy that you want to retain. Uploading your data to Amazon S3 You have organized the .csv files that contain your data. Now, the next step is to upload those files to Amazon S3. Moving your data to Amazon S3 is a prerequisite to ingesting your data. 1. Open the Amazon S3 console. 2. Choose Create bucket 3. Under Bucket name, enter the name of your bucket. It might be useful to give your bucket the same name as your project, but that's optional. 4. Choose Create bucket 5. On the page with the list of buckets, choose your new bucket. 6. Choose Create folder. 7. Name your folder. Uploading your data to Amazon S3 17 Amazon Lookout for Equipment User Guide • If you chose to use one file for each asset, then the folder should be named after the facility. • If you chose to use one file for each sensor, then the folder should be named after the asset. 8. Choose Create folder. 9. Choose the folder you created. 10.Choose one of the Upload buttons. 11.On the Upload page, choose Add files. 12.Add the appropriate files from your computer. 13.Choose Upload. 14.Return to the Lookout for Equipment console. 
15.On the Ingest dataset page, under Data source details, indicate the location of the files you uploaded to Amazon S3. So far, you've created your project, and (on this page) you've uploaded your well-organized data. Now it's time to integrate those steps by adding your uploaded data to your project. Instructing Lookout for Equipment to ingest your data You've set your permissions, chosen your schema, and (if applicable) chosen your delimiter. Now it is time for Lookout for Equipment to ingest your data. 1. Return to the Ingest dataset page. 2. Choose Ingest dataset. You've ingested your data, but it's possible that there was an issue with the files, the sensors, or the ingestion job as a whole. To find out, you must now review data ingestion. Instructing Lookout for Equipment to ingest your data 18 Amazon Lookout for Equipment User Guide Reviewing data ingestion Lookout for Equipment has ingested your data. Now it's time to make sure everything went according to plan. Note After ingestion,
(if applicable) chosen your delimiter. Now it is time for Lookout for Equipment to ingest your data. 1. Return to the Ingest dataset page. 2. Choose Ingest dataset. You've ingested your data, but it's possible that there was an issue with the files, the sensors, or the ingestion job as a whole. To find out, you must now review data ingestion. Instructing Lookout for Equipment to ingest your data 18 Amazon Lookout for Equipment User Guide Reviewing data ingestion Lookout for Equipment has ingested your data. Now it's time to make sure everything went according to plan. Note After ingestion, a red or green status bar will appear at the top of the console screen. Although a green status bar indicates success, there may still be issues with specific files or sensors. It is still necessary to review the data validation summary. Topics • Reviewing the job • Checking the files • Evaluating sensor grades Next steps: • If your entire job did not succeed, then a red bar has appeared at the top of the Ingest datasetpage. In that case, it's time to review the job. • If the job itself succeeded, but not every file was ingested, then you'll find yourself on the details page for your dataset, with an error message indicating that there was a problem ingesting certain files. In that case, it's time to check the files. • If you did not receive any error messages regarding the ingestion job as a whole, or with issues with ingesting specific files, then it's time to look at your data's details by sensor. • If you want to make changes to your dataset based on what you've learned so far, and then re- ingest it, skip to replacing your dataset. Reviewing the job Few datasets are perfectly formed. Missing or incorrectly formatted values are common. Therefore, it's not feasible to fail an ingestion job because of a single error. Reviewing the job 19 Amazon Lookout for Equipment User Guide Lookout for Equipment operates with a bias toward complete ingestion. In other words, when it encounters a problem in the ingested data, Lookout for Equipment attempts to fix that problem automatically. Then it alerts you to whatever issues it encountered, and lets you know what fixes it implemented. If your entire job fails, consider the following possibilities: 1. The files are not .csv files, or they are corrupted, or they are unreadable for some other reason. 2. The files were not named or organized as explained under Adding your data. 3. The files contain no data, or 100% of the data they contain is not formatted in a way that Lookout for Equipment recognizes. If your ingestion job fails, check the issues above and make the appropriate adjustments. When you’re ready to try again, go back to Adding your dataset. Important This page is about troubleshooting the ingestion of an entire job. You can also read about why some specific files don't get ingested, and about evaluating the data from specific sensors. Checking the logs If you enabled CloudWatch Logs, then the logs may help you troubleshoot ingestion issues. The published logs may include the following error codes: • COMPLETE_SENSOR_DATA_MISSING : A sensor has no valid data assosicated with it. The log contains the sensor name and the associated component name. • DATA_MISSING_IN_COLUMN : Data associated with a sensor is invalid at a particular timestamp. Along with the sensor name and associated component name, the log contains details about the timestamp and the associated file path. • UNSUPPORTED_DATE_FORMATS : A value in the timestamp column is invalid. 
The log contains details about the timestamp string, the path of the file, and the associated component name. • INSUFFICIENT_SENSOR_DATA : A sensor is associated with less than 14 days of data. The log contains the sensor name, the component name, and the date range of data (in days) associated with the sensor. Checking the logs 20 Amazon Lookout for Equipment User Guide • DUPLICATE_TIMESTAMPS : A value in the timestamp column of the data is a duplicate entry. The timestamp in question and the associated file path are part of the log. • FILES_NOT_INGESTED : A file was not ingested during the ingestion workflow. The log contains details about the file's path. Checking the files If Lookout for Equipment fails to ingest a particular file, consider the following possibilities: • None of the sensors listed in the file have any data that can be ingested. • The file is not a .csv file, or the file is corrupted, or the file cannot be read for some other reason. To troubleshoot files that were not ingested: 1. From the Job details tab of the main console page for your dataset, note the names of any files that failed the ingestion process.
during the ingestion workflow. The log contains details about the file's path. Checking the files If Lookout for Equipment fails to ingest a particular file, consider the following possibilities: • None of the sensors listed in the file have any data that can be ingested. • The file is not a .csv file, or the file is corrupted, or the file cannot be read for some other reason. To troubleshoot files that were not ingested: 1. From the Job details tab of the main console page for your dataset, note the names of any files that failed the ingestion process. 2. To address issues with file formatting, see Formatting your data. 3. To address issues with individual sensors, see Understanding sensor quality. 4. When you’re ready to try again, see Replacing your dataset. Important This page is about troubleshooting the ingestion of specific files. You can also read about why the ingestion of an entire job can fail, and about evaluating the data from specific sensors. Anticipating schema detection problems The following circumstances will lead to the failure of an entire ingestion job: • One or more column headers contain one or more invalid characters. A single invalid character in a single column in a single file is enough to fail an entire job involving multiple files. • In a job consisting of a single file, that file has a formatting issue that prevents ingestion. Checking the files 21 Amazon Lookout for Equipment User Guide • In a job consisting of multiple files, every single file has a formatting issue that prevents ingestion. The easiest way to prevent problems with file ingestion is to take the following precautions: • Make sure your headers don't include any invalid characters, such as spaces. Valid characters are: 0-9, a-z, A-Z, and # $ . \ - (hyphen) _ (underscore) • Make sure that the timestamp column is the one furthest to the left in your CSV file. • Make sure that you don't have any duplicated column headers. Evaluating sensor grades This is where you can dive deep and troubleshoot exactly why you’re getting the error codes, and make some decisions about whether you want to remove some sensors from your dataset. Even if your ingestion job succeeds as a whole, and all your individual files also ingest successfully, you may decide not to use all the data from your sensors. For each sensor, Lookout for Equipment tallies up the number of issues that arise. Based on how many issues occur for each sensor, Lookout for Equipment issues that sensor a grade. Important This page is about evaluating the quality of the data coming from specific sensors. You can also read about why the ingestion of an entire job can fail, and about why the ingestion of a particular file can fail. Sensor grades • High No validation errors were detected in the data during ingestion. Data from sensors in this category is considered the most reliable for model training and evaluation. • Medium Evaluating sensor grades 22 Amazon Lookout for Equipment User Guide One or more potentially harmful validation errors were detected in the data during ingestion. Data from sensors in this category is considered less reliable for model training and evaluation. • Low One or more significant validation errors were detected in the data during ingestion. There's a high probability that training a model on data from sensors in this category will result in poor model performance. 
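The table below lists the individual validation errors behind these grades. If you prefer to pull the same ingestion and per-sensor information with the SDK rather than the console, the following is a minimal sketch only: the dataset name is a placeholder, and the exact response field names may differ from what is shown here, so treat the API Reference section of this guide as authoritative.

# Minimal sketch: review ingestion jobs and per-sensor statistics with Boto3.
# The dataset name is a placeholder; response fields may vary.
import boto3

lookout = boto3.client("lookoutequipment")

jobs = lookout.list_data_ingestion_jobs(DatasetName="my-pump-dataset")
for job in jobs.get("DataIngestionJobSummaries", []):
    print(job.get("JobId"), job.get("Status"))

stats = lookout.list_sensor_statistics(DatasetName="my-pump-dataset")
for sensor in stats.get("SensorStatisticsSummaries", []):
    print(sensor.get("SensorName"), sensor.get("DataExists"))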
Individual sensor errors

Error: No data found
Explanation: No data is present for this sensor.
Data quality: Low
Action taken by Lookout for Equipment: Cannot use data from this sensor.
Action recommended for customer: Do not use this sensor.

Error: Insufficient data
Explanation: Less than 14 days of data provided.
Data quality: Low
Action taken by Lookout for Equipment: Lookout for Equipment cannot use data from this sensor.
Action recommended for customer: This sensor cannot be used.

Error: Monotonic values detected
Explanation: Data only goes up, only goes down, or remains virtually static.
Data quality: Low
Action taken by Lookout for Equipment: Lookout for Equipment can use this sensor, but there is a risk of a high number of false positive alerts.
Action recommended for customer: Review this sensor and update the sensor if necessary. We recommend that you do not use monotonic sensors.

Error: Large data gaps detected
Explanation: Data has at least one gap longer than 30 days.
Data quality: Medium
Action taken by Lookout for Equipment: Lookout for Equipment will forward fill all the missing values. The data gaps may cause false alerts.
Action recommended for customer: Review missing values and update the sensor if necessary.

Error: Multiple operating modes detected
Explanation: Data shifts between ranges.
Data quality: Medium
Action taken by Lookout for Equipment: Lookout for Equipment can use this sensor, but there is a risk of a high number of false positive alerts.
Action recommended for customer: Multiple operating modes add variability. Ensure all normal modes of operation are present in both the training dataset and the evaluation dataset.

Error: Missing values detected
Explanation: Total number of missing values is above 10%.
Data quality: Medium
Action taken by Lookout for Equipment: If used, the missing values will be forward filled.
Action recommended for customer: Review the missing values and update the sensor if necessary.

Error: Categorical values detected
Explanation: This sensor has N<=10 distinct values.
Data quality: Medium
Action taken by Lookout for Equipment: Lookout for Equipment can use this sensor, but there is a risk of a high number of false positive alerts. Categorical values may lead to a higher number of false positive alerts.
Action recommended for customer: Review categorical values and update the sensor if necessary.

Error: Constant values detected
Explanation: The value does not change over time.
Data quality: Medium
Action taken by Lookout for Equipment: This sensor can be used, but it is not likely to add value.

Error: Non-numerical values detected
Explanation: Non-numerical data is present in this sensor.
Data quality: Medium
Action taken by Lookout for Equipment: The unsupported data will be removed and treated as missing values, then forward filled.
Action recommended for customer: Review the non-numerical data and update the sensor if necessary.

Error: Duplicate timestamps detected
Explanation: There are two or more rows that have the exact same timestamp.
Data quality: Medium
Action taken by Lookout for Equipment: The last encountered data point will be ingested, and the remaining duplicates will be omitted.
Action recommended for customer: Review the duplicate timestamps and update the sensor if necessary.

Choosing the best sensors for your project

Use this information to decide which sensors are right for your project. A high-grade sensor, from the point of view of Lookout for Equipment, is a sensor that did not trigger any errors in the table above. However, just because it's eligible to contribute doesn't mean it should. For example, suppose that the sensor is not actually attached to the asset that you're trying to monitor. Suppose that the sensor is attached, instead, to the leg of the table that the asset sits on. The sensor might collect data related to vibration or heat, and the data it collects may not trigger any of the errors in the table above. But that doesn't mean that the data is actually useful. The data the sensor is collecting may not be relevant to the operation of your asset. Even if the data is relevant, another sensor, nearby but better positioned, may already be collecting the most useful data for that part of the asset. Just because the data from a particular sensor doesn't trigger any of the errors above doesn't mean that it ought to be selected for your model.

A medium-grade sensor collects data that triggers at least one error from the table above. But that doesn't necessarily mean that you shouldn't use that sensor in your model. For example, your sensor may have been labeled as medium-grade because it duplicated a timestamp once over the course of 14 days.
Based on your knowledge of the asset and how the data was collected, you may decide that Lookout for Equipment's method of remediation (deleting all but the first record collected for duplicate timestamps) is appropriate and productive. On the other hand, after receiving the alert, you may review the data, find many duplicate timestamps, and decide that the duplications indicate a problem with how the data was collected. You may then decide not to use data from that sensor in your model. Data from a low-grade sensor contains a problem that may interfere with the accuracy of your model. We recommend that you do not include sensors with low-grade data when building your model. However, you may still choose to do so. Next Steps: • If you've just chosen Create model, then it's time to Train your model. • If you've changed your mind and decided to start over the data ingestion process, choose Replace your dataset. • If this isn't the first time you've ingested a dataset with Lookout for Equipment, you may want to View your ingestion history. Choosing the best sensors for your project 26 Amazon Lookout for Equipment User Guide Understanding labeling Amazon Lookout for Equipment takes an input dataset, which it assumes is under normal operating conditions, and trains a model to detect deviations from this baseline, normal operation. However,
If you've just chosen Create model, then it's time to Train your model. • If you've changed your mind and decided to start over the data ingestion process, choose Replace your dataset. • If this isn't the first time you've ingested a dataset with Lookout for Equipment, you may want to View your ingestion history. Choosing the best sensors for your project 26 Amazon Lookout for Equipment User Guide Understanding labeling Amazon Lookout for Equipment takes an input dataset, which it assumes is under normal operating conditions, and trains a model to detect deviations from this baseline, normal operation. However, if there are known periods of abnormal behavior in the input dataset, then that abnormal behavior can lead to less accurate models. To address this, we recommend that you use labels to identify the abnormal behavior in the input dataset and Lookout for Equipment can exclude that labeled data from model training. For example, if it is known that historical data for a machine contains data for planned or unplanned downtime states, you can use labels to identify and exclude the downtime state data from model training. By using the labels as inputs to the model, Lookout for Equipment can use additional modeling techniques that can improve the accuracy of the model. As an example, the following image shows the time intervals of known healthy equipment behavior and the time intervals of abnormal equipment behavior (that is, the width of the bars in the image). In your labeling data, you define the abnormal time interval (bar width in image) from the actual failure point (for example, Failure 1). You provide the labeled data as a CSV file to model training. Each line of the CSV indicates the time intervals when your equipment did not function properly. For more information, see Labeling your data. By consulting with Subject Matter Experts (SMEs) and understanding the various failure modes of the equipment you can provide a “lookahead” window indicating the amount of time the onset of the problem could have been detected. You typically get information for labeling abnormal behavior data from two sources: 27 Amazon Lookout for Equipment User Guide • Work orders which have been reported on the equipment. Work orders are notoriously subjective and inconsistent. That is why with Lookout for Equipment you only need to provide the approximate time of the failure and the approximate lookahead window in the labeling data you provide to Lookout for Equipment for model training. • SMEs who work and maintain the equipment often have in depth knowledge about when machinery was in an erroneous state. For information on how to apply labels when training a model, and the format of the label file, see Labeling your data. 28 Amazon Lookout for Equipment User Guide Training your model Note You can also train your model with the SDK. You've ingested your dataset, and you've reviewed any issues with the job, the files, or the sensors. You've also decided which sensors are providing the data that will be used to train your model. Now it's time to move forward with creating the model. First, you'll specify the details of your model, such as its name, encryption settings, and tags. Then, you'll configure your input data. During that process, you'll make decisions about the balance between your training dataset and your evaluation dataset, and whether or not to use data labels. 
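For context on what a label file can look like before you reach that step: the Labeling your data section defines the exact format that Lookout for Equipment accepts, but as an illustrative sketch only, a label file is a .csv in which each row marks one interval of known abnormal behavior, for example a start timestamp and an end timestamp per line. The timestamps below are placeholders, and the accepted timestamp formats are the ones listed in Formatting your data.

2020/07/01 00:00:00,2020/07/03 00:00:00
2020/09/15 12:00:00,2020/09/16 06:00:00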
Topics
• Specifying model details
• Configuring your input data

Specifying model details

Note
You can also view and train your model with the SDK.

This page describes the process of confirming your sensor inputs, naming your model, and choosing your tags.
1. On the Dataset page, under Details by sensor, use the checkboxes to select the sensor inputs that you want to include in your model.
2. Under Model details, enter a name for your model.
3. Optionally, customize your encryption settings. To learn more, see Data protection in Lookout for Equipment.
4. Optionally, associate tags with your model. To learn more, see Tagging AWS resources in the Amazon Web Services General Reference.
5. Choose Next.

Configuring your input data

First, choose your training and evaluation settings.
You can use Lookout for Equipment to train a model in one of the following ways:
• Training set, no evaluation set, and no labels: The data you have ingested so far becomes, in its entirety, the basis for creating the model. Lookout for Equipment gets its concept of normal equipment behavior from the one set of data that has been ingested. All of the data uploaded during the ingestion phase becomes training data; no labeled data is used in the model training process. No data is designated for evaluating the model. Once the model has been created, its first use will be in production, on the real-time data streaming from your equipment. This setup requires the least amount of time and effort, but in the long run, a model set up this way may be less accurate than one trained using one of the following methods.
• Training set, evaluation set, and no labels: You divide the data you've uploaded so far (during the ingestion phase) into two parts: training data and evaluation data. Lookout for Equipment uses the training data to learn about normal behavior for your equipment. Then, Lookout for Equipment puts the model to the test on the evaluation data. You examine the model's performance on the evaluation data, and on that basis, you decide if the model is useful. You don't give Lookout for Equipment any direct indication of what you consider to be anomalous behavior for your equipment.
• Training set, no evaluation set, and labels: You don't divide the ingested data into training data and evaluation data; it's all training data. But you do provide labeled data that indicates anomalous behavior.
• Training set, evaluation set, and labels: You identify some of the ingested data as training data, and the rest of it as evaluation data. You also provide labeled data that indicates periods of anomalous behavior. This option may be the most work to set up in the short term, but it may lead to a more accurate model in the long term.

Training, evaluating, and sampling

Now you'll need to decide how to split up your data between the training subset and the evaluation subset. The bigger the training set, the more data contributes to building your model. The bigger the evaluation set, the more chances you'll get to see how your model functions before you deploy it to production. A common breakdown is 80% training and 20% evaluation.
1. Choose the time range indicating your training data subset.
2. Choose the time range indicating your evaluation data subset.
3. Choose your sample rate. This is the rate at which the data will be sampled. A lower sample rate means that less data will be used, but the model will build faster. A higher sample rate means that more data will be used, but the model will take longer to build.
4. Enter your off-time indicators (optional). When your asset is off, Lookout for Equipment may interpret the absence of data as a behavioral anomaly (or as normal behavior). To prevent this, it's helpful to give Lookout for Equipment a clear indicator of whether or not your asset has been turned off. Choose one particular sensor whose status is indicative of whether your asset is active.
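If you train with the SDK instead of the console, these choices map to parameters on the CreateModel API. The following is a minimal boto3 sketch of that mapping, not a definitive recipe: the model, dataset, role, and sensor names are placeholders, and the sampling rate and off-time expression shown are assumptions that you would adjust to your own data.

import datetime
import boto3

lookoutequipment = boto3.client("lookoutequipment", region_name="us-east-1")

response = lookoutequipment.create_model(
    ModelName="my-pump-model",                      # placeholder names
    DatasetName="my-pump-dataset",
    RoleArn="arn:aws:iam::111111111111:role/MyLookoutEquipmentRole",
    # Time ranges for the training and evaluation subsets (roughly 80%/20% here).
    TrainingDataStartTime=datetime.datetime(2020, 1, 1),
    TrainingDataEndTime=datetime.datetime(2020, 10, 31),
    EvaluationDataStartTime=datetime.datetime(2020, 11, 1),
    EvaluationDataEndTime=datetime.datetime(2020, 12, 31),
    # Sample rate: a lower rate uses less data and builds faster.
    DataPreProcessingConfiguration={"TargetSamplingRate": "PT5M"},
    # Off-time indicator: one sensor whose value shows the asset is turned off.
    # The expression format is illustrative; adjust it to your sensor schema.
    OffCondition="pump\\motor_rpm < 1",
)
print(response.get("ModelArn"))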
Now that you've configured your input data, the next step is to decide whether or not to use data labels. If you already know that you do not want to label your data, you can skip ahead to Starting the training process.

Labeling your data

You've made a decision about your training and evaluation settings. If you decided to use labeled data, now is the time to upload it.

Lookout for Equipment takes labeling information as two timestamps in a CSV file stored in an Amazon Simple Storage Service (Amazon S3) bucket. The first timestamp indicates when the abnormal behavior is expected to have started. The second timestamp is when the failure or abnormal behavior was first noticed. Alternatively, the second timestamp can indicate a maintenance event. Lookout for Equipment uses this window as the basis for looking for signs of an upcoming event, so it can better understand what those events look like on this machine.
Ideally, the timestamps correspond to data during a maintenance event. We recommend that you filter out data from any restart procedure. The following is an example of such a CSV file.

Row   Timestamp 1       Timestamp 2
1     1/1/2020 0:00     1/3/2020 0:00
2     2/2/2020 0:05     2/7/2020 0:05
3     4/11/2020 0:10    4/21/2020 0:10

Row 1 represents a maintenance event on January 3rd with a 2-day window for Lookout for Equipment to look for abnormal behavior. Row 2 represents a maintenance event on February 7th with a 5-day window for Lookout for Equipment to look for abnormal behavior. Row 3 represents a maintenance event on April 21st with a 10-day window for Lookout for Equipment to look for abnormal behavior. Lookout for Equipment uses all of these time windows to look for an optimal model that finds abnormal behavior within these windows. Note that not all events are detectable, and most are highly dependent on the data provided.

To label your data
1. Create your labeled data. Store the label data as a .csv file that consists of two columns. The file has no header. The first column has the start time of the abnormal behavior. The second column has the end time. The following example shows how your label data should appear as a .csv file.

2020-02-01T20:00:00.000000,2020-02-03T00:00:00.000000
2020-07-01T20:00:00.000000,2020-07-03T00:01:00.000000

2. Upload your data labels to Amazon S3. Here you'll follow the same procedure as in Uploading your data to Amazon S3. You can use the same Amazon S3 bucket or a different one. If you use the same one, it's a good practice to create a separate folder for your data labels.
3. In the Lookout for Equipment console, on the Provide data labels page, indicate the location of your data labels.
4. Choose your IAM role. This is the role that authorizes Lookout for Equipment to access the Amazon S3 bucket where your data labels are stored. If you're using the same bucket as before, you can choose the role that you already created. You can also select Create an IAM role, and the proper role will be created for you.
5. Choose Next.
6. Review your training settings and then train the model.
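If you are training with the SDK rather than the console, the same label file can be uploaded with the Amazon S3 API and referenced when you create the model. The following is a minimal sketch under stated assumptions: the bucket, prefix, file, role, and resource names are placeholders, the shape of LabelsInputConfiguration (an S3 bucket and prefix) is assumed from the inputs described above, and other CreateModel parameters are omitted for brevity.

import boto3

s3 = boto3.client("s3")
lookoutequipment = boto3.client("lookoutequipment", region_name="us-east-1")

# Upload the two-column label CSV to a dedicated folder (placeholder names).
s3.upload_file("labels.csv", "my-labels-bucket", "labels/pump/labels.csv")

# Point model training at the label location instead of the console's
# Provide data labels page.
response = lookoutequipment.create_model(
    ModelName="my-pump-model",
    DatasetName="my-pump-dataset",
    RoleArn="arn:aws:iam::111111111111:role/MyLookoutEquipmentRole",
    LabelsInputConfiguration={
        "S3InputConfiguration": {
            "Bucket": "my-labels-bucket",
            "Prefix": "labels/pump/",
        }
    },
)
print(response.get("ModelArn"))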
Starting the training process

The Review and train page gives you a chance to change some of your settings before you start training your model.
• To review model details such as the name, the encryption key, or the AWS tags, see Specifying model details.
• To review your input data configuration, which is where you (optionally) differentiated your training data from your evaluation data and set your sample rate and off-time parameters, see Configuring your input data.
• To review data labels, see Labeling your data.

When you're ready to train your model, choose Train model.

Evaluating your model

You can view the ML models you've trained on the datasets containing the data from your equipment. If you've used part of your dataset for training and the other part for evaluation, you can see and evaluate the model's performance. You can also see which sensors were used to create a model. If you need better performance, you can use different sensors for training your next model.

Amazon Lookout for Equipment provides an overview of the model's performance and detailed information about abnormal equipment behavior events. An abnormal equipment behavior event is a situation where the model detected an anomaly in the sensor data that could lead to your asset malfunctioning or failing. You can see how well the model performed in detecting those events. If you've provided Amazon Lookout for Equipment with label data for your dataset, you can see how the model's predictions compare to the label data. Lookout for Equipment also shows the average forewarning time across all true positives. Forewarning time is the average length of time between when the model first finds evidence that something might be going wrong and when it actually detects the equipment abnormality. For example, you can have a circumstance where Amazon Lookout for Equipment detects six of the seven abnormal behavior events in your labeled evaluation data. In six out of the seven events, on average, it might have provided an indication that something was off 32 hours before it detected an abnormality. For this situation, we would say that Lookout for Equipment averaged 32 hours of forewarning.
Amazon Lookout for Equipment also reports the results where it incorrectly identified an abnormal behavior event in the label data. The label data that you provide when you create a dataset has a time range for abnormal equipment events. You specify the duration of the abnormal events in the label data. In the evaluation data, the model used by Lookout for Equipment could incorrectly identify abnormal events outside of those labeled time ranges. You can see how often the model identifies these events when you evaluate the model's performance.

Pointwise model diagnostics for an Amazon Lookout for Equipment model provide an evaluation of the model's performance at the level of individual events. You can use the AWS SDK to get the pointwise model diagnostics for a model.

Topics
• Viewing the results for a model
• Getting pointwise model diagnostics for a model (SDK)

Viewing the results for a model

Note
You can get the evaluation results for a Lookout for Equipment model with the SDK.

You can use this procedure to view model metrics in the console. To evaluate how the model performed, you must provide data labels. If you provide data labels, you can see when the model detected abnormal equipment behavior events.

To view the results for a model
1. Sign in to the AWS Management Console and open the Amazon Lookout for Equipment console.
2. Choose a dataset.
3. Choose a model. You can see whether the model is ready to monitor the equipment.
4. Navigate to Training and evaluation. In the following image, you can see metrics related to the performance. You can see how many times the model identified abnormal equipment behavior events incorrectly. You can also see which sensors played the largest role in the model identifying the abnormal equipment behavior events. The console displays the top 15 sensors that contributed to the model identifying an abnormal equipment behavior event.

Getting pointwise model diagnostics for a model (SDK)

Pointwise model diagnostics for an Amazon Lookout for Equipment model are an evaluation of the model's performance at individual events. During training, Amazon Lookout for Equipment generates an anomaly score and sensor contribution diagnostics for each row in the input dataset. A higher anomaly score indicates a higher likelihood of an abnormal event. You get pointwise diagnostics when you train a model with CreateModel.

The period for which Lookout for Equipment generates model diagnostics depends on the following:
• If you specify the EvaluationDataStartTime and EvaluationDataEndTime request parameters, Lookout for Equipment generates model diagnostics for the period of time between EvaluationDataStartTime and EvaluationDataEndTime.
• If you supply TrainingDataStartTime and TrainingDataEndTime, but don't supply EvaluationDataStartTime and EvaluationDataEndTime, Lookout for Equipment generates model diagnostics for the period between TrainingDataStartTime and TrainingDataEndTime.
• If you don't specify an evaluation or training time range, Lookout for Equipment generates model diagnostics for the entire ingested data range in the input dataset.

If you want pointwise diagnostics for an existing model, use UpdateModel to provide the model diagnostics configuration. Lookout for Equipment then creates pointwise diagnostics for the entire retraining period.

For both CreateModel and UpdateModel operations, you need to specify the ModelDiagnosticsOutputConfiguration request parameter. The S3OutputConfiguration field specifies the Amazon S3 location where you want Lookout for Equipment to save the pointwise model diagnostics for the training period. If you don't specify ModelDiagnosticsOutputConfiguration, Lookout for Equipment doesn't create pointwise model diagnostics for the model. If you update ModelDiagnosticsOutputConfiguration with UpdateModel, Lookout for Equipment only generates pointwise model diagnostics for future model versions. You must specify an IAM role in the RoleArn request parameter with permission to access the Amazon S3 bucket that you reference in ModelDiagnosticsOutputConfiguration.
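As a hedged illustration of that configuration with boto3 (the bucket, prefix, role, and resource names are placeholders, and the other training parameters are omitted), the request might look like the following sketch.

import boto3

lookoutequipment = boto3.client("lookoutequipment", region_name="us-east-1")

# Ask Lookout for Equipment to write pointwise diagnostics to your own bucket.
# The role must be allowed to write to that bucket.
response = lookoutequipment.create_model(
    ModelName="my-pump-model",
    DatasetName="my-pump-dataset",
    RoleArn="arn:aws:iam::111111111111:role/MyLookoutEquipmentRole",
    ModelDiagnosticsOutputConfiguration={
        "S3OutputConfiguration": {
            "Bucket": "my-diagnostics-bucket",
            "Prefix": "pointwise/my-pump-model/",
        }
    },
)
print(response.get("ModelArn"))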
Amazon Lookout for Equipment creates pointwise model diagnostics for a model as a JSON format file. Lookout for Equipment stores the JSON file as a compressed file (model_diagnostics_results.json.gz) in the location you specify in ModelDiagnosticsOutputConfiguration. The following is example JSON for a model evaluation.

{"timestamp": "2021-03-11T22:24:00.000000", "prediction": 0, "prediction_reason": "MACHINE_OFF"}
{"timestamp": "2021-03-11T22:25:00.000000", "prediction": 1, "prediction_reason": "ANOMALY_DETECTED", "anomaly_score": 0.72385, "diagnostics": [{"name": "component_5feceb66\\sensor0", "value": 0.02346}, {"name": "component_5feceb66\\sensor1", "value": 0.10011}, {"name": "component_5feceb66\\sensor2", "value": 0.11162}, {"name": "component_5feceb66\\sensor3", "value": 0.14419}, {"name": "component_5feceb66\\sensor4", "value": 0.12219}, {"name": "component_5feceb66\\sensor5", "value": 0.14936}, {"name": "component_5feceb66\\sensor6", "value": 0.17829}, {"name": "component_5feceb66\\sensor7", "value": 0.00194}, {"name": "component_5feceb66\\sensor8", "value": 0.05446}, {"name": "component_5feceb66\\sensor9", "value": 0.11437}]}
{"timestamp": "2021-03-11T22:26:00.000000", "prediction": 0, "prediction_reason": "NO_ANOMALY_DETECTED", "anomaly_score": 0.41227, "diagnostics": [{"name": "component_5feceb66\\sensor0", "value": 0.03533}, {"name": "component_5feceb66\\sensor1", "value": 0.24063}, {"name": "component_5feceb66\\sensor2", "value": 0.06327}, {"name": "component_5feceb66\\sensor3", "value": 0.08303}, {"name": "component_5feceb66\\sensor4", "value": 0.18598}, {"name": "component_5feceb66\\sensor5", "value": 0.10839}, {"name": "component_5feceb66\\sensor6", "value": 0.08721}, {"name": "component_5feceb66\\sensor7", "value": 0.06792}, {"name": "component_5feceb66\\sensor8", "value": 0.1309}, {"name": "component_5feceb66\\sensor9", "value": 0.07735}]}
{"timestamp": "2021-03-11T22:27:00.000000", "prediction": 0, "prediction_reason": "NO_ANOMALY_DETECTED", "anomaly_score": 0.10541, "diagnostics": [{"name": "component_5feceb66\\sensor0", "value": 0.02533}, {"name": "component_5feceb66\\sensor1", "value": 0.34063}, {"name": "component_5feceb66\\sensor2", "value": 0.07327}, {"name": "component_5feceb66\\sensor3", "value": 0.03303}, {"name": "component_5feceb66\\sensor4", "value": 0.18598}, {"name": "component_5feceb66\\sensor5", "value": 0.10839}, {"name": "component_5feceb66\\sensor6", "value": 0.08721}, {"name": "component_5feceb66\\sensor7", "value": 0.06792}, {"name": "component_5feceb66\\sensor8", "value": 0.1309}, {"name": "component_5feceb66\\sensor9", "value": 0.07735}]}
{"timestamp": "2021-03-11T22:28:00.000000", "prediction": 0, "prediction_reason": "NO_ANOMALY_DETECTED", "anomaly_score": 0.24867, "diagnostics": [{"name": "component_5feceb66\\sensor0", "value": 0.04533}, {"name": "component_5feceb66\\sensor1", "value": 0.14063}, {"name": "component_5feceb66\\sensor2", "value": 0.08327}, {"name": "component_5feceb66\\sensor3", "value": 0.07303}, {"name": "component_5feceb66\\sensor4", "value": 0.18598}, {"name": "component_5feceb66\\sensor5", "value": 0.10839}, {"name": "component_5feceb66\\sensor6", "value": 0.08721}, {"name": "component_5feceb66\\sensor7", "value": 0.06792}, {"name": "component_5feceb66\\sensor8", "value": 0.1309}, {"name": "component_5feceb66\\sensor9", "value": 0.07735}]}
{"timestamp": "2021-03-11T22:29:00.000000", "prediction": 1, "prediction_reason": "ANOMALY_DETECTED", "anomaly_score": 0.79376, "diagnostics": [{"name": "component_5feceb66\\sensor0", "value": 0.04533}, {"name": "component_5feceb66\\sensor1", "value": 0.14063}, {"name": "component_5feceb66\\sensor2", "value": 0.08327}, {"name": "component_5feceb66\\sensor3", "value": 0.07303}, {"name": "component_5feceb66\\sensor4", "value": 0.18598}, {"name": "component_5feceb66\\sensor5", "value": 0.10839}, {"name": "component_5feceb66\\sensor6", "value": 0.08721}, {"name": "component_5feceb66\\sensor7", "value": 0.06792}, {"name": "component_5feceb66\\sensor8", "value": 0.1309}, {"name": "component_5feceb66\\sensor9", "value": 0.07735}]}

The JSON fields are as follows:
• timestamp – The date and time (in ISO 8601 format) that the event occurred.
• prediction – The prediction that the model made for the event: 0 for a normal event, 1 for an abnormal event.
• prediction_reason – The reason for the prediction. Valid values are ANOMALY_DETECTED, NO_ANOMALY_DETECTED, and MACHINE_OFF.
• anomaly_score – The anomaly score for the event. anomaly_score is a float value (0-1) where higher values indicate a higher likelihood that the event is abnormal.
• diagnostics – Diagnostics information for the event.

Note
The model evaluation JSON format is the same as the JSON file in which Lookout for Equipment returns inference results. For more information, see Reviewing inference results in a JSON file.

If you have previously created pointwise model diagnostics, you can get the Amazon S3 location of the model diagnostics files by calling the DescribeModel or DescribeModelVersion operations and checking the ModelDiagnosticsOutputConfiguration response field. If you have not previously created an evaluation for the model, the operations don't return the ModelDiagnosticsOutputConfiguration field.
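Because the file is gzip-compressed JSON lines, it can be read with standard Python tooling once you know where it was written. The following is a minimal sketch under stated assumptions: the bucket and object key are placeholders that you would take from the ModelDiagnosticsOutputConfiguration that you configured (or retrieved as described above).

import gzip
import json
import boto3

s3 = boto3.client("s3")

# Placeholder location of the compressed diagnostics file.
bucket = "my-diagnostics-bucket"
key = "pointwise/my-pump-model/model_diagnostics_results.json.gz"

obj = s3.get_object(Bucket=bucket, Key=key)
text = gzip.decompress(obj["Body"].read()).decode("utf-8")

# Each non-empty line is one timestamped prediction record.
for line in text.splitlines():
    if not line.strip():
        continue
    record = json.loads(line)
    if record["prediction"] == 1:
        # Report the sensor that contributed most to the anomaly.
        top = max(record.get("diagnostics", []), key=lambda d: d["value"], default=None)
        print(record["timestamp"], record.get("anomaly_score"), top)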
Versioning your model

Machines' (assets') operating modes and health change over the course of their lifetimes. This is often referred to as data drift. Machines also go through expected, and unexpected, maintenance operations. Models developed for these machines must therefore be updated periodically to reflect these changes.

In the past, each training resulted in a separate model. When multiple models were related to the same asset, the only way to indicate that association was with the naming of the models (for example, pumpA_model1, pumpA_model2, and so forth), and you had to manage that association on your own.

With model versioning, you can store different versions of a model under the same model name, and then decide which model version you want to maintain as your active version. After you set your active version, Lookout for Equipment utilizes that version when it runs inference on your asset's sensor data. Model versioning also helps maintain traceability and history of a given model, and its corresponding machine, over time.
Understanding model versioning

Currently, there are three ways to generate a model version:
• Training a model for the first time. In this case, as the parent model is created, so is a corresponding model version, which may be called Version 1.
• Importing a model from another account. In this case, if a model of the same name does not already exist in the target account, then the imported model becomes Version 1. If the imported model does already exist in the target account (and uses the same name), then the imported model gets the next available version number.
• Retraining a model. In this case, a new version is created. It has the same name as the parent model, but a version number incremented by 1. Note that the new version number will be 1 more than the most recent version number, regardless of which version is currently active.

The following APIs will help you work with, and understand, model versioning.
• ListModelVersions: Lists all model versions for a given model, including the model version, model version ARN, and status. This list appears in the data type ModelVersionSummary.
• DescribeModelVersion: This API gives you relevant information (such as the data start and end times and the creation time). If the model fails, then this API will indicate why it failed.

Understanding model status

During the importation of a model, it will be in the state IMPORT_IN_PROGRESS. After you import a model, it will be in one of three states:
• SUCCESS
• FAILED
• CANCELED

Activating your model

This section describes how to set one of your model versions as the active model. The active model is the one that the inference scheduler uses during inferencing. By default, in managed mode, the most recent version is active. However, if you are not satisfied with the most recent version, you can select a previous version.

Caveats to consider:
• If you try activating a model while inference is currently running, then that inference execution will continue to use the model that was active when inference began. Lookout for Equipment will pick up the newly activated model the next time you run inference.
• You cannot activate a model in the FAILED state.

The following API will help you in activating a model:
UpdateActiveModelVersion: This activates a particular model version. You can only activate a model version that is in the SUCCESS state.
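A minimal boto3 sketch of listing versions and activating one is shown below; the model name and version number are placeholders, and the fields printed here are only the ones described above.

import boto3

lookoutequipment = boto3.client("lookoutequipment", region_name="us-east-1")

# List the versions of a model and their status.
versions = lookoutequipment.list_model_versions(ModelName="my-pump-model")
for summary in versions.get("ModelVersionSummaries", []):
    print(summary.get("ModelVersion"), summary.get("Status"))

# Activate a specific version that finished in the SUCCESS state.
lookoutequipment.update_active_model_version(
    ModelName="my-pump-model",
    ModelVersion=2,
)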
Retraining your model

Understanding retraining

This section explains model retraining in the context of Lookout for Equipment. Because machines' operating modes and health change over time (leading to data drift), models developed for these machines should be updated periodically to reflect these changes. Retraining is the process of updating a machine learning model to take more recent information (that is, data and labels) about the machine into consideration. Retraining is the preferred method of addressing data drift.

When retraining a model, Lookout for Equipment does not require you to run a new ingestion job. This is an important benefit, because you may have many assets running in your factory, and setting up new ingestion jobs on thousands of machines could become an inconvenience.

Note
You may have been running inference on some models before AWS released the retraining feature for Lookout for Equipment. In that case, your inference data has not been collected and will not be available for retraining. To facilitate the retraining process, you should run a new ingestion job on those models.

By enabling retraining in Lookout for Equipment, you can schedule to have the system generate updated models on an ongoing basis without pausing your data-gathering process. Once a model is retrained, it creates a new model version. You may choose to manually control the activation of new models using retraining metrics, or you may choose to allow Lookout for Equipment to activate your new models immediately using managed mode, when appropriate.

Setting up your retraining scheduler

This section describes how to set up your retraining scheduler. The following APIs will help you manage your retraining scheduler; a minimal example follows the list.
• CreateRetrainingScheduler
• DescribeRetrainingScheduler
• ListRetrainingSchedulers
• StartRetrainingScheduler
• StopRetrainingScheduler
• DeleteRetrainingScheduler
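The following is a hedged boto3 sketch of creating a retraining scheduler, not a definitive configuration: the model name is a placeholder, the frequency and lookback durations are example ISO 8601 values that you would adjust, and PromoteMode (explained next) controls whether newly trained versions are activated automatically.

import boto3

lookoutequipment = boto3.client("lookoutequipment", region_name="us-east-1")

# Retrain roughly every 30 days, looking back over the most recent 360 days
# of dataset and accumulated inference data.
lookoutequipment.create_retraining_scheduler(
    ModelName="my-pump-model",
    RetrainingFrequency="P30D",
    LookbackWindow="P360D",
    PromoteMode="MANUAL",  # or "MANAGED" to let the service activate better versions
)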
When you set up a retraining schedule, there are two modes to be aware of for managing the selection of newly trained versions:
• In manual mode, the model is periodically retrained, but the new model versions are not activated until you indicate that it's time to activate them. This might be because you want to provide your own methodology, using metrics that describe the model, for determining if the newly trained version is better than the current version, or because you have a custom process for extra testing in a production environment that needs user sign-off.
• In managed mode, the model is periodically retrained, and then Lookout for Equipment automatically compares the metrics from the new version with the metrics from the version that is currently running. If Lookout for Equipment determines that the new version is more accurate than the current version, then Lookout for Equipment automatically activates the new version.

Both of these modes are set using the PromoteMode parameter in the CreateRetrainingScheduler API.

Understanding retraining data

This section explains how data is used for retraining, including the way that inference data is accumulated and stored. When the inference scheduler is running, Lookout for Equipment accumulates and manages the inference data that it successfully processes. This allows Lookout for Equipment to use inference data as an input during retraining without the user having to manage providing the service updated data. Lookout for Equipment encrypts the stored data using either a customer-owned AWS KMS key configured as the model's ServerSideKmsKeyId, or, if there is no customer-owned AWS KMS key provided, a Lookout for Equipment-owned AWS KMS key.

Sensor data used for retraining comes from both a) the dataset associated with the model being retrained and b) the accumulated inference data for that model. Lookout for Equipment only uses the data from those two sources that falls within the LookbackWindow of the model's retraining scheduler. If, within that window, there is an overlap between the dataset and the accumulated inference data, the dataset takes priority. During the retraining process, Lookout for Equipment also fetches labels from the location configured in the model's LabelsInputConfiguration.

Understanding retraining metrics

This section describes retraining metrics in the context of Lookout for Equipment. If you are retraining in manual mode, then you may use these metrics to help you decide whether to activate a new model version. The following table lists the model promotion criteria.

New data has labels?   Old data has labels?   Model promotion criterion                                                  Metrics shown?
Yes                    Yes                    Select best model based on comparison metrics                             Yes for both models
Yes                    No                     Select new model, if the new model meets the required quality threshold   Yes for new model only
No                     Yes                    Select old model                                                           No for both models
No                     No                     Select new model                                                           No for both models

Model metrics

The following model metrics are exposed in the DescribeModelVersion response. If a retrained model is the current active model version, then the same information is also returned in the DescribeModel response.
• Recall: The proportion of events that Lookout for Equipment correctly identified to the events that you labeled during the same period. For example, you may have labeled 10 events, but Lookout for Equipment only identified 9 of them. In this case, the recall is 90%.
• Precision: The proportion of true positives to total identified events. For example, if Lookout for Equipment identifies 10 events, but only 7 of those events correspond to events you labeled, then the precision is 70%.
• MeanFractionalLeadTime: A measurement of how quickly (relative to the length of the event), on average, Lookout for Equipment detects each event. For example, a typical event at your facility may last 10 hours. On average, it may take the model 3 hours to identify the event, which leaves 7 hours of the 10-hour event as lead time. In this case, the mean fractional lead time is 0.7.
• AUC: Area Under the ROC Curve (AUC) measures the ability of a machine learning model to predict a higher score for positive examples as compared to negative examples. It is a value between 0 and 1 that indicates how well your model is able to separate the categories in your dataset. A value of 1 indicates that it was able to separate the categories perfectly. For more information, see "A Visual Explanation of Receiver Operating Characteristic Curves and Area Under the Curve" at the MLU Explain website.

Model quality

If new data has labels, Lookout for Equipment uses the metrics to perform a quality assessment of the model. To get the quality assessment, check the ModelQuality field in the response from DescribeModel, DescribeModelVersion, ListModels, ListModelVersions, or CreateInferenceScheduler.
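For example, a minimal boto3 check might look like the following sketch (the model name and version number are placeholders; the possible values of the field are described next).

import boto3

lookoutequipment = boto3.client("lookoutequipment", region_name="us-east-1")

# Quality assessment of the currently active model.
model = lookoutequipment.describe_model(ModelName="my-pump-model")
print("Active model quality:", model.get("ModelQuality"))

# Quality assessment of a specific retrained version.
version = lookoutequipment.describe_model_version(
    ModelName="my-pump-model",
    ModelVersion=2,
)
print("Version 2 quality:", version.get("ModelQuality"))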
If Lookout for Equipment determines that the model quality is poor based on training metrics, the value is POOR_QUALITY_DETECTED. Otherwise, the value is QUALITY_THRESHOLD_MET. If the model is unlabeled, the model quality can't be assessed and the value of ModelQuality is CANNOT_DETERMINE_QUALITY. In this situation, you can get a model quality assessment by adding labels to the input dataset and retraining the model.

If the previous model was labeled, Lookout for Equipment compares the metrics of each model on the new data to determine if the new model should be promoted. The quality assessment for the new model does not affect this comparison. If the previous model was unlabeled, Lookout for Equipment promotes the new model if the quality threshold is met.

For information about using labels with your models, see Understanding labeling. For information about improving the quality of a model, see Best practices with Amazon Lookout for Equipment.

Importing your resources

You can copy existing Amazon Lookout for Equipment resources from one AWS account to another by using the import API operations. Additionally, we provide scripts that you can use to bulk import resources (datasets and models) from one AWS account to another.

Topics
• Importing a model
• Bulk importing resources

Importing a model

Topics
• Importing a model
• APIs related to importing
• Importing a dataset
• Controlling access to your model
• Comparing access to model versions with access to parent models
• Importing a model version with accumulated inference data

Importing a model

This section describes how to copy existing Lookout for Equipment resources from one user account to another. For instance, as a user, you might want to do this if you maintain different accounts for Development, QA, and Production pipelines to restrict user access at the various stages. Or, as an integrator, you might want to develop models in your user account and then provide them to your end users in their own AWS accounts. Importing is the mechanism that allows you to move Lookout for Equipment resources across these account boundaries.

In this guide, the term resources indicates the machine learning models that Lookout for Equipment generates, as well as the user datasets that you provide to train those models. The following resources can be associated with a model version:
• the model version metadata
• the inference scheduler
• the training dataset
• the accumulated inference data
• the model performance metrics
• the retraining scheduler

The import resources APIs allow users to import the model version metadata, training datasets, accumulated inference data, and model metrics (if available).
However, the inference scheduler and retraining schedulers are not copied over, and must be re-created in the target account.

In the context of performing an import, there is a source account and a target account. The API must be called from the target account, and it references information about the resources in the source account that you want to import. In order for a target to be able to import resources from a source account, the source account must grant the appropriate permissions to the target account. See Controlling access to your model.

APIs related to importing

The following APIs will help you to import a model:
• ImportDataset: Imports the data that was used to train the original model.
• ImportModelVersion: Imports a model from another account. Use the attribute SourceModelVersionArn to indicate the version of the model that you want to import.

Note
If you plan to import both a model and the dataset that was used to create it, then you should first call ImportDataset, and then ImportModelVersion.

Whether or not you call both of these APIs depends on your use case. You may choose to import a model, but not the dataset that was used to create it. In that case, you would only call ImportModelVersion. You might do this because you already have a version of the same model in your account, and you are importing an improved version of the same model.
Importing a dataset

This section explains how to import your dataset using the Lookout for Equipment APIs. For the purposes of this example, let us suppose that target account 2222222222 wants to import a dataset from source account 111111111111.

Note
If the source account and the target account are the same, then you can skip the first two steps of this procedure.

1. The source account gives the target account permission to import the dataset testDataset with the following policy, using the PutResourcePolicy API.

{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::2222222222:role/Admin"},
        "Action": [
            "lookoutequipment:ImportDataset"
        ],
        "Resource": "arn:aws:lookoutequipment:us-west-2:111111111111:dataset/testDataset/00af0697-095b-433a-889c-9f4eed39db8b"
    }
}

2. Users of the source account may have used an AWS Key Management Service (AWS KMS) key to encrypt the original ingestion data. If that is the case, then the source account must give the target account permission to encrypt and decrypt using the AWS KMS key. For more information, see Authentication and access control for AWS Key Management Service in the AWS Key Management Service Developer Guide.
3. The target account calls the ImportDataset API, supplying the dataset ARN (arn:aws:lookoutequipment:us-west-2:111111111111:dataset/testDataset/00af0697-095b-433a-889c-9f4eed39db8b). This action triggers the importation of the dataset.

Note
Labels associated with the source model will not be copied. Therefore, if labels are needed, the target account must explicitly provide them through the LabelsInputConfiguration parameter of the ImportModelVersion API.
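A hedged boto3 sketch of the two halves of this procedure follows. The policy JSON mirrors the example above; the account IDs, ARNs, and names are the same placeholders, and the exact request parameter names (ResourceArn and ResourcePolicy on PutResourcePolicy, SourceDatasetArn and DatasetName on ImportDataset) are assumptions based on the API descriptions in this section.

import json
import boto3

dataset_arn = (
    "arn:aws:lookoutequipment:us-west-2:111111111111:dataset/"
    "testDataset/00af0697-095b-433a-889c-9f4eed39db8b"
)

# Step 1 - run in the source account (111111111111): attach the resource policy.
source_client = boto3.client("lookoutequipment", region_name="us-west-2")
policy = {
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::2222222222:role/Admin"},
        "Action": ["lookoutequipment:ImportDataset"],
        "Resource": dataset_arn,
    },
}
source_client.put_resource_policy(
    ResourceArn=dataset_arn,
    ResourcePolicy=json.dumps(policy),
)

# Step 3 - run in the target account (2222222222): import the dataset by its ARN.
target_client = boto3.client("lookoutequipment", region_name="us-west-2")
target_client.import_dataset(
    SourceDatasetArn=dataset_arn,
    DatasetName="testDataset",
)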
Controlling access to your model

This section explains how a customer controls access to a model. In order for a target to import resources from a source account, the source account must give permissions to the target account. These permissions are granted by applying resource policies to either the model, the model version, or the dataset resources. Only the source account can apply, view, or delete resource policies.

The following APIs will help you in controlling access to your model:
• PutResourcePolicy
• DescribeResourcePolicy
• DeleteResourcePolicy

Here is an example resource policy for setting the import permissions for a dataset:

{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::2222222222:role/Admin"},
        "Action": [
            "lookoutequipment:ImportDataset"
        ],
        "Resource": "arn:aws:lookoutequipment:us-west-2:111111111111:dataset/testDataset/00af0697-095b-433a-889c-9f4eed39db8b"
    }
}

This is an example policy for setting permissions for importing a specific model version:

{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::2222222222:role/Admin"},
        "Action": [
            "lookoutequipment:ImportModelVersion"
        ],
        "Resource": "arn:aws:lookoutequipment:us-west-2:111111111111:model/testModel/00af0697-095b-433a-889c-9f4eed39dbbc/model-version/1"
    }
}

This is an example policy to set the permissions to import all model versions (setting the permissions on a parent model):

{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::2222222222:role/Admin"},
        "Action": [
            "lookoutequipment:ImportModelVersion"
        ],
        "Resource": "arn:aws:lookoutequipment:us-west-2:111111111111:model/testModel/00af0697-095b-433a-889c-9f4eed39dbbc"
    }
}

When you import a model version, you can also import the accumulated inference data along with it. For information about that option, see Importing a model version with accumulated inference data.

Note
The policies above only support ImportDataset and ImportModelVersion. They cannot be used to give cross-account permissions to any other APIs associated with Lookout for Equipment.

What follows are explanations of several elements contained in the policies above.
• Effect: The effect can be Allow or Deny. By default, IAM users don't have permission to use resources and API actions, so all requests are denied. An explicit Allow overrides the default. An explicit Deny overrides any Allows.
• Action: The action is the specific Lookout for Equipment action for which you are granting or denying permission.
• Resource: The resource that's affected by the action.
• Condition: Conditions are optional. They can be used to control when your policy is in effect.
You may use the Lookout for Equipment ResourcePolicy APIs to control access to models, model versions, and datasets. For more information, see the API references for PutResourcePolicy and DeleteResourcePolicy. Lookout for Equipment access control policies follow the same format as IAM policies. However, Lookout for Equipment policies will not appear in the IAM console, nor in the context of using IAM APIs. For more information, see Policies and permissions in IAM in the IAM User Guide.

Comparing access to model versions with access to parent models

When you give another account access to a model, you are giving that account access to all versions of that model. When two policies exist, one for the model and one for a version of that model, the more restrictive of the two policies applies. If an account attempts to access a particular model or version, and no IAM policy exists for either the model itself or any version of that model, then access is not allowed.

For example, suppose you have a model called Pump_1. This model will serve as the parent model. This model has two versions:
• Pump_1 version 1
• Pump_1 version 2

Now suppose that we set a policy only at the level of the parent model (Pump_1).

{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::2222222222:role/Admin"},
        "Action": [
            "lookoutequipment:ImportModelVersion"
        ],
        "Resource": "arn:aws:lookoutequipment:us-west-2:111111111111:model/Pump_1/00af0697-095b-433a-889c-9f4eed39dbbc"
    }
}

This policy indicates that all versions under model Pump_1 can be imported. No policies are specified at the level of the model version. Therefore, Lookout for Equipment will look at the permissions on the parent model level and apply them to all the versions.

Now, let us suppose that you also set a policy at the model version level. In this case, the model version will be Pump_1 Version 2.

{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Deny",
        "Principal": {"AWS": "arn:aws:iam::2222222222:role/Admin"},
        "Action": [
            "lookoutequipment:ImportModelVersion"
        ],
        "Resource": "arn:aws:lookoutequipment:us-west-2:111111111111:model/Pump_1/00af0697-095b-433a-889c-9f4eed39dbbc/model-version/2"
    }
}

With both policies in place, Version 1 can be imported, but Version 2 cannot be imported. Lookout for Equipment looks at the permission at the model level and sees that it is set to Allow. Then, Lookout for Equipment will examine the permission for Version 2, and find that it is set to Deny. Lookout for Equipment will then apply the more restrictive of the two permissions. Thus, Version 2 cannot be imported. Finally, since there is no explicit permission on Version 1, Lookout for Equipment continues to apply the permission from the parent model (Allow). Therefore, Version 1 can be imported.

The table below summarizes the relationship between parent model permissions and model version permissions for the cases described above.

Parent model policy   Model version policy   Result for that version
Allow                 (none)                 Import allowed (parent permission applies)
Allow                 Deny                   Import denied (more restrictive policy applies)
(none)                (none)                 Import denied (no policy grants access)

Importing a model version with accumulated inference data

When you're importing a model version, you may want to also import the accumulated inference data along with it.
For example, if the retraining scheduler had the lookback window set to P360D, then the retraining execution would use data from up to 360 days before the current day of the retraining execution. If the inference data imported from the source account falls in that time period, then it would be used to retrain the model.

You can set three options with InferenceDataImportStrategy while calling the ImportModelVersion API:
• NO_IMPORT: No inference data will be imported.
• ADD_WHEN_EMPTY: The inference data will be imported only if the target model version has no inference data associated with it.
• OVERWRITE: Even if the target model version has some inference data associated with it, the inference data from the source account will overwrite it.

If nothing is set as input for InferenceDataImportStrategy, then the default setting is NO_IMPORT.
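A hedged boto3 sketch of choosing the strategy when importing a model version follows; the ARN and names are placeholders, SourceModelVersionArn and InferenceDataImportStrategy come from the descriptions above, and the remaining parameter names are assumptions about the request shape.

import boto3

# Run in the target account after the source account has granted access.
lookoutequipment = boto3.client("lookoutequipment", region_name="us-west-2")

lookoutequipment.import_model_version(
    SourceModelVersionArn=(
        "arn:aws:lookoutequipment:us-west-2:111111111111:model/"
        "testModel/00af0697-095b-433a-889c-9f4eed39dbbc/model-version/1"
    ),
    DatasetName="testDataset",   # dataset in the target account
    ModelName="testModel",       # name to use in the target account
    # Default is NO_IMPORT; use ADD_WHEN_EMPTY or OVERWRITE to bring the data over.
    InferenceDataImportStrategy="ADD_WHEN_EMPTY",
)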
Before you can import a model version with the accumulated inference data, you must verify that the resource policy allows the importing of data related to the model version.

If you want to allow only ImportModelVersion requests that do not import the inference data (that is, requests where InferenceDataImportStrategy is set to NO_IMPORT), then you should set the condition key lookoutequipment:IsImportingData to false on the resource policy of a model or model version that allows the ImportModelVersion action.

If you want to allow ImportModelVersion requests with any InferenceDataImportStrategy, you don't need to additionally set lookoutequipment:IsImportingData on a resource policy of a model or model version that allows the ImportModelVersion action, because that is the default behavior when lookoutequipment:IsImportingData is not set.

It is unusual to only allow ImportModelVersion requests that import the inference data (that is, requests where InferenceDataImportStrategy is set to ADD_WHEN_EMPTY or OVERWRITE), but if you have such a use case, you can explicitly set lookoutequipment:IsImportingData to true to achieve this permission control.

This is an example policy that will prevent the inference data from being imported:

{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::2222222222:role/Admin"},
        "Action": [
            "lookoutequipment:ImportModelVersion"
        ],
        "Resource": "arn:aws:lookoutequipment:us-west-2:111111111111:model/testModel/00af0697-095b-433a-889c-9f4eed39dbbc",
        "Condition": {
            "Bool": {
                "lookoutequipment:IsImportingData": "false"
            }
        }
    }
}

Bulk importing resources

You can import Amazon Lookout for Equipment resources (datasets and models) from a source AWS account to a target AWS account by using the ImportDataset (datasets) or ImportModelVersion (models) operations. If you need to import multiple resources, we recommend that you use the following scripts to bulk import resources.
• Resource CSV file script — Scans the source AWS account to get a list of all datasets and their respective active model versions. It then writes the list to an editable CSV file. You run the script in the source AWS account.
• Resource configuration script — Reads the CSV file generated by the Resource CSV file script and configures the resource policy for the target AWS account. The resource policy grants the target AWS account permissions to import resources from the CSV file. You run this script in the source AWS account.
• Bulk import script — Reads the CSV file that the Resource CSV file script generates, calls ImportDataset on all datasets, and calls ImportModelVersion on the respective model versions. You run this script in the target AWS account, after first running the Resource configuration script in the source AWS account.

Topics
• Running the bulk import scripts
• Resource CSV file script
• Resource configuration script
• Bulk import script

Running the bulk import scripts

Although you can run the scripts in any environment that supports Python and boto3, we recommend that you run the scripts in an Amazon SageMaker AI notebook instance in JupyterLab. For more information, see https://jupyter.org/.
Topics
• Creating the Amazon SageMaker AI notebook instances
• Getting the resources from the source AWS account
• Importing the resources to the target AWS account

Creating the Amazon SageMaker AI notebook instances

Use the following procedure to create Amazon SageMaker AI notebook instances in the source AWS account and the target AWS account.

To create the Amazon SageMaker AI notebook instances
1. In the AWS account that you want to import resources from (source AWS account), open the Amazon SageMaker AI console and create a notebook instance. For more information, see JupyterLab versioning. Enter a name for the new notebook and use the default configurations.
2. Make sure that the IAM role that you use has the following managed policy permissions:
• AmazonSageMakerFullAccess
• AmazonLookoutEquipmentFullAccess
Alternatively, grant permissions to call the following Lookout for Equipment operations: ListModels, DescribeModelVersion, PutResourcePolicy, ImportModelVersion, ImportDataset.
3. In the target AWS account that you want to bulk import resources into, repeat steps 1 and 2.

Getting the resources from the source AWS account

Use the following procedures to get an editable CSV file of resources that you can import from a source AWS account and configure them for import into a target AWS account.

To get the resources from the source AWS account
1. In the source AWS account, open JupyterLab in the Amazon SageMaker AI notebook instance that you created in step 1 of Creating the Amazon SageMaker AI notebook instances.
2. Copy each of the following scripts into separate cells within the notebook.
• Resource CSV file script
• Resource configuration script
3. Run the Resource CSV file script. The script prompts for the following:
• The AWS Region in which you want to run the script.
• The ID of the target AWS account to which you want to import the resources.
The script generates a CSV file (import_input_file_{current_time}.csv) that you use in the next step. If necessary, you can make changes to the CSV before continuing. For more information, see Resource CSV file script.
4. Run the Resource configuration script. The script prompts for the following information:
• The AWS Region in which you want to run the script.
• Permission to update the existing policy, if a policy already exists for the source resource Amazon Resource Name (ARN).
• The name and path of the CSV file (import_input_file_{current_time}.csv) that you created in step 3.
For more information, see Resource configuration script.

Importing the resources to the target AWS account

Use the following procedure to import the resources to the target AWS account.

To import the resources into the target AWS account
1. In the target AWS account, open JupyterLab in the Amazon SageMaker AI notebook instance that you created in step 3 of Creating the Amazon SageMaker AI notebook instances.
2. Copy the Bulk import script into a notebook cell.
3. Copy the file import_input_file_{current_time}.csv from the source AWS account to the target AWS account, placing it in the same JupyterLab location as the script.
4. Run the Bulk import script. The script prompts for the following:
• The AWS Region in which you want to run the script.
• The name and path of the CSV file (import_input_file_{current_time}.csv) that you copied in step 3.
5. After the script finishes, check the import results in the CSV file (import_result_file_{current_time}.csv) that the script creates.
For more information, see Bulk import script.

Resource CSV file script

The script scans the source AWS account to get a list of active datasets and their respective active model versions. The script writes the list to a CSV file named import_input_file_{current_time}.csv. You use the CSV file as input to the next script (Resource configuration script).

The script populates the required fields and populates optional fields with None. If desired, you can supply your own values. Make sure that each dataset is matched with its corresponding model version. You must not delete optional columns from the CSV file.
• Current_model_name — (Required) The current name of the model in the source AWS account.
• New_model_name — (Required) A name for the model in the target AWS account. By default, the model name is the current model name. You can rename the model, if desired.
• Current_dataset_name — (Required) The current name of the active dataset in the source AWS account. This is the dataset name related to the model populated in the Current_model_name field.
• New_dataset_name — (Required) The name for the imported dataset in the target AWS account. By default, the dataset name is the value in Current_dataset_name. You can rename the dataset, if desired. If you only want to import the model and not import the dataset, use the existing active dataset name that's in the target AWS account.
Additionally, change the value of Source_dataset_arn to None. • Version(s) — (Required) The total number of versions that the model has. • Version_to_import — (Required) The model version that will be imported. By default the script populates Version_to_import with the active model version. You can specify a different model version, if desired. • Import?(Yes/No) — (Required) Specifies whether the script will import the dataset and model. By default the value is Yes. If you don't want to import the dataset and model, change the value to No. • Target_account_id — (Required) The ID of the target AWS account to which the script will import the resources. You enter this value when you run the script, but you can change the
value as desired. • Source_dataset_arn — (Required) The ARN of the dataset that will be imported. At the target AWS account in case If you don’t want to the import dataset and just want to perform import model, do the following: • Change the value of Source_dataset_arn to None. Resource CSV file script 60 Amazon Lookout for Equipment User Guide • Change the value of New_dataset_name to the existing active dataset name, in the target AWS account. • Source_model_arn — (Required) The ARN of the source model that the script will import. • Label_s3_bucket — The name of the Amazon S3 bucket in the target AWS account where the label file exists. By default the script populates this value as None. We recommend that you leave this value unchanged, unless you want to use a different Amazon S3 bucket. • Label_s3_prefix — The Amazon S3 bucket prefix path in the target AWS account where the label exists. By default the script populates this value as None. We recommend that you leave this value unchanged, unless you want to use a different Amazon S3 prefix. • Role_arn — The ARN of the role that grants permission to read the label file at the target AWS account. By default the script populates this value as None. We recommend that you leave this value unchanged, unless you want to use a different role ARN. • kms_key_id — The ID of the server-side AWS Key Management Service key. By default, the script populates this value as None. We recommend that you leave this value unchanged, unless you want to use a different server-side AWS KMS key ID. Script import boto3 import os import csv import time import json from botocore.config import Config from datetime import datetime import sys import datetime # By default these optional parameters are populated as None label_s3_bucket = "None" label_s3_prefix = "None" kms_key_id = "None" role_arn = "None" def getTotalNumberOfModelVersions(model_name): total_length = 0 Resource CSV file script 61 User Guide Amazon Lookout for Equipment try: response = lookoutequipment_client.list_model_versions( ModelName=model_name) total_length = len(response.get('ModelVersionSummaries')) next_token = response.get("NextToken") while next_token is not None: response = lookoutequipment_client.list_model_versions( ModelName=model_name, NextToken=next_token) next_token += len(response.get('ModelVersionSummaries')) return total_length except Exception as e: print("Exception thrown while listing models for model name:", model_name) config = Config(connect_timeout=30, read_timeout=30, retries={'max_attempts': 3}) region_name = input( "Please enter the region to run the script('us-east-1', 'ap-northeast-2', 'eu- west-1'): ") lookoutequipment_client = boto3.client( service_name='lookoutequipment', region_name=region_name, config=config, endpoint_url='https://lookoutequipment.{region_name}.amazonaws.com'.format( region_name=region_name), ) response = lookoutequipment_client.list_models() target_account = None current_time = datetime.datetime.now() formatted_time = current_time.strftime("%Y-%m-%d_%H-%M-%S") file_name = f"import_input_file_{formatted_time}.csv" target_account = input("Please enter the target account id: ") if len(target_account) != 12: print("Target account id is not valid hence terminating the script execution..") sys.exit() with open(file_name, "a") as f: f.write("Current_model_name,New_model_name,Current_dataset_name,New_dataset_name,Version(s),Version_to_import,Import? 
(yes/ no),Target_account_id,Source_dataset_arn,Source_model_arn,Label_s3_bucket,Label_s3_prefix,Role_arn,kms_key_id" + '\n') Resource CSV file script 62 Amazon Lookout for Equipment User Guide for model in response.get('ModelSummaries'): with open(file_name, "a") as f: f.write(model.get('ModelName') + "," + model.get('ModelName') + "," + model.get('DatasetName') + "," + model.get('DatasetName') + "," + str(getTotalNumberOfModelVersions(model.get('ModelName'))) + "," + str(model.get( 'ActiveModelVersion')) + "," + "yes" + "," + target_account + "," + model.get('DatasetArn') + "," + model.get('ModelArn') + "," + label_s3_bucket + "," + label_s3_prefix + "," + role_arn + "," + kms_key_id + '\n') next_token = response.get("NextToken") while next_token is not None: response = lookoutequipment_client.list_models(NextToken=next_token) for model in response.get('ModelSummaries'): with open(file_name, "a") as f: f.write(model.get('ModelName') + "," + model.get('ModelName') + "," + model.get('DatasetName') + "," + model.get('DatasetName') + "," + str(getTotalNumberOfModelVersions(model.get('ModelName'))) + "," + str(model.get( 'ActiveModelVersion')) + "," + "yes" + "," + target_account + "," + model.get('DatasetArn') + "," + model.get('ModelArn') + "," + label_s3_bucket + "," + label_s3_prefix + "," + role_arn + "," + kms_key_id + '\n') next_token = response.get("NextToken") print("All the active models have been scanned and written to a file:", file_name) Resource configuration script This script configures the resource policies to let the target AWS account bulk import the resources. By using the CSV file (import_input_file_{current_time}.csv ) that the Resource CSV file script creates, the script configures the resource policy for each dataset and model version ARN. The script updates existing resource policies for source datasets and model version ARNs to grant permissions to the target AWS account, along with any existing conditions. After running this script, you can bulk import resources to the target AWS account by running the Bulk import script. Script import boto3 import os import csv import time import json from botocore.config import Config Resource configuration script 63 Amazon Lookout for Equipment User Guide from datetime import datetime import sys import datetime # By default these optional parameters are populated as None label_s3_bucket = "None" label_s3_prefix = "None" kms_key_id = "None" role_arn = "None" def getTotalNumberOfModelVersions(model_name): total_length = 0 try: response = lookoutequipment_client.list_model_versions( ModelName=model_name) total_length = len(response.get('ModelVersionSummaries')) next_token = response.get("NextToken") while next_token is not None: response = lookoutequipment_client.list_model_versions( ModelName=model_name, NextToken=next_token) next_token += len(response.get('ModelVersionSummaries')) return total_length except
import resources to the target AWS account by running the Bulk import script. Script import boto3 import os import csv import time import json from botocore.config import Config Resource configuration script 63 Amazon Lookout for Equipment User Guide from datetime import datetime import sys import datetime # By default these optional parameters are populated as None label_s3_bucket = "None" label_s3_prefix = "None" kms_key_id = "None" role_arn = "None" def getTotalNumberOfModelVersions(model_name): total_length = 0 try: response = lookoutequipment_client.list_model_versions( ModelName=model_name) total_length = len(response.get('ModelVersionSummaries')) next_token = response.get("NextToken") while next_token is not None: response = lookoutequipment_client.list_model_versions( ModelName=model_name, NextToken=next_token) next_token += len(response.get('ModelVersionSummaries')) return total_length except Exception as e: print("Exception thrown while listing models for model name:", model_name) config = Config(connect_timeout=30, read_timeout=30, retries={'max_attempts': 3}) region_name = input( "Please enter the region to run the script('us-east-1', 'ap-northeast-2', 'eu- west-1'): ") lookoutequipment_client = boto3.client( service_name='lookoutequipment', region_name=region_name, config=config, endpoint_url='https://lookoutequipment.{region_name}.amazonaws.com'.format( region_name=region_name), ) response = lookoutequipment_client.list_models() Resource configuration script 64 Amazon Lookout for Equipment target_account = None current_time = datetime.datetime.now() formatted_time = current_time.strftime("%Y-%m-%d_%H-%M-%S") file_name = f"import_input_file_{formatted_time}.csv" target_account = input("Please enter the target account id: ") if len(target_account) != 12: User Guide print("Target account id is not valid hence terminating the script execution..") sys.exit() with open(file_name, "a") as f: f.write("Current_model_name,New_model_name,Current_dataset_name,New_dataset_name,Version(s),Version_to_import,Import? (yes/ no),Target_account_id,Source_dataset_arn,Source_model_arn,Label_s3_bucket,Label_s3_prefix,Role_arn,kms_key_id" + '\n') for model in response.get('ModelSummaries'): with open(file_name, "a") as f: f.write(model.get('ModelName') + "," + model.get('ModelName') + "," + model.get('DatasetName') + "," + model.get('DatasetName') + "," + str(getTotalNumberOfModelVersions(model.get('ModelName'))) + "," + str(model.get( 'ActiveModelVersion')) + "," + "yes" + "," + target_account + "," + model.get('DatasetArn') + "," + model.get('ModelArn') + "," + label_s3_bucket + "," + label_s3_prefix + "," + role_arn + "," + kms_key_id + '\n') next_token = response.get("NextToken") while next_token is not None: response = lookoutequipment_client.list_models(NextToken=next_token) for model in response.get('ModelSummaries'): with open(file_name, "a") as f: f.write(model.get('ModelName') + "," + model.get('ModelName') + "," + model.get('DatasetName') + "," + model.get('DatasetName') + "," + str(getTotalNumberOfModelVersions(model.get('ModelName'))) + "," + str(model.get( 'ActiveModelVersion')) + "," + "yes" + "," + target_account + "," + model.get('DatasetArn') + "," + model.get('ModelArn') + "," + label_s3_bucket + "," + label_s3_prefix + "," + role_arn + "," + kms_key_id + '\n') next_token = response.get("NextToken") print("All the active models have been scanned and written to a file:", file_name) Bulk import script This script scans the CSV file that the Resource CSV file script creates. 
For each row the script calls ImportDataset on the source dataset ARN. After the dataset import successfully finishes, the script then calls ImportModelVersion on the dataset’s respective model version. If desired, you can call ImportModelVersion on an existing active dataset by populating the existing dataset Bulk import script 65 Amazon Lookout for Equipment User Guide name in the columns Current_dataset_name and New_dataset_name. You must also set the Source_dataset_arn value to None. The script outputs an import results CSV file (import_result_file_{current_time}.csv) that lists the following: • Source_resource_arn — The ARN of the source dataset or source model. • Is_import_successful? — Yes, if the resource import was successful. Otherwise, No. • type — The type of the dataset (dataset or model_version). • Source_resource_name — The name of the source resource. • New resource_name — The new name for the resource in the target AWS account. • Version_to_import — The model version in the source AWS account that was identified for import. • Failed_reason — If the value of Is_import_successful is No, provides a reason for the failure. Script import boto3 import os import csv import time import string import random import json from botocore.config import Config from datetime import datetime import sys import datetime def activate_model_version(model_name, version, model_version_arn): try: response = lookoutequipment_client.update_active_model_version( ModelName=model_name, ModelVersion=version) print("Activated the model version: {} for the copied model:{}:".format( version, model_name)) except Exception as e: print("Error while activating the model version:", e) Bulk import script 66 Amazon Lookout for Equipment User Guide with open(final_result_file, "a") as f: f.write(f"{model_version_arn},No,{e}\n") config = Config(connect_timeout=30, read_timeout=30, retries={'max_attempts': 3}) region_name = input( "Please enter the region to run the script('us-east-1', 'ap-northeast-2', 'eu- west-1'): ") lookoutequipment_client = boto3.client( service_name='lookoutequipment', region_name=region_name, config=config, endpoint_url=f'https://lookoutequipment.{region_name}.amazonaws.com' ) labels_configuration = { 'S3InputConfiguration': { 'Bucket': 's3-amzn-demo-bucket', 'Prefix': 'path/to/label_files/' } } source_input_file = input( "Please enter the source file name to start the import: ") current_time = datetime.datetime.now() formatted_time = current_time.strftime("%Y-%m-%d_%H-%M-%S") final_result_file = f"import_result_file_{formatted_time}.csv" with open(final_result_file, "a") as f: f.write("Source_resource_arn,Is_import_successful?,Type,Source_resource_name,New_resource_name,Version_to_import,Failed_reason" + '\n') with open(source_input_file) as csvfile: csvReader = csv.reader(csvfile, delimiter=',') for row in csvReader: client_token = ''.join(random.choices( string.ascii_lowercase + string.digits, k=10)) if len(row) < 14 or len(row) > 14: print( "Skipping this Row as it doesn't match the format: Current_model_name,New_model_name,Current_dataset_Name,New_dataset_name,Version(s),Version_to_import,Import? 
Bulk import script 67 Amazon Lookout for Equipment (yes/ User Guide no),Target_account_id,Source_dataset_arn,Source_model_arn,Label_s3_bucket,Label_s3_prefix,Role_arn,kms_key_id") continue if row[6].lower() == "no": print(f"skipping import for model {row[9]}") with open(final_result_file, "a") as f: f.write( f"{row[9]},No,skipped import as the input file says 'no' for import \n") continue if row[0] == "Current_model_name" and row[1] == "New_model_name": continue is_dataset_import_success = True # import dataset logic if 'dataset' in row[8]: is_dataset_import_success = False print("Triggering import for
csvReader = csv.reader(csvfile, delimiter=',') for row in csvReader: client_token = ''.join(random.choices( string.ascii_lowercase + string.digits, k=10)) if len(row) < 14 or len(row) > 14: print( "Skipping this Row as it doesn't match the format: Current_model_name,New_model_name,Current_dataset_Name,New_dataset_name,Version(s),Version_to_import,Import? Bulk import script 67 Amazon Lookout for Equipment (yes/ User Guide no),Target_account_id,Source_dataset_arn,Source_model_arn,Label_s3_bucket,Label_s3_prefix,Role_arn,kms_key_id") continue if row[6].lower() == "no": print(f"skipping import for model {row[9]}") with open(final_result_file, "a") as f: f.write( f"{row[9]},No,skipped import as the input file says 'no' for import \n") continue if row[0] == "Current_model_name" and row[1] == "New_model_name": continue is_dataset_import_success = True # import dataset logic if 'dataset' in row[8]: is_dataset_import_success = False print("Triggering import for dataset:", row[8]) datasetnamefinal = None if row[3] == "None": datasetnamefinal = row[8].split(":")[5].split("/")[1] else: datasetnamefinal = row[3] import_status = None request = { 'SourceDatasetArn': row[8], 'DatasetName': datasetnamefinal, 'ClientToken': client_token } if row[13] != "None": request['ServerSideKmsKeyId'] = row[13] try: response = lookoutequipment_client.import_dataset(**request) print("Latest response for the import dataset is:", response) import_status = response.get("Status") if import_status == "SUCCESS": is_dataset_import_success = True except Exception as e: print("Error while importing a dataset:", e) with open(final_result_file, "a") as f: f.write( f"{row[8]},No,dataset,{row[2]},{row[3]},None,{e}\n") Bulk import script 68 Amazon Lookout for Equipment continue User Guide timeout_seconds = 900 # 15 minutes in seconds start_time = time.time() print("Latest import_status for dataset is:", import_status) while import_status != "SUCCESS" and is_dataset_import_success != True: response = lookoutequipment_client.import_dataset(**request) print("Latest response for the import dataset is:", response) import_status = response.get("Status") if import_status == "SUCCESS": is_dataset_import_success = True print("Import dataset completed for arn:", row[8]) with open(final_result_file, "a") as f: f.write( f"{row[8]},Yes,dataset,{row[2]},{row[3]},None,\n") if import_status == "FAILED": print("import dataset has failed hence skipping the import model") with open(final_result_file, "a") as f: f.write( f"{row[8]},No,dataset,{row[2]},{row[3]},None,check ingestion job {response.get('JobId')} failure reason\n") continue elapsed_time = time.time() - start_time if elapsed_time >= timeout_seconds: print("Timeout reached. 
Exiting..") is_dataset_import_success = False with open(final_result_file, "a") as f: f.write( f"{row[8]},No,dataset,{row[2]},{row[3]},None,Timed out checking the success status for import\n") continue time.sleep(15) # import model logic if 'model' in row[9] and is_dataset_import_success: is_model_import_success = False model_version_arn = row[9] + "/model-version/" + row[5] print("Triggering import for model version:", model_version_arn) new_model_name = row[1] request = { 'SourceModelVersionArn': model_version_arn, 'DatasetName': datasetnamefinal, 'ModelName': new_model_name, Bulk import script 69 Amazon Lookout for Equipment User Guide 'ClientToken': client_token } if row[13] != "None": request['ServerSideKmsKeyId'] = row[13] if row[12] != "None": request['RoleArn'] = row[12] if row[10] != "None" and row[11] != "None": # populate label bucket and prefix if provided labels_configuration['S3InputConfiguration']['Bucket'] = row[10] labels_configuration['S3InputConfiguration']['Prefix'] = row[11] request['LabelsInputConfiguration'] = labels_configuration import_status = None try: response = lookoutequipment_client.import_model_version( **request) print("Latest response for the import model is:", response) import_status = response.get("Status") if import_status == "SUCCESS": is_model_import_success = True except Exception as e: print("Error while importing the model:", e) with open(final_result_file, "a") as f: f.write( f"{model_version_arn},No,model_version,{row[0]},{row[1]}, {row[5]},{e}\n") continue timeout_seconds = 900 # 15 minutes in seconds start_time = time.time() while import_status != "SUCCESS" and is_model_import_success != True: response = lookoutequipment_client.import_model_version( **request) import_status = response.get("Status") print("Latest response for the import model is:", response) if import_status == "SUCCESS": is_model_import_success = True activate_model_version(response.get("ModelName"), response.get( "ModelVersion"), model_version_arn) with open(final_result_file, "a") as f: Bulk import script 70 Amazon Lookout for Equipment User Guide f.write( f"{model_version_arn},Yes,model_version,{row[0]},{row[1]}, {row[5]},None\n") if import_status == "FAILED": print("Import model failed for arn:", model_version_arn) with open(final_result_file, "a") as f: f.write( f"{model_version_arn},No,model_version,{row[0]},{row[1]}, {row[5]},check model version arn {response.get('ModelVersionArn')} details to know the failure reason\n") continue elapsed_time = time.time() - start_time if elapsed_time >= timeout_seconds: print("Timeout reached. Exiting..") with open(final_result_file, "a") as f: f.write( f"{model_version_arn},No,Timed out checking the success status for import\n") continue time.sleep(15) print("Import model completed for arn:", model_version_arn) print( f"Import for all the dataset/models in the input file is completed, Check the results file {final_result_file} for details") Bulk import script 71 Amazon Lookout for Equipment User Guide Scheduling inference Note You can also schedule inference with the AWS SDK for Python (Boto). Starting inference After you create a model, you can use it to monitor your asset in real time. To use your model to monitor your asset, you do the following. To schedule inference, you specify the model, the schedule, the Amazon S3 location of where the model is reading the data, and where it outputs the results of the inference. 1. 
Sign in to the AWS Management Console and open the Amazon Lookout for Equipment console. 2. Choose Models. Then choose the model that monitors your asset. 3.
Choose Schedule inference. 4. For Inference schedule name, specify the name for the inference schedule. 5. For Model, choose the model that is monitoring the data coming from your asset. 6. For S3 location under Input data, specify the Amazon S3 location of the input data coming from the asset. 7. For Data upload frequency, specify how often your asset sends the data to the Amazon S3 bucket. 8. For S3 location under Output data, specify the Amazon S3 location to store the output of the inference results. 9. For IAM role under Access Permissions, specify the IAM role that provides Amazon Lookout for Equipment with access to your data in Amazon S3. 10. Choose Schedule inference. Managing inference schedules Stopping inference This section explains how to halt the inference process. 1. From the AWS console, under Lookout for Equipment, in the left navigation pane, choose Inference schedules. 2. If necessary, choose the Active schedules tab. 3. Select the schedule that you want to stop. 4. Choose Stop. 5. Choose Stop schedule. 6. Your stopped schedule will appear on the Inactive schedules tab. Resuming inference This section explains how to resume a stopped inference schedule. 1. From the AWS console, under Lookout for Equipment, in the left navigation pane, choose Inference schedules. 2. If necessary, choose the Inactive schedules tab. 3. Choose Set as active. 4. Your stopped schedule will appear on the Active schedules tab. Editing an active schedule 1. From the AWS console, under Lookout for Equipment, in the left navigation pane, choose Inference schedules. 2. If necessary, choose the Active schedules tab. 3. Select the schedule that you want to edit. 4. Choose Edit. 5. In the pop-up window, choose Edit. Note After you finish editing an inference schedule, the schedule returns to the activation status that it was in before you started editing. A schedule that was active before editing will return to active status after editing. Editing an inactive schedule 1. From the AWS console, under Lookout for Equipment, in the left navigation pane, choose Inference schedules. 2. If necessary, choose the Inactive schedules tab. 3. Select the schedule that you want to edit. 4. Choose Edit. 5. In the pop-up window, choose Edit. Note After you finish editing an inference schedule, the schedule returns to the activation status that it was in before you started editing. A schedule that was inactive before editing will remain inactive after editing. To re-activate it, you must select the schedule on the Inactive schedules page and choose Set as active. Delete an active schedule 1. From the AWS console, under Lookout for Equipment, in the left navigation pane, choose Inference schedules. 2. If necessary, choose the Active schedules tab. 3. Select the schedule that you want to delete. 4. Choose Delete. 5. In the pop-up window, choose Stop to indicate that you are going to stop the schedule before deleting it. 6. In the pop-up window, enter delete in the text field. 7. In the pop-up window, choose delete. Delete an inactive schedule 1. From the AWS console, under Lookout for Equipment, in the left navigation pane, choose Inference schedules. 2. If necessary, choose the Inactive schedules tab. 3. Select the schedule that you want to delete. 4. Choose Delete. 5. In the pop-up window, enter delete in the text field. 6. In the pop-up window, choose delete.
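The same stop, start, and delete operations can also be performed with the AWS SDK for Python (Boto3). The following is a minimal sketch, not taken from the scripts in this guide; the scheduler name and Region are placeholder assumptions, and it presumes your AWS credentials are already configured.

import boto3

# Placeholder name; replace with the name of your own inference scheduler.
SCHEDULER_NAME = "my-pump-inference-scheduler"

lookoutequipment = boto3.client("lookoutequipment", region_name="us-east-1")

# Stop an active schedule (equivalent to Stop / Stop schedule in the console).
lookoutequipment.stop_inference_scheduler(InferenceSchedulerName=SCHEDULER_NAME)

# Resume a stopped schedule (equivalent to Set as active in the console).
lookoutequipment.start_inference_scheduler(InferenceSchedulerName=SCHEDULER_NAME)

# List schedulers to confirm names and current status.
for scheduler in lookoutequipment.list_inference_schedulers().get(
        "InferenceSchedulerSummaries", []):
    print(scheduler["InferenceSchedulerName"], scheduler["Status"])

# A scheduler must be stopped before it can be deleted.
lookoutequipment.stop_inference_scheduler(InferenceSchedulerName=SCHEDULER_NAME)
lookoutequipment.delete_inference_scheduler(InferenceSchedulerName=SCHEDULER_NAME)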
Understanding the inference process When you're planning your use of Lookout for Equipment, it may be useful to understand exactly what happens at each step of the inference process. Understanding inference scheduling windows • When you schedule inference, you may set your data upload frequency time to any of the following values, in minutes: 5, 10, 15, 30, 60. • Lookout for Equipment then calculates the base number of segments per hour by dividing 60 by the length of your segments. • You may also set an offset window in increments of minutes, from 0 to 60. • At the beginning of each segment, Lookout for Equipment waits for the offset window to close before running
inference. • At the top of the hour, the process begins again.

Inference interval (minutes) | Inferences per hour | First inference after 09:00 (no offset) | First inference after 09:00 (5-minute offset)
5 | 12 | 09:05 | 09:10
10 | 6 | 09:10 | 09:15
15 | 4 | 09:15 | 09:20
30 | 2 | 09:30 | 09:35
60 | 1 | 10:00 | 10:05

The inference process 1. Lookout for Equipment looks for the component name (which can be the name of an asset or a sensor, depending on how your data was ingested). 2. Once the component name is found in the file name, Lookout for Equipment looks at the timestamp in the CSV file name. 3. The timestamp in the file name must be within the range of time that your scheduler is running. For example, if the scheduler is running every 5 minutes, then at 9:05, Lookout for Equipment will look for any files that have a timestamp from 9:00 to 9:05. Any files with timestamps outside this range will be ignored for the inference run. 4. Lookout for Equipment automatically ingests the files with the right component name and within the right time range. 5. Lookout for Equipment opens the CSV file and runs inference on any rows in the CSV file with timestamps that fit within the scheduler window. For example, if the scheduler is running every 5 minutes and the current time is 9:05, then Lookout for Equipment will grab any files with a timestamp in the file name from 9:00 to 9:05, and will then run inference on any rows in the CSV with timestamps between 9:00 and 9:05. 6. The inference results are placed into your designated output bucket in a JSON file. 7. The steps above are repeated until the scheduler is turned off. Reviewing inference results After you've scheduled inference, you are able to see how your equipment is operating. Topics • Reviewing inference results in the console • Reviewing inference results in a JSON file Reviewing inference results in the console Using the main inference schedules page On the inference schedules main page you'll find your list of inference schedules, both active and inactive (on different tabs). For each schedule, you'll find the model name, data upload frequency, and latest results. In this context, latest results means the results from the most recent inference run. To edit, delete, stop, or restart a schedule, see Managing inference schedules. Using the inference schedule detail page On the inference schedule detail page you'll find details about the anomalous behavior of your assets, as presented in the context of a particular inference schedule. You'll also find metadata about the schedule itself. At the top of the results tab are the 7-day inference results. These results provide information about anomalous behavior that occurred over the past week. Latest results refers to results from the latest inference run. 7-day results indicates the percentage of hours during the last seven days during which an anomaly was detected. Use the slider to zoom in on a particular event (red bar). Click on a particular event (red bar) to view details about it.
After you click on a particular event, the Event details tab indicates which sensors contributed the most to that event. Note Lookout for Equipment only records events that last longer than 5 minutes. Reviewing inference results in a JSON file The JSON file containing the inference results is stored in the Amazon Simple Storage Service (Amazon S3) bucket that you've specified. For the sensor data that your asset sends to Amazon S3, Amazon Lookout for Equipment marks the group of readings as either normal or abnormal. For each group of abnormal readings, you can see the sensors that Lookout for Equipment
used to indicate that the equipment is behaving abnormally. The following shows example JSON output. {"timestamp": "2021-03-11T22:24:00.000000", "prediction": 0, "prediction_reason": "MACHINE_OFF"} {"timestamp": "2021-03-11T22:25:00.000000", "prediction": 1, "prediction_reason": "ANOMALY_DETECTED", "anomaly_score": 0.72385, "diagnostics": [{"name": In a JSON file 81 Amazon Lookout for Equipment User Guide "component_5feceb66\\sensor0", "value": 0.02346}, {"name": "component_5feceb66\ \sensor1", "value": 0.10011}, {"name": "component_5feceb66\\sensor2", "value": 0.11162}, {"name": "component_5feceb66\\sensor3", "value": 0.14419}, {"name": "component_5feceb66\\sensor4", "value": 0.12219}, {"name": "component_5feceb66\ \sensor5", "value": 0.14936}, {"name": "component_5feceb66\\sensor6", "value": 0.17829}, {"name": "component_5feceb66\\sensor7", "value": 0.00194}, {"name": "component_5feceb66\\sensor8", "value": 0.05446}, {"name": "component_5feceb66\ \sensor9", "value": 0.11437}]} {"timestamp": "2021-03-11T22:26:00.000000", "prediction": 0, "prediction_reason": "NO_ANOMALY_DETECTED", "anomaly_score": 0.41227, "diagnostics": [{"name": "component_5feceb66\\sensor0", "value": 0.03533}, {"name": "component_5feceb66\ \sensor1", "value": 0.24063}, {"name": "component_5feceb66\\sensor2", "value": 0.06327}, {"name": "component_5feceb66\\sensor3", "value": 0.08303}, {"name": "component_5feceb66\\sensor4", "value": 0.18598}, {"name": "component_5feceb66\ \sensor5", "value": 0.10839}, {"name": "component_5feceb66\\sensor6", "value": 0.08721}, {"name": "component_5feceb66\\sensor7", "value": 0.06792}, {"name": "component_5feceb66\\sensor8", "value": 0.1309}, {"name": "component_5feceb66\ \sensor9", "value": 0.07735}]} {"timestamp": "2021-03-11T22:27:00.000000", "prediction": 0, "prediction_reason": "NO_ANOMALY_DETECTED", "anomaly_score": 0.10541, "diagnostics": [{"name": "component_5feceb66\\sensor0", "value": 0.02533}, {"name": "component_5feceb66\ \sensor1", "value": 0.34063}, {"name": "component_5feceb66\\sensor2", "value": 0.07327}, {"name": "component_5feceb66\\sensor3", "value": 0.03303}, {"name": "component_5feceb66\\sensor4", "value": 0.18598}, {"name": "component_5feceb66\ \sensor5", "value": 0.10839}, {"name": "component_5feceb66\\sensor6", "value": 0.08721}, {"name": "component_5feceb66\\sensor7", "value": 0.06792}, {"name": "component_5feceb66\\sensor8", "value": 0.1309}, {"name": "component_5feceb66\ \sensor9", "value": 0.07735}]} {"timestamp": "2021-03-11T22:28:00.000000", "prediction": 0, "prediction_reason": "NO_ANOMALY_DETECTED", "anomaly_score": 0.24867, "diagnostics": [{"name": "component_5feceb66\\sensor0", "value": 0.04533}, {"name": "component_5feceb66\ \sensor1", "value": 0.14063}, {"name": "component_5feceb66\\sensor2", "value": 0.08327}, {"name": "component_5feceb66\\sensor3", "value": 0.07303}, {"name": "component_5feceb66\\sensor4", "value": 0.18598}, {"name": "component_5feceb66\ \sensor5", "value": 0.10839}, {"name": "component_5feceb66\\sensor6", "value": 0.08721}, {"name": "component_5feceb66\\sensor7", "value": 0.06792}, {"name": "component_5feceb66\\sensor8", "value": 0.1309}, {"name": "component_5feceb66\ \sensor9", "value": 0.07735}]} {"timestamp": "2021-03-11T22:29:00.000000", "prediction": 1, "prediction_reason": "ANOMALY_DETECTED", "anomaly_score": 0.79376, "diagnostics": [{"name": "component_5feceb66\\sensor0", "value": 0.04533}, {"name": "component_5feceb66\ \sensor1", "value": 0.14063}, {"name": "component_5feceb66\\sensor2", "value": 0.08327}, {"name": 
"component_5feceb66\\sensor3", "value": 0.07303}, {"name": "component_5feceb66\\sensor4", "value": 0.18598}, {"name": "component_5feceb66\ In a JSON file 82 Amazon Lookout for Equipment User Guide \sensor5", "value": 0.10839}, {"name": "component_5feceb66\\sensor6", "value": 0.08721}, {"name": "component_5feceb66\\sensor7", "value": 0.06792}, {"name": "component_5feceb66\\sensor8", "value": 0.1309}, {"name": "component_5feceb66\ \sensor9", "value": 0.07735}]} For the prediction field, a value of 1 indicates abnormal equipment behavior. A value of 0 indicates normal equipment behavior. If the value of prediction_reason isn't MACHINE_OFF, Amazon Lookout for Equipment returns an object that contains a diagnostics list, regardless of the value of prediction. The diagnostics list has the name of the sensors and the weights of the sensors' contributions in indicating abnormal equipment behavior. For each sensor, the name field indicates the name of the sensor. The value field indicates the percentage of the sensor's contribution to the prediction value. By seeing the percentage of each sensor's contribution to the prediction value, you can see how the data from each sensor was weighted. The anomaly score is a value between 0 and 1 that indicates the intensity of the anomaly. The prediction reason can be ANOMALY_DETECTED, NO_ANOMALY_DETECTED or MACHINE_OFF. In a JSON file 83 Amazon Lookout for Equipment User Guide Viewing your ingestion history To view your ingestion history: 1. Go to the main page for your dataset. (Amazon Lookout for Equipment -> Projects -> [asset name] -> Dataset 2. Select the Ingestion history tab. For each ingestion job that succeeded, you may also view the associated logs in Amazon CloudWatch. The log group for your Lookout for Equipment logs will be /aws/ lookoutequipment/ingestion. The logstream name will be the ingestion job ID. For more information, see Publishing information about ingestion validation to Amazon CloudWatch Logs. 84 Amazon Lookout for Equipment User Guide Replacing your dataset Replacing your dataset allows you to change the data without re-creating the project from the beginning. You may want to do this after reviewing the ingestion of your dataset, and addressing problems with the job, files, or sensors. Note If you want to change your schema, then you must start over with a new project. The Replace dataset page is similar to the Ingest dataset page that you visited earlier in the workflow. The main difference is that the Replace dataset page does not ask you for information about how you named your .csv files. When you replace a dataset, Lookout for Equipment re-uses the schema detection information that you entered before. To replace your dataset: • From the Dataset details screen, choose Replace dataset. • On the Replace dataset page, indicate the location of your data on Amazon S3 and choose your IAM role. • Choose Start ingestion. After the procedure above, you'll review your dataset ingestion once again. 85 Amazon Lookout for Equipment User Guide Best practices with Lookout for Equipment Training a machine learning (ML) model can involve inputs from up to 300 sensors, and you can have up to 3000 sensors represented in a single dataset. We highly recommend that you consult a subject matter expert (SME) when setting up Lookout for Equipment to monitor your equipment. This will help you get the most out of Lookout for Equipment. We also recommend that you understand and follow the best practices described in this topic. There
After the procedure above, you'll review your dataset ingestion once again. 85 Amazon Lookout for Equipment User Guide Best practices with Lookout for Equipment Training a machine learning (ML) model can involve inputs from up to 300 sensors, and you can have up to 3000 sensors represented in a single dataset. We highly recommend that you consult a subject matter expert (SME) when setting up Lookout for Equipment to monitor your equipment. This will help you get the most out of Lookout for Equipment. We also recommend that you understand and follow the best practices described in this topic. There are three key pillars essential to setting up Lookout for Equipment for the best possible results: • Selecting the right application • Selecting the right data inputs • Working with SMEs to select the inputs and evaluate the results Choosing the right application Choosing the right application of Lookout for Equipment involves finding the right combination of business value, equipment operations, and available data. You determine this by working directly with a subject matter experts (SME) on your equipment. Your team should consider the following: • The high cost of downtime – Equipment that can either be costly to fix or that is critical to a process is a prime candidate for monitoring. • Consistency in operations – Lookout for Equipment works best on equipment that is stationary and primarily does a continuous, stable task. A heavy duty pump that is permanently installed in a location is a good example. • Relevant data – Having data that is relevant to the critical aspects of the equipment is essential. Your equipment should have sensors that monitor these critical aspects, so that they can provide data that is relevant to how your equipment could fail. Having this data can make the difference between inference results that can effectively catch potential failures and abnormal behavior, and results that don't. • Significant historical data – Ideally, the data you use to train the machine learning (ML) model should represent all of the equipment's operating modes. For instance, when creating a model for a pump with variable speeds, the dataset should contain measurements that include an adequate amount of historical data for all of the pump speeds. For effective analysis, Choosing the right application 86 Amazon Lookout for Equipment User Guide Lookout for Equipment should have at least six months of historical data, although a longer history is preferred. For equipment affected by seasonality, at least one year of data is highly recommended. • List of historical failures (that is, labels) – Lookout for Equipment uses data on historical failures to enhance the model's knowledge of normal equipment conditions. It looks for abnormal behavior that occurred ahead of historical failures. With more examples of historical failures, Lookout for Equipment can better develop its knowledge of healthy conditions and the unhealthy conditions that occur prior to failures. The definition of a failure can be subjective, but we have found that looking for issues that cause unplanned downtime is a good method to identify failure. For best results, give Lookout for Equipment label data for every known time period where the equipment had issues or abnormal behavior. Note Lookout for Equipment is ultimately dependent on your data. We cannot guarantee that there are patterns in your data that will enable Lookout for Equipment to detect failures. 
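To make the list of historical failures described above more concrete, here is a minimal sketch that records one failure window as label data using the AWS SDK for Python (Boto3). The label group name and time range are hypothetical placeholders, and the sketch assumes you are managing labels through the CreateLabelGroup and CreateLabel API operations rather than a label file in Amazon S3.

import uuid
from datetime import datetime, timezone

import boto3

lookoutequipment = boto3.client("lookoutequipment", region_name="us-east-1")

# Hypothetical label group name; use whatever naming fits your project.
label_group_name = "pump-failure-labels"

lookoutequipment.create_label_group(
    LabelGroupName=label_group_name,
    ClientToken=str(uuid.uuid4()),
)

# Record one historical failure window (the start and end times are placeholders).
lookoutequipment.create_label(
    LabelGroupName=label_group_name,
    StartTime=datetime(2023, 4, 10, 6, 0, tzinfo=timezone.utc),
    EndTime=datetime(2023, 4, 12, 18, 0, tzinfo=timezone.utc),
    Rating="ANOMALY",
    ClientToken=str(uuid.uuid4()),
)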
Determining the right set of inputs might require multiple iterations through the Lookout for Equipment model training and monitoring process. For the greatest chance of success, we highly recommend working with a subject matter expert to identify the right application and data. Choosing the right data Your dataset should contain time-series data that's generated from an industrial asset such as a pump, compressor, motor, and so on. Each asset should be generating data from one or more sensors. The data that Lookout for Equipment uses for training should be representative of the condition and operation of the asset. Making sure that you have the right data is crucial. We recommend that you work with a SME. A SME can help you make sure that the data is relevant to the aspect of the asset that you're trying to analyze. We recommend
that you remove unnecessary sensor data. With data from too few sensors, you might miss critical information. With data from too many sensors, your model might overfit the data and it might miss out on key patterns. Important Choosing the right input data is crucial to the success of using Lookout for Equipment. It might take multiple iterations of trial and error to find the right inputs. We cannot Choosing the right data 87 Amazon Lookout for Equipment User Guide guarantee results. Success is highly dependent on the relevancy of your data to equipment issues. Use these guidelines to choose the right data: • Use only numerical data – Remove nonnumerical data. Lookout for Equipment can't use non- numerical data for analysis. • Use only analog data – Use only analog data (that is, many values that vary over time). Using digital values (also known as categorical values, or values that can be only one of a limited number of options), such as valve positions or set points, can lead to inconsistent or misleading results. • Remove continuously increasing data – Remove data that is just an ever-increasing number, such as operating hours or mileage. • Use data for the relevant component or subcomponent – You can use Lookout for Equipment to monitor an entire asset (such as a pump) or just a subcomponent (such as a pump motor). Determine where your downtime issues occur and choose the component or subcomponent that has the greater effect on that. When formatting a predictive maintenance problem, consider these guidelines: • Data size – Although Lookout for Equipment can ingest more than 50 GB of data, it can use only 7 GB with a model. Factors such as the number of sensors used, how far back in history the dataset goes, and the sample rate of the sensors can all determine how many measurements this amount of data can include. This amount of data also includes the missing data imputed by Lookout for Equipment. • Missing data – Lookout for Equipment automatically fills in missing data (known as imputing). It does this by forward filling previous sensor readings. However, if too much original data is missing, it might affect your results. • Sample rate – Sample rate is the interval at which the sensor readings are recorded. Use the highest frequency sample rate possible without exceeding the data size limit. The sample rate and data size might also increase your ML model training time. Lookout for Equipment handles any timestamp misalignment. • Number of sensors – Lookout for Equipment can train a model with data from up to 300 sensors. However, having the right data is more important than the quantity of data. More is not necessarily better. Choosing the right data 88 Amazon Lookout for Equipment User Guide • Vibration – Although vibration data is usually important for identifying potential failure, Lookout for Equipment does not work with raw high-frequency data. When using high-frequency vibration data, first generate the key values from the vibration data, such as RMS and FFT. Filtering for normal data Make sure that you use only data from normal (standard) operations. To do this, identify a key operating metric that indicates that the equipment is operating in a standard fashion. For example, when operating a compressor in a refinery, the key metric is usually production flow rate. In this case, you would need to filter out times when the production flow rate is below normal due to reduced production or any reason other than abnormal behavior. 
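As a minimal sketch of that kind of filtering, assume the sensor history has been exported to a CSV file with a flow_rate column and that a minimum normal flow threshold is known for the unit; the file name, column names, and threshold below are hypothetical.

import pandas as pd

# Hypothetical training export; adjust the file, columns, and threshold to your data.
df = pd.read_csv("compressor_history.csv", parse_dates=["Timestamp"])

MIN_NORMAL_FLOW = 1200.0  # assumed lowest flow rate seen during normal production

# Keep only rows recorded while the unit was in normal production.
normal_df = df[df["flow_rate"] >= MIN_NORMAL_FLOW]

normal_df.to_csv("compressor_history_normal_only.csv", index=False)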
Other examples of key metrics might be RPM, fuel efficiency, "run" state, availability, and so on. Lookout for Equipment assumes that the data is normal. Making sure that the data fits this assumption is very important. Using failure labels To provide insight into past events, Lookout for Equipment uses labels that call out these events for the ML model. Providing this data is optional, but if it's available, it can help train your model more accurately and efficiently. For information about using labels, see Understanding labeling and Labeling your data. Evaluating the output After a model is trained, Lookout for Equipment evaluates its performance on a subset of the dataset that you've
specified for evaluation purposes. It displays results that provide an overview of the performance and detailed information about the abnormal equipment behavior events and how well the model performed when detecting those. Using the data and failure labels that you provided for training and evaluating the model, Lookout for Equipment reports how many times the model's predictions were true positives (how often the model found the equipment anomaly that was noted within the ranges shown in the labels). Within a labeled time range, the forewarning time represents the duration between the earliest time when the model found an anomaly and the end of the labeled time range. For example, if Lookout for Equipment reports that "6/7 abnormal equipment behavior events were detected within label ranges with an average forewarning time of 32 hrs," in 6 out of the 7 Filtering for normal data 89 Amazon Lookout for Equipment User Guide labeled events, the model detected that event and averaged 32 hours of forewarning. In one case, it did not detect the event. Lookout for Equipment also reports the abnormal behavior events that were not related to a failure, along with the duration of these abnormal behavior events. For example, if it reports that "5 abnormal equipment behavior events were detected outside the label range with an average duration of 4 hrs," the model thought an event was occurring in 5 cases. An abnormal behavior event such as this one might be attributed to someone erroneously operating the equipment for a period of time or a normal operating mode that you haven't seen previously. Lookout for Equipment also displays this information graphically on a chart that shows the days and events and in a table. Lookout for Equipment provides detailed information about the anomalous events that it detects. It displays a list of sensors that provided the data to indicate an anomalous event. This might help you determine which part of your asset is behaving abnormally. Improving your results To improve the results, consider the following: • Did unrecorded maintenance events, system inefficiencies, or a new normal operating mode happen during the time of flagged anomalies in the test set? If so, the results indicate those situations. Change your train-evaluation splits so that each normal mode is captured during model training. • Are the sensor inputs relevant to the failure labels? In other words, is it possible that the labels are related to one component of the equipment but the sensors are monitoring a different component? If so, consider building a new model where the sensor inputs and labels are relevant to each other and drop any irrelevant sensors. Alternatively, drop the labels you're using and train the model only on the sensor data. • Is the label time zone the same as the sensor data time zone? If not, consider adjusting the time zone of your label data to align with sensor data time zone. • Is the failure label range inadequate? In other words, could there be anomalous behavior outside of the label range? This can happen for a variety of reasons, such as when the anomalous behavior was observed much earlier than the actual repair work. If so, consider adjusting the range accordingly. • Are there data integrity issues with your sensor data? For example, do some of the sensors become nonfunctional during the training or evaluation data? In that case, consider dropping Improving your results 90 Amazon Lookout for Equipment User Guide those sensors when you run the model. 
Alternatively, use a training-evaluation split that filters out the non-functional part of the sensor data. • Does the sensor data include uninteresting normal-operating modes, such as off-periods or ramp-up or ramp-down periods? Consider filtering those out of the sensor data. • We recommend that you avoid using data that contains monotonically increasing values, such as operating hours or mileage. Consulting subject matter experts Lookout for Equipment identifies patterns in the dataset that help to detect critical issues, but it's the responsibility of a technician or subject matter expert (SME) to diagnose the problem and take corrective action, if needed. To ensure that you are getting the right output, we highly recommend that you work with a SME. The
SME should help you make sure that you are using the right input data and that your output results are actionable and relevant. Consulting subject matter experts 91 Amazon Lookout for Equipment User Guide Use case: fluid pump Example Lookout for Equipment is designed primarily for stationary industrial equipment that operates continuously. This includes many types of equipment, including pumps, compressors, motors, and turbines. As an example of using Lookout for Equipment on data from a high-level machine , let's look at a fluid pump. In a simplified form, this fluid pump consists of three major components and their sensors. Note that this is just an example and not a complete list of features and components of such a pump. 1. Motor – The motor converts electricity into mechanical rotation. Key sensors might measure the current, voltage, or revolutions per minute (RPM). 2. Bearing – Bearings keep the rotating shaft in position while allowing it to rotate with minimal friction. Key sensors measure vibration. 3. Pump – The pump is an impeller that rotates on the shaft and pulls fluid from one direction and forces fluid in another direction, similar to a boat propeller. Key sensors measure inlet and outlet flow rate, pressure, and temperature. For a simple application of Amazon Lookout for Equipment, let's say that the only available data consists of measurements of how fast the pump is spinning in RPM, and the outlet flow rate of the fluid. The following historical time-series plots show both sets of measurements. 92 Amazon Lookout for Equipment User Guide These graphs show the expected relationship between RPM and flow rate: as the pump rotates faster, the fluid flows faster. The graphs show two operating modes: one with low RPM and a low flow rate, and a second mode with high RPM and high flow rate. In this case, Amazon Lookout for Equipment wants to learn this normal relationship in terms of operating modes. .The following graph shows another way to visualize the learned normal operating modes. The normal behavior of this pump is clear. The operator runs the pump at low RPM and high RPM in order to get a low flow rate or a high flow rate. As the pump continues to run, we expect that the 93 Amazon Lookout for Equipment User Guide data will continue to fall into one of these two operating modes. However, if the pump starts to have problems, this relationship might not hold true. Over time, the impeller (the part similar to a boat propeller) starts to rust, chip, loosen, or become misaligned. As this happens, the data might show abnormal behavior. When the pump rotates at higher a RPM, the flow rate remains low, as shown in the following graph. These types of issues are precisely what Amazon Lookout for Equipment is designed to detect. In this case, we see a simple representation of the normal operating states of the pump and abnormal behavior if the pump has an issue. The following graph shows a simplified view of how Lookout for Equipment detects the output over time. When the relationship between RPM and flow rate is normal, Lookout for Equipment detects that everything is normal. However, as the RPM increases but the flow rate stays the same, Lookout for Equipment starts detecting abnormal behavior. The vertical red line denotes the potential failure point for the pump, at which unplanned downtime occurs. 
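To make the two operating modes and the abnormal stretch described above more concrete, here is a small illustrative sketch (not part of this guide's scripts) that fabricates RPM and flow-rate readings of that shape and shows how the flow-to-RPM relationship breaks down at the end; all values are invented for illustration.

import numpy as np

rng = np.random.default_rng(seed=0)

# Two normal operating modes: (RPM, flow rate) around (1000, 50) and (2000, 100).
low_mode = np.column_stack([rng.normal(1000, 20, 500), rng.normal(50, 2, 500)])
high_mode = np.column_stack([rng.normal(2000, 20, 500), rng.normal(100, 2, 500)])

# Abnormal stretch: RPM climbs to the high mode but flow stays low,
# the impeller-degradation pattern described above.
abnormal = np.column_stack([rng.normal(2000, 20, 100), rng.normal(55, 2, 100)])

series = np.vstack([low_mode, high_mode, abnormal])

# The learned relationship in simple terms: flow per RPM drops sharply
# in the abnormal stretch compared with the two normal modes.
ratio = series[:, 1] / series[:, 0]
print("normal flow/RPM ratio:", round(float(ratio[:1000].mean()), 3))
print("abnormal flow/RPM ratio:", round(float(ratio[-100:].mean()), 3))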
This is a very simple example of a straightforward application with only two inputs (RPM and flow rate) that have a direct linear relationship with each other. The situation becomes dramatically more complex when we add additional inputs, such as pressure, temperature, motor current, motor voltage, bearing vibration, and so on. The more you increase the number of inputs, the more complex the relationships between all of the inputs become. With some equipment, the number of inputs can easily reach into the hundreds. In addition, this simplified example doesn't attempt to represent the time-series aspect of the problem: the model also has to learn the changes in relationships over time. For example, even subtle changes in vibration over time can be critical to detecting issues.

Amazon Lookout for Equipment works with up to 300 inputs at once. Keep in mind that to accurately analyze the data, Lookout for Equipment requires that the inputs are related to issues that you want to find, and that the historical data used to train (and evaluate) the model represents the equipment's normal behavior.
Quotas for using Lookout for Equipment

Supported Regions

For a list of AWS Regions where Lookout for Equipment is available, see AWS Regions and Endpoints in the AWS General Reference.

Quotas

Service quotas, also referred to as limits, are the maximum number of service resources for your AWS account. For more information, see AWS Service Quotas in the AWS General Reference.

Data ingestion
• Maximum number of components per dataset: 3,000
• Maximum number of datasets per account: 15
• Maximum number of pending data ingestion jobs per account: 5
• Maximum number of columns across components per dataset (excluding timestamp): 3,000
• Maximum number of files per component (per dataset): 1,000
• Maximum length of component name: 200 characters
• Maximum size per dataset: 50 GB
• Maximum size per file: 5 GB

Training and evaluation
• Maximum number of models per account: 15
• Maximum number of pending models per account: 5
• Maximum number of rows in training data (after resampling): 1.5 million
• Maximum number of rows in evaluation data (after resampling): 1.5 million
• Maximum number of components in training data: 300
• Maximum number of columns across components in training data (excluding timestamp): 300
• Minimum timespan of training data: 14 days

Inference
• Maximum number of inference schedulers per model: 1
• Maximum size of raw data in inference input data: 5 MB (5-minute scheduling frequency), 10 MB (10-minute), 15 MB (15-minute), 30 MB (30-minute), 60 MB (1-hour)
• Maximum number of rows in inference input data, after resampling: 300 (5-minute scheduling frequency), 600 (10-minute), 900 (15-minute), 1,800 (30-minute), 3,600 (1-hour)
• Maximum number of files per component (per inference execution): 60

Labels
• Maximum number of label groups per account: 15
• Maximum number of labels per label group: 3,000

Security in Amazon Lookout for Equipment

Cloud security at AWS is the highest priority. As an AWS customer, you benefit from data centers and network architectures that are built to meet the requirements of the most security-sensitive organizations.

Security is a shared responsibility between AWS and you. The shared responsibility model describes this as security of the cloud and security in the cloud:
• Security of the cloud – AWS is responsible for protecting the infrastructure that runs AWS services in the AWS Cloud. AWS also provides you with services that you can use securely. Third-party auditors regularly test and verify the effectiveness of our security as part of the AWS Compliance Programs. To learn about the compliance programs that apply to Amazon Lookout for Equipment, see AWS Services in Scope by Compliance Program.
• Security in the cloud – Your responsibility is determined by the AWS service that you use. You are also responsible for other factors including the sensitivity of your data, your company's requirements, and applicable laws and regulations.

This documentation helps you understand how to apply the shared responsibility model when using Lookout for Equipment. The following topics show you how to configure Lookout for Equipment to meet your security and compliance objectives. You also learn how to use other AWS services that help you to monitor and secure your Lookout for Equipment resources.

Topics
• Data protection in Amazon Lookout for Equipment
• Identity and access management for Amazon Lookout for Equipment
• Amazon Lookout for Equipment and interface VPC endpoints (AWS PrivateLink)
• Compliance validation for Amazon Lookout for Equipment
• Resilience in Amazon Lookout for Equipment
• Infrastructure security in Amazon Lookout for Equipment
Data protection in Amazon Lookout for Equipment

Amazon Lookout for Equipment conforms to the AWS shared responsibility model, which includes regulations and guidelines for data protection. AWS is responsible for protecting the global infrastructure that runs all AWS services. AWS maintains control over data hosted on this infrastructure, including the security configuration controls for handling customer content and personal data. AWS customers and APN partners, acting either as data controllers or data processors, are responsible for any personal data that they put in the AWS Cloud.

For data protection purposes, we recommend that you protect AWS account credentials and set up individual user accounts with AWS Identity and Access Management (IAM), so that each user is given only the permissions necessary to fulfill their job duties. We also recommend that you secure your data in the following ways:
• Use multi-factor authentication (MFA) with each account.
• Use SSL/TLS to communicate with AWS resources. We recommend TLS 1.2 or later.
• Set up API and user activity logging with AWS CloudTrail.
• Use AWS encryption solutions, along with all default security controls within AWS services.
• Use advanced managed security services such as Amazon Macie, which assists in discovering and securing personal data that is stored in Amazon Simple Storage Service (Amazon S3).

We strongly recommend that you never put sensitive identifying information, such as your customers' account numbers, into free-form fields such as a Name field. This includes when you work with Amazon Lookout for Equipment or other AWS services using the console, API, AWS CLI, or AWS SDKs. Any data that you enter into Amazon Lookout for Equipment or other services might get picked up for inclusion in diagnostic logs. When you provide a URL to an external server, don't include credentials information in the URL to validate your request to that server.

For more information about data protection, see the AWS Shared Responsibility Model and GDPR blog post on the AWS Security Blog.

Topics
• Encryption at rest
• Encryption in transit
• Key management

Encryption at rest

Amazon Lookout for Equipment encrypts your data at rest with your choice of an encryption key. You can choose one of the following:
• An AWS owned key. If you don't specify an encryption key, your data is encrypted with this key by default.
• A customer managed key. You can provide the Amazon Resource Name (ARN) of an encryption key that you created in your account. When you use a customer managed key, you must give the key a key policy that enables Amazon Lookout for Equipment to use the key. You must choose a symmetric customer managed key. Amazon Lookout for Equipment doesn't support asymmetric customer managed keys. For more information, see Key management.
• Amazon Lookout for Equipment follows the Amazon S3 bucket encryption policy. You have to set Amazon S3 default encryption on your bucket to encrypt objects stored in your bucket by Amazon Lookout for Equipment. For more information, see S3 bucket encryption.
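As an illustration of specifying a customer managed key, the following boto3 sketch passes a key ARN when creating a dataset. This is a minimal sketch, not taken from this guide: the key ARN, dataset name, and schema contents are placeholders, and the ServerSideKmsKeyId parameter and schema shape should be verified against the Lookout for Equipment API reference for your SDK version.

import json
import boto3

lookout = boto3.client("lookoutequipment")

# Hypothetical schema with a single "pump" component; replace with your own sensors.
schema = {
    "Components": [
        {
            "ComponentName": "pump",
            "Columns": [
                {"Name": "Timestamp", "Type": "DATETIME"},
                {"Name": "rpm", "Type": "DOUBLE"},
                {"Name": "flow_rate", "Type": "DOUBLE"},
            ],
        }
    ]
}

response = lookout.create_dataset(
    DatasetName="pump-dataset",
    DatasetSchema={"InlineDataSchema": json.dumps(schema)},
    # Customer managed key; Lookout for Equipment must be allowed to use it (see Key management).
    ServerSideKmsKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
)
print(response["DatasetArn"])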
Encryption in transit

Amazon Lookout for Equipment copies data out of your account and processes it in an internal AWS system. Amazon Lookout for Equipment uses TLS 1.2 with AWS certificates to encrypt data sent to other AWS services.

Key management

Amazon Lookout for Equipment encrypts your data using one of the following types of keys:
• An AWS owned key. This is the default.
• A customer managed key. You can create the key when you create an Amazon Lookout for Equipment dataset, model, or inference scheduler, or you can create the key using the AWS Key Management Service (AWS KMS) console. Choose a symmetric customer managed key; Amazon Lookout for Equipment doesn't support asymmetric customer managed keys. For more information, see Using symmetric and asymmetric keys in the AWS Key Management Service Developer Guide.
When you create a key using the AWS KMS console, you can give the key the following policy, which enables users or roles to use the key with Amazon Lookout for Equipment. For more information, see Using key policies in AWS KMS in the AWS Key Management Service Developer Guide.

{
    "Effect": "Allow",
    "Sid": "Allow to use the key with Amazon Lookout for Equipment",
    "Principal": {
        "AWS": "IAM USER OR ROLE ARN"
    },
    "Action": [
        "kms:DescribeKey",
        "kms:CreateGrant",
        "kms:RetireGrant"
    ],
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "kms:ViaService": [
                "lookoutequipment.Region.amazonaws.com"
            ]
        }
    }
},
{
    "Effect": "Allow",
    "Sid": "Allow to view the key in the console",
    "Principal": {
        "AWS": "IAM USER OR ROLE ARN"
    },
    "Action": [
        "kms:DescribeKey"
    ],
    "Resource": "*"
},
{
    "Effect": "Allow",
    "Sid": "Allow inference scheduler pass-in role to encrypt output data",
    "Principal": {
        "AWS": "INFERENCE SCHEDULER PASS-IN ROLE ARN"
    },
    "Action": [
        "kms:GenerateDataKey"
    ],
    "Resource": "*"
}

Identity and access management for Amazon Lookout for Equipment

AWS Identity and Access Management (IAM) is an AWS service that helps an administrator securely control access to AWS resources. IAM administrators control who can be authenticated (signed in) and authorized (have permissions) to use Amazon Lookout for Equipment resources. IAM is an AWS service that you can use with no additional charge.

Topics
• Audience
• Authenticating with identities
• Managing access using policies
• AWS Identity and Access Management for Amazon Lookout for Equipment
• Identity-based policy examples for Amazon Lookout for Equipment
• AWS managed policies for Amazon Lookout for Equipment
• Troubleshooting Amazon Lookout for Equipment identity and access

Audience

How you use AWS Identity and Access Management (IAM) differs, depending on the work that you do in Amazon Lookout for Equipment.

Service user – If you use the Amazon Lookout for Equipment service to do your job, then your administrator provides you with the credentials and permissions that you need. As you use more Amazon Lookout for Equipment features to do your work, you might need additional permissions. Understanding how access is managed can help you request the right permissions from your administrator. If you cannot access a feature in Amazon Lookout for Equipment, see Troubleshooting Amazon Lookout for Equipment identity and access.

Service administrator – If you're in charge of Amazon Lookout for Equipment resources at your company, you probably have full access to Amazon Lookout for Equipment. It's your job to determine which Amazon Lookout for Equipment features and resources your service users should access. You must then submit requests to your IAM administrator to change the permissions of your service users. Review the information on this page to understand the basic concepts of IAM. To learn more about how your company can use IAM with Amazon Lookout for Equipment, see AWS Identity and Access Management for Amazon Lookout for Equipment.

IAM administrator – If you're an IAM administrator, you might want to learn details about how you can write policies to manage access to Amazon Lookout for Equipment. To view example Amazon Lookout for Equipment identity-based policies that you can use in IAM, see Identity-based policy examples for Amazon Lookout for Equipment.
Authenticating with identities

Authentication is how you sign in to AWS using your identity credentials. You must be authenticated (signed in to AWS) as the AWS account root user, as an IAM user, or by assuming an IAM role.

You can sign in to AWS as a federated identity by using credentials provided through an identity source. AWS IAM Identity Center (IAM Identity Center) users, your company's single sign-on authentication, and your Google or Facebook credentials are examples of federated identities. When you sign in as a federated identity, your administrator previously set up identity federation using IAM roles. When you access AWS by using federation, you are indirectly assuming a role.

Depending on the type of user you are, you can sign in to the AWS Management Console or the AWS access portal. For more information about signing in to AWS, see How to sign in to your AWS account in the AWS Sign-In User Guide.
If you access AWS programmatically, AWS provides a software development kit (SDK) and a command line interface (CLI) to cryptographically sign your requests by using your credentials. If you don't use AWS tools, you must sign requests yourself. For more information about using the recommended method to sign requests yourself, see AWS Signature Version 4 for API requests in the IAM User Guide.

Regardless of the authentication method that you use, you might be required to provide additional security information. For example, AWS recommends that you use multi-factor authentication (MFA) to increase the security of your account. To learn more, see Multi-factor authentication in the AWS IAM Identity Center User Guide and AWS Multi-factor authentication in IAM in the IAM User Guide.

AWS account root user

When you create an AWS account, you begin with one sign-in identity that has complete access to all AWS services and resources in the account. This identity is called the AWS account root user and is accessed by signing in with the email address and password that you used to create the account. We strongly recommend that you don't use the root user for your everyday tasks. Safeguard your root user credentials and use them to perform the tasks that only the root user can perform. For the complete list of tasks that require you to sign in as the root user, see Tasks that require root user credentials in the IAM User Guide.

Federated identity

As a best practice, require human users, including users that require administrator access, to use federation with an identity provider to access AWS services by using temporary credentials.

A federated identity is a user from your enterprise user directory, a web identity provider, the AWS Directory Service, the Identity Center directory, or any user that accesses AWS services by using credentials provided through an identity source. When federated identities access AWS accounts, they assume roles, and the roles provide temporary credentials.

For centralized access management, we recommend that you use AWS IAM Identity Center. You can create users and groups in IAM Identity Center, or you can connect and synchronize to a set of users and groups in your own identity source for use across all your AWS accounts and applications. For information about IAM Identity Center, see What is IAM Identity Center? in the AWS IAM Identity Center User Guide.

IAM users and groups

An IAM user is an identity within your AWS account that has specific permissions for a single person or application. Where possible, we recommend relying on temporary credentials instead of creating IAM users who have long-term credentials such as passwords and access keys. However, if you have specific use cases that require long-term credentials with IAM users, we recommend that you rotate access keys. For more information, see Rotate access keys regularly for use cases that require long-term credentials in the IAM User Guide.

An IAM group is an identity that specifies a collection of IAM users. You can't sign in as a group. You can use groups to specify permissions for multiple users at a time. Groups make permissions easier to manage for large sets of users. For example, you could have a group named IAMAdmins and give that group permissions to administer IAM resources.

Users are different from roles.
A user is uniquely associated with one person or application, but a role is intended to be assumable by anyone who needs it. Users have permanent long-term credentials, but roles provide temporary credentials. To learn more, see Use cases for IAM users in the IAM User Guide.

IAM roles

An IAM role is an identity within your AWS account that has specific permissions. It is similar to an IAM user, but is not associated with a specific person. To temporarily assume an IAM role in the AWS Management Console, you can switch from a user to an IAM role (console). You can assume a role by calling an AWS CLI or AWS API operation or by using a custom URL. For more information about methods for using roles, see Methods to assume a role in the IAM User Guide.
IAM roles with temporary credentials are useful in the following situations:
• Federated user access – To assign permissions to a federated identity, you create a role and define permissions for the role. When a federated identity authenticates, the identity is associated with the role and is granted the permissions that are defined by the role. For information about roles for federation, see Create a role for a third-party identity provider (federation) in the IAM User Guide. If you use IAM Identity Center, you configure a permission set. To control what your identities can access after they authenticate, IAM Identity Center correlates the permission set to a role in IAM. For information about permissions sets, see Permission sets in the AWS IAM Identity Center User Guide.
• Temporary IAM user permissions – An IAM user or role can assume an IAM role to temporarily take on different permissions for a specific task.
• Cross-account access – You can use an IAM role to allow someone (a trusted principal) in a different account to access resources in your account. Roles are the primary way to grant cross-account access. However, with some AWS services, you can attach a policy directly to a resource (instead of using a role as a proxy). To learn the difference between roles and resource-based policies for cross-account access, see Cross account resource access in IAM in the IAM User Guide.
• Cross-service access – Some AWS services use features in other AWS services. For example, when you make a call in a service, it's common for that service to run applications in Amazon EC2 or store objects in Amazon S3. A service might do this using the calling principal's permissions, using a service role, or using a service-linked role.
• Forward access sessions (FAS) – When you use an IAM user or role to perform actions in AWS, you are considered a principal. When you use some services, you might perform an action that then initiates another action in a different service. FAS uses the permissions of the principal calling an AWS service, combined with the requesting AWS service to make requests to downstream services. FAS requests are only made when a service receives a request that requires interactions with other AWS services or resources to complete. In this case, you must have permissions to perform both actions. For policy details when making FAS requests, see Forward access sessions.
• Service role – A service role is an IAM role that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see Create a role to delegate permissions to an AWS service in the IAM User Guide.
• Service-linked role – A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit the permissions for service-linked roles.
• Applications running on Amazon EC2 – You can use an IAM role to manage temporary credentials for applications that are running on an EC2 instance and making AWS CLI or AWS API requests. This is preferable to storing access keys within the EC2 instance.
To assign an AWS role to an EC2 instance and make it available to all of its applications, you create an instance profile that is attached to the instance. An instance profile contains the role and enables programs that are running on the EC2 instance to get temporary credentials. For more information, see Use an IAM role to grant permissions to applications running on Amazon EC2 instances in the IAM User Guide.

Managing access using policies

You control access in AWS by creating policies and attaching them to AWS identities or resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when a principal (user, root user, or role session) makes a request. Permissions in the policies determine whether the request is allowed or denied.
Most policies are stored in AWS as JSON documents. For more information about the structure and contents of JSON policy documents, see Overview of JSON policies in the IAM User Guide.

Administrators can use AWS JSON policies to specify who has access to what. That is, which principal can perform actions on what resources, and under what conditions.

By default, users and roles have no permissions. To grant users permission to perform actions on the resources that they need, an IAM administrator can create IAM policies. The administrator can then add the IAM policies to roles, and users can assume the roles.

IAM policies define permissions for an action regardless of the method that you use to perform the operation. For example, suppose that you have a policy that allows the iam:GetRole action. A user with that policy can get role information from the AWS Management Console, the AWS CLI, or the AWS API.

Identity-based policies

Identity-based policies are JSON permissions policy documents that you can attach to an identity, such as an IAM user, group of users, or role. These policies control what actions users and roles can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see Define custom IAM permissions with customer managed policies in the IAM User Guide.

Identity-based policies can be further categorized as inline policies or managed policies. Inline policies are embedded directly into a single user, group, or role. Managed policies are standalone policies that you can attach to multiple users, groups, and roles in your AWS account. Managed policies include AWS managed policies and customer managed policies. To learn how to choose between a managed policy or an inline policy, see Choose between managed policies and inline policies in the IAM User Guide.

Resource-based policies

Resource-based policies are JSON policy documents that you attach to a resource. Examples of resource-based policies are IAM role trust policies and Amazon S3 bucket policies. In services that support resource-based policies, service administrators can use them to control access to a specific resource. For the resource where the policy is attached, the policy defines what actions a specified principal can perform on that resource and under what conditions. You must specify a principal in a resource-based policy. Principals can include accounts, users, roles, federated users, or AWS services.

Resource-based policies are inline policies that are located in that service. You can't use AWS managed policies from IAM in a resource-based policy.

Access control lists (ACLs)

Access control lists (ACLs) control which principals (account members, users, or roles) have permissions to access a resource. ACLs are similar to resource-based policies, although they do not use the JSON policy document format. Amazon S3, AWS WAF, and Amazon VPC are examples of services that support ACLs. To learn more about ACLs, see Access control list (ACL) overview in the Amazon Simple Storage Service Developer Guide.

Other policy types

AWS supports additional, less-common policy types. These policy types can set the maximum permissions granted to you by the more common policy types.
• Permissions boundaries – A permissions boundary is an advanced feature in which you set the maximum permissions that an identity-based policy can grant to an IAM entity (IAM user or role). You can set a permissions boundary for an entity. The resulting permissions are the intersection of an entity's identity-based policies and its permissions boundaries. Resource-based policies that specify the user or role in the Principal field are not limited by the permissions boundary. An explicit deny in any of these policies overrides the allow. For more information about permissions boundaries, see Permissions boundaries for IAM entities in the IAM User Guide.
• Service control policies (SCPs) – SCPs are JSON policies that specify the maximum permissions for an organization or organizational unit (OU) in AWS Organizations. AWS Organizations is a service for grouping and centrally managing multiple AWS accounts that your business owns. If you enable all features in an organization, then you can apply service control policies (SCPs) to any or all of your accounts. The SCP limits permissions for entities in member accounts, including each AWS account root user. For more information about Organizations and SCPs, see Service control policies in the AWS Organizations User Guide.
• Resource control policies (RCPs) – RCPs are JSON policies that you can use to set the maximum available permissions for resources in your accounts without updating the IAM policies attached to each resource that you own. The RCP limits permissions for resources in member accounts and can impact the effective permissions for identities, including the AWS account root user, regardless of whether they belong to your organization. For more information about Organizations and RCPs, including a list of AWS services that support RCPs, see Resource control policies (RCPs) in the AWS Organizations User Guide.
• Session policies – Session policies are advanced policies that you pass as a parameter when you programmatically create a temporary session for a role or federated user. The resulting session's permissions are the intersection of the user or role's identity-based policies and the session policies. Permissions can also come from a resource-based policy. An explicit deny in any of these policies overrides the allow. For more information, see Session policies in the IAM User Guide.

Multiple policy types

When multiple types of policies apply to a request, the resulting permissions are more complicated to understand. To learn how AWS determines whether to allow a request when multiple policy types are involved, see Policy evaluation logic in the IAM User Guide.

AWS Identity and Access Management for Amazon Lookout for Equipment

Before you use IAM to manage access to Amazon Lookout for Equipment, learn what IAM features are available to use with Amazon Lookout for Equipment. To get a high-level view of how Lookout for Equipment and other AWS services work with most IAM features, see AWS services that work with IAM in the IAM User Guide.

Lookout for Equipment identity-based policies

Identity-based policies are JSON permissions policy documents that you can attach to an identity, such as an IAM user, group of users, or role. These policies control what actions users and roles can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see Define custom IAM permissions with customer managed policies in the IAM User Guide.

With IAM identity-based policies, you can specify allowed or denied actions and resources as well as the conditions under which actions are allowed or denied. You can't specify the principal in an identity-based policy because it applies to the user or role to which it is attached. To learn about all of the elements that you can use in a JSON policy, see IAM JSON policy elements reference in the IAM User Guide.

Actions

Administrators can use AWS JSON policies to specify who has access to what. That is, which principal can perform actions on what resources, and under what conditions.

The Action element of a JSON policy describes the actions that you can use to allow or deny access in a policy. Policy actions usually have the same name as the associated AWS API operation. There are some exceptions, such as permission-only actions that don't have a matching API operation. There are also some operations that require multiple actions in a policy.
These additional actions are called dependent actions. Include actions in a policy to grant permissions to perform the associated operation.

Policy actions in Lookout for Equipment use the following prefix before the action: lookoutequipment:. For example, to grant someone permission to list Lookout for Equipment datasets with the ListDatasets API operation, you include the lookoutequipment:ListDatasets action in their policy. Policy statements must include either an Action or NotAction element. Lookout for Equipment defines its own set of actions that describe tasks that you can perform with this service.

To specify multiple actions in a single statement, separate them with commas as follows.

"Action": [
    "lookoutequipment:action1",
    "lookoutequipment:action2"
]
You can specify multiple actions using wildcards (*). For example, to specify all actions that begin with the word Describe, include the following action.

"Action": "lookoutequipment:Describe*"

Resources

Administrators can use AWS JSON policies to specify who has access to what. That is, which principal can perform actions on what resources, and under what conditions.

The Resource JSON policy element specifies the object or objects to which the action applies. Statements must include either a Resource or a NotResource element. As a best practice, specify a resource using its Amazon Resource Name (ARN). You can do this for actions that support a specific resource type, known as resource-level permissions.

For actions that don't support resource-level permissions, such as listing operations, use a wildcard (*) to indicate that the statement applies to all resources.

"Resource": "*"

The Lookout for Equipment dataset resource has the following Amazon Resource Name (ARN).

arn:${Partition}:lookoutequipment:${Region}:${Account}:dataset/${datasetName}/${GUID}

For example, to specify a dataset in your statement, use the full ARN:

"Resource": "arn:aws:lookoutequipment:${Region}:${Account}:dataset/${datasetName}/${GUID}"

Some Lookout for Equipment actions, such as those for creating resources, cannot be performed on a specific resource. In those cases, you must use the wildcard (*).

"Resource": "*"

To see a list of Lookout for Equipment resource types and their ARNs, see Resources Defined by Amazon Lookout for Equipment in the Service Authorization Reference. To learn with which actions you can specify the ARN of each resource, see Actions defined by Amazon Lookout for Equipment. For more information about the format of ARNs, see Amazon Resource Names (ARNs) and AWS Service Namespaces.

Condition keys

Administrators can use AWS JSON policies to specify who has access to what. That is, which principal can perform actions on what resources, and under what conditions.

The Condition element (or Condition block) lets you specify conditions in which a statement is in effect. The Condition element is optional. You can create conditional expressions that use condition operators, such as equals or less than, to match the condition in the policy with values in the request.

If you specify multiple Condition elements in a statement, or multiple keys in a single Condition element, AWS evaluates them using a logical AND operation. If you specify multiple values for a single condition key, AWS evaluates the condition using a logical OR operation. All of the conditions must be met before the statement's permissions are granted.

You can also use placeholder variables when you specify conditions. For example, you can grant an IAM user permission to access a resource only if it is tagged with their IAM user name. For more information, see IAM policy elements: variables and tags in the IAM User Guide.

AWS supports global condition keys and service-specific condition keys. To see all AWS global condition keys, see AWS global condition context keys in the IAM User Guide.

To view examples of Amazon Lookout for Equipment identity-based policies, see Identity-based policy examples for Amazon Lookout for Equipment.
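To tie the Action, Resource, and ARN format above together, the following boto3 sketch creates a customer managed policy that allows DescribeDataset on a single dataset. The policy content mirrors the examples in this section; the policy name, Region, account ID, and dataset name are placeholders, and this is only one way to create such a policy (you can also use the IAM console).

import json
import boto3

iam = boto3.client("iam")

# Identity-based policy scoped to one Lookout for Equipment dataset (placeholder ARN values).
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DescribeSingleDataset",
            "Effect": "Allow",
            "Action": "lookoutequipment:DescribeDataset",
            "Resource": "arn:aws:lookoutequipment:us-east-1:111122223333:dataset/pump-dataset/*",
        }
    ],
}

response = iam.create_policy(
    PolicyName="LookoutEquipmentDescribePumpDataset",
    PolicyDocument=json.dumps(policy_document),
)
print(response["Policy"]["Arn"])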
Access control lists (ACLs) in Lookout for Equipment

Access control lists (ACLs) control which principals (account members, users, or roles) have permissions to access a resource. ACLs are similar to resource-based policies, although they do not use the JSON policy document format.

Access control lists (ACLs) are lists of grantees that you can attach to resources. They grant accounts permissions to access the resource to which they are attached. You can attach ACLs to an Amazon S3 bucket resource.

With Amazon S3 access control lists (ACLs), you can manage access to bucket resources. Each bucket has an ACL attached to it as a subresource. It defines which AWS accounts, IAM users or groups of users, or IAM roles are granted access and the type of access. When a request is received for a resource, AWS checks the corresponding ACL to verify that the requester has the necessary access permissions.

When you create a bucket resource, Amazon S3 creates a default ACL that grants the resource owner full control over the resource. In the following example bucket ACL, John Doe is listed as the owner of the bucket and is granted full control over that bucket. An ACL can have up to 100 grantees.

<?xml version="1.0" encoding="UTF-8"?>
<AccessControlPolicy xmlns="http://lookoutequipment.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>c1daexampleaaf850ea79cf0430f33d72579fd1611c97f7ded193374c0b163b6</ID>
    <DisplayName>john-doe</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>c1daexampleaaf850ea79cf0430f33d72579fd1611c97f7ded193374c0b163b6</ID>
        <DisplayName>john-doe</DisplayName>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>

The ID field in the ACL is the AWS account canonical user ID. To learn how to view this ID in an account that you own, see Finding an AWS account canonical user ID.
Attribute-based access control (ABAC) with Lookout for Equipment

Attribute-based access control (ABAC) is an authorization strategy that defines permissions based on attributes. In AWS, these attributes are called tags. You can attach tags to IAM entities (users or roles) and to many AWS resources. Tagging entities and resources is the first step of ABAC. Then you design ABAC policies to allow operations when the principal's tag matches the tag on the resource that they are trying to access.

ABAC is helpful in environments that are growing rapidly and helps with situations where policy management becomes cumbersome.

To control access based on tags, you provide tag information in the condition element of a policy using the aws:ResourceTag/key-name, aws:RequestTag/key-name, or aws:TagKeys condition keys. If a service supports all three condition keys for every resource type, then the value is Yes for the service. If a service supports all three condition keys for only some resource types, then the value is Partial.

For more information about ABAC, see Define permissions with ABAC authorization in the IAM User Guide. To view a tutorial with steps for setting up ABAC, see Use attribute-based access control (ABAC) in the IAM User Guide.

Using temporary credentials with Lookout for Equipment

Some AWS services don't work when you sign in using temporary credentials. For additional information, including which AWS services work with temporary credentials, see AWS services that work with IAM in the IAM User Guide.

You are using temporary credentials if you sign in to the AWS Management Console using any method except a user name and password. For example, when you access AWS using your company's single sign-on (SSO) link, that process automatically creates temporary credentials. You also automatically create temporary credentials when you sign in to the console as a user and then switch roles. For more information about switching roles, see Switch from a user to an IAM role (console) in the IAM User Guide.

You can manually create temporary credentials using the AWS CLI or AWS API. You can then use those temporary credentials to access AWS. AWS recommends that you dynamically generate temporary credentials instead of using long-term access keys. For more information, see Temporary security credentials in IAM.
Cross-service principal permissions for Lookout for Equipment

When you use an IAM user or role to perform actions in AWS, you are considered a principal. When you use some services, you might perform an action that then initiates another action in a different service. FAS uses the permissions of the principal calling an AWS service, combined with the requesting AWS service to make requests to downstream services. FAS requests are only made when a service receives a request that requires interactions with other AWS services or resources to complete. In this case, you must have permissions to perform both actions. For policy details when making FAS requests, see Forward access sessions.

Service roles for Lookout for Equipment

A service role is an IAM role that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see Create a role to delegate permissions to an AWS service in the IAM User Guide.

Warning
Changing the permissions for a service role might break Lookout for Equipment functionality. Edit service roles only when Lookout for Equipment provides guidance to do so.

Choosing an IAM role in Lookout for Equipment

When you create a resource in Lookout for Equipment, you must choose a role to allow Lookout for Equipment to access Amazon S3 on your behalf. If you have previously created a service role or service-linked role, then Lookout for Equipment provides you with a list of roles to choose from. It's important to choose a role that allows access to read and write to your Amazon S3 bucket.
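For example, when you start a data ingestion job programmatically, you pass the data access role that grants Lookout for Equipment the Amazon S3 read and write access described above. The following boto3 sketch is illustrative only; the bucket, prefix, dataset name, and role ARN are placeholders, and the parameter shapes should be confirmed against the StartDataIngestionJob API reference.

import boto3

lookout = boto3.client("lookoutequipment")

response = lookout.start_data_ingestion_job(
    DatasetName="pump-dataset",
    IngestionInputConfiguration={
        "S3InputConfiguration": {
            "Bucket": "my-sensor-data-bucket",   # placeholder bucket name
            "Prefix": "pump/ingestion/",
        }
    },
    # Role that Lookout for Equipment assumes to read your S3 data on your behalf.
    RoleArn="arn:aws:iam::111122223333:role/LookoutEquipmentDataAccessRole",
)
print(response["JobId"], response["Status"])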
Identity-based policy examples for Amazon Lookout for Equipment

By default, users and roles don't have permission to create or modify Amazon Lookout for Equipment resources. They also can't perform tasks by using the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS API. To grant users permission to perform actions on the resources that they need, an IAM administrator can create IAM policies. The administrator can then add the IAM policies to roles, and users can assume the roles.

To learn how to create an IAM identity-based policy by using these example JSON policy documents, see Create IAM policies (console) in the IAM User Guide.

For details about actions and resource types defined by Lookout for Equipment, including the format of the ARNs for each of the resource types, see Actions, resources, and condition keys for Amazon Lookout for Equipment in the Service Authorization Reference.

Topics
• Policy best practices
• Using the Lookout for Equipment console
• Allow users to view their own permissions
• Accessing a single Lookout for Equipment dataset
• Publishing information about ingestion validation to Amazon CloudWatch Logs
• Tag-based policy examples

Policy best practices

Identity-based policies determine whether someone can create, access, or delete Amazon Lookout for Equipment resources in your account. These actions can incur costs for your AWS account. When you create or edit identity-based policies, follow these guidelines and recommendations:
• Get started with AWS managed policies and move toward least-privilege permissions – To get started granting permissions to your users and workloads, use the AWS managed policies that grant permissions for many common use cases. They are available in your AWS account. We recommend that you reduce permissions further by defining AWS customer managed policies that are specific to your use cases. For more information, see AWS managed policies or AWS managed policies for job functions in the IAM User Guide.
• Apply least-privilege permissions – When you set permissions with IAM policies, grant only the permissions required to perform a task. You do this by defining the actions that can be taken on specific resources under specific conditions, also known as least-privilege permissions. For more information about using IAM to apply permissions, see Policies and permissions in IAM in the IAM User Guide.
• Use conditions in IAM policies to further restrict access – You can add a condition to your policies to limit access to actions and resources. For example, you can write a policy condition to specify that all requests must be sent using SSL. You can also use conditions to grant access to service actions if they are used through a specific AWS service, such as AWS CloudFormation. For more information, see IAM JSON policy elements: Condition in the IAM User Guide.
• Use IAM Access Analyzer to validate your IAM policies to ensure secure and functional permissions – IAM Access Analyzer validates new and existing policies so that the policies adhere to the IAM policy language (JSON) and IAM best practices.
IAM Access Analyzer provides more than 100 policy checks and actionable recommendations to help you author secure and functional policies. For more information, see Validate policies with IAM Access Analyzer in the IAM User Guide.
• Require multi-factor authentication (MFA) – If you have a scenario that requires IAM users or a root user in your AWS account, turn on MFA for additional security. To require MFA when API operations are called, add MFA conditions to your policies. For more information, see Secure API access with MFA in the IAM User Guide.

For more information about best practices in IAM, see Security best practices in IAM in the IAM User Guide.

Using the Lookout for Equipment console

To access the Amazon Lookout for Equipment console, you must have a minimum set of permissions. These permissions must allow you to list and view details about the Amazon Lookout for Equipment resources in your AWS account.
If you create an identity-based policy that is more restrictive than the minimum required permissions, the console won't function as intended for entities (users or roles) with that policy.

You don't need to allow minimum console permissions for users that are making calls only to the AWS CLI or the AWS API. Instead, allow access to only the actions that match the API operation that they're trying to perform.

To ensure that users and roles can still use the Lookout for Equipment console, also attach the Lookout for Equipment ConsoleAccess or ReadOnly AWS managed policy to the entities. For more information, see Adding permissions to a user in the IAM User Guide.

Allow users to view their own permissions

This example shows how you might create a policy that allows IAM users to view the inline and managed policies that are attached to their user identity. This policy includes permissions to complete this action on the console or programmatically using the AWS CLI or AWS API.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ViewOwnUserInfo",
            "Effect": "Allow",
            "Action": [
                "iam:GetUserPolicy",
                "iam:ListGroupsForUser",
                "iam:ListAttachedUserPolicies",
                "iam:ListUserPolicies",
                "iam:GetUser"
            ],
            "Resource": ["arn:aws:iam::*:user/${aws:username}"]
        },
        {
            "Sid": "NavigateInConsole",
            "Effect": "Allow",
            "Action": [
                "iam:GetGroupPolicy",
                "iam:GetPolicyVersion",
                "iam:GetPolicy",
                "iam:ListAttachedGroupPolicies",
                "iam:ListGroupPolicies",
                "iam:ListPolicyVersions",
                "iam:ListPolicies",
                "iam:ListUsers"
            ],
            "Resource": "*"
        }
    ]
}

Accessing a single Lookout for Equipment dataset

In this example, you grant an IAM user in your AWS account access to a Lookout for Equipment dataset.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GetAccessOfDataset",
            "Effect": "Allow",
            "Action": [
                "lookoutequipment:DescribeDataset"
            ],
            "Resource": "arn:aws:lookoutequipment:${Region}:${Account}:dataset/${datasetName}*"
        }
    ]
}

Publishing information about ingestion validation to Amazon CloudWatch Logs

To enable Amazon CloudWatch Logs, do one of the following:
• When creating a new role, check the Enable CloudWatch Logs box on the console. For more information, see Logging your ingestion data.
• Attach the following permissions to the dataAccessRoleArn submitted during ingestion:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:CreateLogGroup",
                "logs:PutLogEvents",
                "logs:DeleteLogStream"
            ],
            "Resource": [
                "arn:aws:logs:{{region}}:{{account-id}}:log-group:/aws/lookoutequipment/ingestion:*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups"
            ],
            "Resource": [
                "arn:aws:logs:{{region}}:{{account-id}}:log-group:*"
            ]
        }
    ]
}

Tag-based policy examples

Tag-based policies are JSON policy documents that specify the actions that a principal can perform on tagged resources.

Example: Use a tag to access a resource

This example policy grants an IAM user or role in your AWS account permission to use the CreateDataset operation with any resource tagged with the key machine and the value myMachine1.
{ "Version": "2012-10-17", "Statement": [ {"Effect": "Allow", "Action": [ "lookoutequipment:CreateDataset", "lookoutequipment:TagResource" ], "Resource": "*", "Condition": { "StringEquals": {"aws:RequestTag/machine": "myMachine1" } } } ] } Example: Use a tag to enable Lookout for Equipment operations This example policy grants an IAM user or role in your AWS account permission to use any Lookout for Equipment operation except the TagResource operation with any resource tagged with the key machine and the value myMachine1. { "Version": "2012-10-17", "Statement": [ {"Effect": "Allow", "Action": "lookoutequipment:*", "Resource": "*" }, {"Effect": "Deny", Identity-based policy examples 120 Amazon Lookout for Equipment "Action": [ "lookoutequipment:TagResource" ], "Resource": "*", "Condition": { "StringEquals": {"aws:ResourceTag/machine": "myMachine1" User Guide } } } ] } Example: Use a tag to restrict access to an operation This example policy restricts access for an IAM user or role in your AWS account to use the CreateDataset operation unless the user provides the machine tag and it has the allowed values myMachine1 and myMachine2. { "Version": "2012-10-17", "Statement": [ {"Effect": "Allow", "Action": "lookoutequipment:TagResource", "Resource": "*" }, {"Effect": "Deny", "Action": "lookoutequipment:CreateDataset", "Resource": "*", "Condition": { "Null": { "aws:RequestTag/machine": "true" } } }, {"Effect": "Deny", "Action": "lookoutequipment:CreateDataset", "Resource": "*", "Condition": { "ForAnyValue:StringNotEquals": {"aws:RequestTag/machine": [ "myMachine1", "myMachine2" ] } Identity-based policy examples 121 Amazon Lookout for Equipment User Guide } } ] } AWS managed policies for Amazon Lookout for Equipment To add permissions to users, groups, and roles, it is easier to use AWS managed policies than to write policies yourself. It takes time and expertise to create IAM customer managed policies that provide your team with only the permissions they need. To get started quickly, you can use our AWS managed policies. These policies cover common use cases and are available in your AWS account. For more information about AWS managed policies, see AWS managed policies in the IAM User Guide. AWS services maintain and update AWS managed policies. You can't change the permissions in AWS managed policies. Services occasionally add additional permissions to an AWS managed policy to support new features. This type of update affects all identities (users, groups, and roles) where the policy is attached. Services are most likely to update an AWS managed policy
AWS managed policies for Amazon Lookout for Equipment

To add permissions to users, groups, and roles, it is easier to use AWS managed policies than to write policies yourself. It takes time and expertise to create IAM customer managed policies that provide your team with only the permissions they need. To get started quickly, you can use our AWS managed policies. These policies cover common use cases and are available in your AWS account. For more information about AWS managed policies, see AWS managed policies in the IAM User Guide.

AWS services maintain and update AWS managed policies. You can't change the permissions in AWS managed policies. Services occasionally add additional permissions to an AWS managed policy to support new features. This type of update affects all identities (users, groups, and roles) where the policy is attached. Services are most likely to update an AWS managed policy when a new feature is launched or when new operations become available. Services do not remove permissions from an AWS managed policy, so policy updates won't break your existing permissions.

Additionally, AWS supports managed policies for job functions that span multiple services. For example, the ReadOnlyAccess AWS managed policy provides read-only access to all AWS services and resources. When a service launches a new feature, AWS adds read-only permissions for new operations and resources. For a list and descriptions of job function policies, see AWS managed policies for job functions in the IAM User Guide.

AWS managed policy: AmazonLookoutEquipmentReadOnlyAccess

You can attach AmazonLookoutEquipmentReadOnlyAccess to your IAM entities. Lookout for Equipment also attaches this policy to a service role that allows Lookout for Equipment to perform actions on your behalf.

This policy grants read-only permissions that allow access to all Lookout for Equipment resources.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lookoutequipment:Describe*",
                "lookoutequipment:List*"
            ],
            "Resource": "*"
        }
    ]
}

AWS managed policy: AmazonLookoutEquipmentFullAccess

You can attach AmazonLookoutEquipmentFullAccess to your IAM entities. Lookout for Equipment also attaches this policy to a service role that allows Lookout for Equipment to perform actions on your behalf.

This policy grants administrative permissions that allow access to all Lookout for Equipment resources and operations. This policy enables you to use any IAM role or AWS KMS key with Lookout for Equipment.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lookoutequipment:*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": [
                        "lookoutequipment.amazonaws.com"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:CreateGrant"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "kms:ViaService": "lookoutequipment.*.amazonaws.com"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:DescribeKey",
                "kms:ListAliases"
            ],
            "Resource": "*"
        }
    ]
}
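If you work from the AWS CLI instead of the console, you can attach either managed policy to a role by its ARN. The following is a minimal sketch that assumes a role named LookoutEquipmentAnalysts already exists in your account; the policy ARN follows the standard form for AWS managed policies.

# Attach the read-only managed policy to an existing role
aws iam attach-role-policy \
    --role-name LookoutEquipmentAnalysts \
    --policy-arn arn:aws:iam::aws:policy/AmazonLookoutEquipmentReadOnlyAccess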
Lookout for Equipment updates to AWS managed policies

View details about updates to AWS managed policies for Lookout for Equipment since this service began tracking these changes. For automatic alerts about changes to this page, subscribe to the RSS feed on the Lookout for Equipment Document history page.

Change: AmazonLookoutEquipmentReadOnlyAccess
Description: Lookout for Equipment modified the policy to allow all Describe actions and all List actions.
Date: November 4, 2022

Change: AmazonLookoutEquipmentReadOnlyAccess – Update to an existing policy
Description: Lookout for Equipment modified the policy so as to allow all list and describe APIs.
Date: October 26, 2022

Change: AmazonLookoutEquipmentReadOnlyAccess – Update to an existing policy
Description: Lookout for Equipment modified the policy so as to enable you to list sensor statistics.
Date: June 22, 2022

Change: AmazonLookoutEquipmentFullAccess – Update to grant retirement policy
Description: Lookout for Equipment removed RetireGrant from the managed policy because the service will be using the retiring grant principal to retire the grants. You don't need to provide the retire grant permissions in the managed policy.
Date: November 22, 2021

Change: AmazonLookoutEquipmentFullAccess – Update to an existing policy
Description: Lookout for Equipment modified the policy so as to apply the kms:ViaService condition only to DescribeKey and CreateGrant.
Date: October 29, 2021

Change: AmazonLookoutEquipmentReadOnlyAccess – New policy
Description: Lookout for Equipment added a new policy to allow read-only access for all Lookout for Equipment resources.
Date: May 05, 2021

Change: AmazonLookoutEquipmentFullAccess – Update to an existing policy
Description: Lookout for Equipment added permissions to describe AWS KMS managed encryption keys. You must use these permissions to use the Lookout for Equipment console to display information about AWS KMS keys across AWS accounts.
Date: May 05, 2021

Change: Lookout for Equipment started tracking changes
Description: Lookout for Equipment started tracking changes for its AWS managed policies.
Date: April 08, 2021

Troubleshooting Amazon Lookout for Equipment identity and access

Use the following information to help you diagnose and fix common issues that you might encounter when working with Amazon Lookout for Equipment and IAM.

Topics
• I am not authorized to perform an action in Lookout for Equipment
• I am not authorized to perform iam:PassRole
• I want to allow people outside of my AWS account to access my Lookout for Equipment resources