id (string, 8–78 chars) | source (743 classes) | chunk_id (int64, 1–5.05k) | text (string, 593–49.7k chars)
---|---|---|---|
amazon-quicksight-dg-033 | amazon-quicksight-dg.pdf | 33 | following VPC endpoints for the AWS Management Console.
• com.amazonaws.region.console
• com.amazonaws.region.signin
For more information about VPC endpoints for the AWS Management Console, see Required VPC endpoints and DNS configuration.
Creating a VPC endpoint policy for QuickSight
You can attach an endpoint policy to your VPC endpoint to restrict usage of the endpoint to specific QuickSight accounts or to accounts under specific AWS organizations. The AWS account IDs that are allow-listed or deny-listed are the AWS accounts in which the QuickSight account is created. In most cases, this is the same account ID in which the VPC endpoint is created. The policy specifies the following information:
• The principal that can perform actions.
• The actions that can be performed.
• The resources on which actions can be performed.
For more information, see Controlling access to services with VPC endpoints in the Amazon VPC User Guide.
Example: VPC endpoint policy for QuickSight actions
The following is an example of an endpoint policy for QuickSight. When attached to an endpoint, this policy grants access to all QuickSight actions for all principals on all resources.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalAccount": ["012345678901"]
        }
      }
    }
  ]
}
Policies for the QuickSight website must have the values of the Principal, Action, and Resource fields set to "*". A condition may be specified only against the aws:PrincipalAccount or the aws:OrgId attributes. These conditions are evaluated on all requests to the QuickSight website after the user signs in.
Restricting access to the QuickSight website
You can choose to restrict access to your QuickSight account to allow only traffic from an approved VPC endpoint. This prevents general internet users from accessing your QuickSight account. Before you can make this change, make sure that you're an IAM user with the UpdateIpRestriction permission. For more information on the permissions that are required to restrict access with a VPC endpoint, see Turning on IP and VPC endpoint restrictions in QuickSight.
Use the following procedure to restrict access with a VPC endpoint in QuickSight.
1. Open the QuickSight console.
2. Choose Manage QuickSight, and then choose Security & permissions.
3. On the Security & permissions page that opens, navigate to IP and VPC endpoint restrictions and choose Manage.
4. Turn on the Enforce restrictions switch to turn on your VPC endpoint restrictions.
You can also perform this action with the QuickSight APIs. The following example turns on the enforcement of a VPC endpoint restriction.
aws quicksight update-ip-restriction \
  --aws-account-id AWSACCOUNTID \
  --region REGION \
  --enabled \
  --vpc-endpoint-id-restriction-rule-map vpce-001122def=MyVpcEndpointAllowed
Domains accessed by QuickSight
The list below shows all URLs that are accessed by QuickSight from your browser, the reason each domain is used, and whether the domain has VPC endpoint support. Make sure that you have established connectivity for all of the domains listed.
• region.quicksight.aws.amazon.com: the bulk of traffic to QuickSight flows through this domain. VPC endpoint support: Yes.
• signin.aws.amazon.com, region.signin.aws: to sign in to the AWS console if the account uses IAM identities. VPC endpoint support: Yes.
• signin.aws.amazon.com, region.signin.aws: to sign in to the AWS console if the account uses QuickSight native users for identity management. VPC endpoint support: No.
• *.cloudfront.net: to download static assets, for example CSS or JS. VPC endpoint support: No.
• *.s3.region.amazonaws.com: to download reports and thumbnails. VPC endpoint support: Yes.
• *.execute-api.region.amazonaws.com: to access client-side metrics. VPC endpoint support: No.
Key management operations
Use QuickSight key management APIs to list and update customer managed keys (CMKs) that are registered to a QuickSight account. For more information about key management in QuickSight, see Key management in the QuickSight User Guide.
Permissions
Before you begin, create or update an IAM role that contains a user permission to access and use all CMKs that are registered to your QuickSight account. At minimum, the IAM policy must contain the kms:CreateGrant, quicksight:UpdateKeyRegistration, and quicksight:DescribeKeyRegistration permissions. To see a list of IAM policy examples that can be used to grant different degrees of access to the CMKs in an account, see IAM identity-based policies for Amazon QuickSight: using the admin key management console.
CMK API Examples
The following examples use the AWS CLI; a Python (boto3) sketch of the same calls appears after this row.
The example below lists all customer managed keys that are registered to a QuickSight account.
aws quicksight describe-key-registration \
  --aws-account-id AWSACCOUNTID \
  --region REGION
The example below updates a CMK registration and designates a default key.
aws quicksight update-key-registration \
  --aws-account-id AWSACCOUNTID \
  --key-registration '[{"KeyArn": "KEYARN", "DefaultKey": true}]' \
  --region REGION
The example below updates the registration of two CMKs in a QuickSight account and designates one of the two updated keys as the new default key.
aws quicksight update-key-registration \
  --aws-account-id AWSACCOUNTID \
  --key-registration '[{"KeyArn": "KEYARN", "DefaultKey": true}, {"KeyArn": "KEYARN", "DefaultKey": false}]' \
  --region REGION
The example below clears all CMK registrations from a QuickSight account. Instead, QuickSight uses AWS |
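The CLI key-registration examples above map directly onto the AWS SDKs. The following is a minimal Python (boto3) sketch of the same calls; it assumes a recent boto3 release that includes the DescribeKeyRegistration and UpdateKeyRegistration operations and their KeyRegistration response field, and the account ID and key ARN are placeholders.

```python
# Minimal boto3 sketch of the key-registration calls shown above.
# Assumes a recent boto3 that includes DescribeKeyRegistration/UpdateKeyRegistration,
# and that the caller has kms:CreateGrant, quicksight:DescribeKeyRegistration,
# and quicksight:UpdateKeyRegistration permissions. IDs/ARNs are placeholders.
import boto3

ACCOUNT_ID = "111122223333"
KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"

qs = boto3.client("quicksight", region_name="us-east-1")

# List the CMKs currently registered to the QuickSight account.
current = qs.describe_key_registration(AwsAccountId=ACCOUNT_ID)
for key in current.get("KeyRegistration", []):
    print(key["KeyArn"], "default:", key.get("DefaultKey", False))

# Register a CMK and make it the default key.
qs.update_key_registration(
    AwsAccountId=ACCOUNT_ID,
    KeyRegistration=[{"KeyArn": KEY_ARN, "DefaultKey": True}],
)

# Clear all CMK registrations so QuickSight falls back to AWS owned keys.
qs.update_key_registration(AwsAccountId=ACCOUNT_ID, KeyRegistration=[])
```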
amazon-quicksight-dg-034 | amazon-quicksight-dg.pdf | 34 | are registered to a QuickSight account. aws quicksight describe-key-registration \ --aws-account-id AWSACCOUNTID \ --region REGION The example below updates a CMK registration and designates a default key. aws quicksight update-key-registration \ --aws-account-id AWSACCOUNTID \ --key-registration '[{"KeyArn": "KEYARN", "DefaultKey": true}]' --region REGION The example below updates the registration of two CMKs in a QuickSight account and designates one of the two updated keys as the new default key. aws quicksight update-key-registration \ --aws-account-id AWSACCOUNTID \ --key-registration '[{"KeyArn": "KEYARN", "DefaultKey": true}, {"KeyArn": "KEYARN", "DefaultKey": false}]' --region REGION The example below clears all CMK registrations from a QuickSight account. Instead, QuickSight uses AWS owned keys to encrypt your resources. aws quicksight update-key-registration \ --aws-account-id AWSACCOUNTID \ --key-registration '[]' --region REGION Examples 114 Amazon QuickSight Developer Guide Namespace operations An Amazon QuickSight namespace is a logical container that you can use to organize clients, subsidiaries, teams, and so on. By using a namespace, you can isolate the Amazon QuickSight users and groups that are registered for that namespace. Users that access the namespace can share assets only with other users or groups in the same namespace. They can't see users and groups in other namespaces. For more information about namespaces, see Supporting multitenancy with isolated namespaces in the QuickSight User guide. To implement namespaces, you use the following QuickSight API operations. Topics • CreateNamespace • DeleteNamespace • DescribeNamespace • ListNamespaces CreateNamespace Use the CreateNamespace API operation to create a new namespace for you to use with Amazon QuickSight. You can create a namespace after your AWS account is subscribed to Amazon QuickSight. The namespace must be unique within the AWS account. By default, there is a limit of 100 namespaces per AWS account. To increase your limit, create a ticket with AWS Support. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight create-namespace --aws-account-id AWSACCOUNTID --namespace NAMESPACE \ --identity-store QUICKSIGHT You can also make this command using a CLI skeleton file with the following command. For more information about CLI skeleton files, see Use CLI skeleton files. aws quicksight create-namespace Namespace operations 115 Amazon QuickSight Developer Guide --cli-input-json file://createnamespace.json For more information about the CreateNamespace API operation, see CreateNamespace in the Amazon QuickSight API Reference. DeleteNamespace Use the DeleteNamespace API operation to delete a namespace and the users and groups that are associated with the namespace. This is an asynchronous process. Assets including dashboards, analyses, datasets, and data sources are not deleted. To delete these assets, you use the relevant API operations for each asset, such as DeleteDashboard or DeleteDataSet. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight delete-namespace --aws-account-id AWSACCOUNTID --namespace NAMESPACE Find a namespace by running the ListNamespaces operation. For more information about the DeleteNamespace API operation, see DeleteNamespace in the Amazon QuickSight API Reference. DescribeNamespace Use the DescribeNamespace API operation to describe a specified namespace. 
Following is an example AWS CLI command for this operation. AWS CLI aws quicksight describe-namespace --aws-account-id AWSACCOUNTID --namespace NAMESPACE For more information about the DescribeNamespace API operation, see DescribeNamespace in the Amazon QuickSight API Reference. DeleteNamespace 116 Amazon QuickSight ListNamespaces Developer Guide Use the ListNamespaces API operation to list namespaces for a specified AWS account. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight list-namespaces --aws-account-id AWSACCOUNTID --page-size 10 --max-items 100 For more information about the ListNamespaces API operation, see ListNamespaces in the Amazon QuickSight API Reference. Tag operations Tags can help you categorize and allocate costs incurred by your QuickSight resources. For more information about tags, see User-defined cost allocation tags. You can visualize costs of tagged resources that have consumption-based pricing in AWS cost and usage reports. For more information on cost and usage reports, see What are AWS Cost and Usage Reports. You can also use tags to scope user permissions by granting a user permission to access or change only resources with certain tag values. You can use the TagResource API operation with a resource that already has tags. If you specify a new tag key for the resource, this tag is appended to the list of tags associated with the resource. If you specify a tag key that is already associated with the resource, the new tag value that you specify replaces the previous value for that tag. You can tag a new QuickSight managed user or IAM user at creation with a RegisterUser API call. You can associate as many as 50 tags with a resource. Amazon QuickSight supports tagging for a data sets, data sources, dashboards, users, and templates. Tagging for QuickSight works in a similar way to tagging for other AWS services. QuickSight doesn't currently support the tag editor for AWS Resource Groups. Tags that are used for Admin Pro, Author Pro, or Reader Pro users can't be used as cost allocation |
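The namespace operations described in this row are also available through the AWS SDKs. Below is a minimal Python (boto3) sketch of the create, describe, list, and delete namespace calls; the account ID and namespace name are placeholders, and it assumes the account is already subscribed to QuickSight with the required namespace permissions.

```python
# Minimal boto3 sketch of the namespace lifecycle described above.
# The account ID and namespace name are placeholders.
import boto3

ACCOUNT_ID = "111122223333"
qs = boto3.client("quicksight", region_name="us-east-1")

# Create a namespace backed by the QuickSight identity store.
# CreateNamespace is asynchronous; the response reports a creation status.
create_resp = qs.create_namespace(
    AwsAccountId=ACCOUNT_ID,
    Namespace="sales-team",
    IdentityStore="QUICKSIGHT",
)
print("creation status:", create_resp.get("CreationStatus"))

# Describe a single namespace.
info = qs.describe_namespace(AwsAccountId=ACCOUNT_ID, Namespace="sales-team")
print(info["Namespace"]["Arn"])

# List namespaces, following pagination manually.
token = None
while True:
    kwargs = {"AwsAccountId": ACCOUNT_ID}
    if token:
        kwargs["NextToken"] = token
    page = qs.list_namespaces(**kwargs)
    for ns in page.get("Namespaces", []):
        print(ns["Name"], ns.get("CreationStatus"))
    token = page.get("NextToken")
    if not token:
        break

# Delete the namespace (this also deletes the users and groups in it).
qs.delete_namespace(AwsAccountId=ACCOUNT_ID, Namespace="sales-team")
```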
amazon-quicksight-dg-035 | amazon-quicksight-dg.pdf | 35 | new tag value that you specify replaces the previous value for that tag. You can tag a new QuickSight managed user or IAM user at creation with a RegisterUser API call. You can associate as many as 50 tags with a resource. Amazon QuickSight supports tagging for a data sets, data sources, dashboards, users, and templates. Tagging for QuickSight works in a similar way to tagging for other AWS services. QuickSight doesn't currently support the tag editor for AWS Resource Groups. Tags that are used for Admin Pro, Author Pro, or Reader Pro users can't be used as cost allocation tags. For more information about the Tag API operations, see the following topics. ListNamespaces 117 Developer Guide Amazon QuickSight Topics • ListTagsForResource • TagResource • UntagResource • RegisterUser ListTagsForResource Use the ListTagsForResource API operation to list tags assigned to a resource. Following is an example AWS CLI command for this operation. To find a resource’s Amazon Resource Name (ARN), use the List operation for the resource. For example, ListDashboards. AWS CLI aws quicksight list-tags-for-resource --resource-arn 444455556666 For more information about the ListTagsForResource API operation, see ListTagsForResource in the Amazon QuickSight API Reference. TagResource Use the TagResource API operation to assign one or more tags (key-value pairs) to the specified Amazon QuickSight resource. Following is an example AWS CLI command for this operation. To find a resource's Amazon Resource Name (ARN), use the List operation for the resource, for example ListDashboards. AWS CLI aws quicksight tag-resource --resource-arn 777788889999 --tags Key=NewDashboard,Value=True ListTagsForResource 118 Amazon QuickSight Developer Guide For more information about the TagResource API operation, see TagResource in the Amazon QuickSight API Reference. UntagResource Use the UntagResource API operation to remove a tag from a resource. Before you do so, you can call the ListTagsForResource API operation to list the tags assigned to a resource. Following is an example AWS CLI command for this operation. To find a resource’s Amazon Resource Name (ARN), use the List operation for the resource, for example ListDashboards. AWS CLI aws quicksight untag-resource --resource-arn 777788889999 --tag-keys NewDashboard,ExampleDashboard For more information about the UntagResource API operation, see UntagResource in the Amazon QuickSight API Reference. Template alias operations A template alias is a reference to a version of a template. For example, suppose that you create the template alias exampleAlias for version 1 of the template exampleTemp. You can use the template alias exampleAlias to reference version 1 of template exampleTemp in a DescribeTemplate API operation, as in the following example. aws quicksight describe-template --aws-account-id AWSACCOUNTID --template-id exampleTempID --alias-name exampleAlias With template alias API operations, you can perform actions on QuickSight template aliases. For more information, see the following API operations. Topics • CreateTemplateAlias UntagResource 119 Amazon QuickSight • DeleteTemplateAlias • DescribeTemplateAlias • ListTemplateAliases • UpdateTemplateAlias CreateTemplateAlias Developer Guide Use the CreateTemplateAlias operation to create a template alias for a template. To use this operation, you need the ID of the template that you want to create an alias for. 
You can use the ListTemplates operation to list all templates and their corresponding template IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight create-template-alias --aws-account-id AWSACCOUNTID --template-id TEMPLATEID --alias-name ALIAS --template-version-number VERSION You can also make this command using a CLI skeleton file with the following command. For more information about CLI skeleton files, see Use CLI skeleton files. aws quicksight create-template-alias --cli-input-json file://createtemplatealias.json For more information about the CreateTemplateAlias operation, see CreateTemplateAlias in the QuickSight API Reference. DeleteTemplateAlias Use the DeleteTemplateAlias operation to delete the item that the specified template alias points to. If you provide a specific alias, you delete the version of the template that the alias points to. To use this operation, you need the ID of the template that is using the alias you want to delete. You can use the ListTemplates operation to list all templates and their corresponding template IDs. CreateTemplateAlias 120 Amazon QuickSight Developer Guide Following is an example AWS CLI command for this operation. AWS CLI aws quicksight delete-template-alias --aws-account-id AWSACCOUNTID --template-id TEMPLATEID --alias-name ALIAS For more information about the DeleteTemplateAlias operation, see DeleteTemplateAlias in the QuickSight API Reference. DescribeTemplateAlias Use the DescribeTemplateAlias operation to describe the template alias for a template. To use this operation, you need the ID of the template that is using the alias that you want to describe. You can use the ListTemplates operation to list all templates and their corresponding template IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight describe-template-alias --aws-account-id AWSACCOUNTID --template-id 222244446666 --alias-name ALIAS The parameter value for alias-name can be $LATEST. For more information about the DescribeTemplateAlias operation, see DescribeTemplateAlias in the QuickSight API Reference. ListTemplateAliases Use the ListTemplateAliases operation to list all the aliases of a template. To use this operation, you need the ID of the |
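As a companion to the CLI tag commands in this row, here is a minimal Python (boto3) sketch that tags, lists, and untags a dashboard. The account ID and dashboard ARN are placeholders; in practice you would look up the real ARN with a List operation such as ListDashboards.

```python
# Minimal boto3 sketch of TagResource / ListTagsForResource / UntagResource.
# The dashboard ARN below is a placeholder; look it up with list_dashboards().
import boto3

qs = boto3.client("quicksight", region_name="us-east-1")
dashboard_arn = (
    "arn:aws:quicksight:us-east-1:111122223333:dashboard/EXAMPLE-DASHBOARD-ID"
)

# Assign tags (key-value pairs) to the dashboard. Re-using an existing key
# replaces its value; a new key is appended (up to 50 tags per resource).
qs.tag_resource(
    ResourceArn=dashboard_arn,
    Tags=[
        {"Key": "NewDashboard", "Value": "True"},
        {"Key": "CostCenter", "Value": "Marketing"},
    ],
)

# List the tags currently on the resource.
for tag in qs.list_tags_for_resource(ResourceArn=dashboard_arn)["Tags"]:
    print(tag["Key"], "=", tag["Value"])

# Remove tags by key.
qs.untag_resource(ResourceArn=dashboard_arn, TagKeys=["NewDashboard", "CostCenter"])
```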
amazon-quicksight-dg-036 | amazon-quicksight-dg.pdf | 36 | this operation, you need the ID of the template that is using the alias that you want to describe. You can use the ListTemplates operation to list all templates and their corresponding template IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight describe-template-alias --aws-account-id AWSACCOUNTID --template-id 222244446666 --alias-name ALIAS The parameter value for alias-name can be $LATEST. For more information about the DescribeTemplateAlias operation, see DescribeTemplateAlias in the QuickSight API Reference. ListTemplateAliases Use the ListTemplateAliases operation to list all the aliases of a template. To use this operation, you need the ID of the template that is using the aliases that you want to list. You can use the ListTemplates operation to list all templates and their corresponding template IDs. DescribeTemplateAlias 121 Amazon QuickSight Developer Guide Following is an example AWS CLI command for this operation. AWS CLI aws quicksight list-template-aliases --aws-account-id AWSACCOUNTID --template-id TEMPLATEID --page-size 10 --max-items 100 For more information about the ListTemplateAliases operation, see ListTemplateAliases in the QuickSight API Reference. UpdateTemplateAlias Use the UpdateTemplateAlias operation to update the template alias of a template. To use this operation, you need the ID of the template that is using the alias that you want to update. You can use the ListTemplates operation to list all templates and their corresponding template IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight update-template-alias --aws-account-id AWSACCOUNTID --template-id TEMPLATEID --alias-name ALIAS --template-version-number VERSION You can also make this command using a CLI skeleton file with the following command. For more information about CLI skeleton files, see Use CLI skeleton files. aws quicksight update-template-alias --cli-input-json file://updatetemplatealias.json The parameter value for alias-name can be $LATEST. For more information about the UpdateTemplateAlias operation, see UpdateTemplateAlias in the QuickSight API Reference. UpdateTemplateAlias 122 Amazon QuickSight Developer Guide Template operations A template is a resource in QuickSight that holds the information necessary to create an analysis or dashboard. You can use templates to migrate dashboards and analyses across accounts. With template API operations, you can perform actions on QuickSight templates. For more information, see the following API operations. Topics • Template permissions • CreateTemplate • DeleteTemplate • DescribeTemplate • ListTemplates • ListTemplateVersions • UpdateTemplate Template permissions With template permissions API operations, you can view and update permissions for templates. For more information, see the following API operations. • DescribeTemplatePermissions • UpdateTemplatePermissions DescribeTemplatePermissions Use the DescribeTemplatePermissions operation to describe read and write permissions for a template. To use this operation, you need the ID of the template that you want to describe the permissions of. You can use the ListTemplates operation to list all templates and their corresponding template IDs. Following is an example AWS CLI command for this operation. 
AWS CLI aws quicksight describe-template-permissions Template operations 123 Amazon QuickSight Developer Guide --aws-account-id AWSACCOUNTID --template-id 222244446666 For more information about the DescribeTemplatePermissions operation, see DescribeTemplatePermissions in the QuickSight API Reference. UpdateTemplatePermissions Use the UpdateTemplatePermissions operation updates the resource permissions for a template. You can grant or revoke permissions in the same command. To use this operation, you need the ID of the template that you want to update the permissions of. You can use the ListTemplates operation to list all templates and their corresponding template IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight update-template-permissions --aws-account-id AWSACCOUNTID --template-id TEMPLATEID --grant-permissions Principal=arn:aws:quicksight:us-east-1:AWSACCOUNTID:user/ default/USERNAME,Actions=DescribeTemplate --revoke-permissions Principal=arn:aws:quicksight:us-east-1:AWSACCOUNTID:user/ default/USERNAME,Actions=DescribeTemplate If your region has already been configured within the CLI, it doesn't need to be included as an argument. You can also make this command using a CLI skeleton file with the following command. For more information about CLI skeleton files, see Use CLI skeleton files. aws quicksight update-template-permissions --cli-input-json file://updatetemplatepermissions.json If your region has already been configured with the CLI, it does not need to be included in an argument. For more information on the UpdateTemplatePermissions operation, see UpdateTemplatePermissions in the QuickSight API Reference. Template permissions 124 Amazon QuickSight CreateTemplate Developer Guide Use the CreateTemplate operation to create a template from an existing QuickSight analysis or template. You can use the resulting template to create a dashboard. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight create-template --aws-account-id 555555555555 --template-id TEMPLATEID --source-entity SOURCEENTITY You can also make this command using a CLI skeleton file with the following command. For more information about CLI skeleton files, see Use CLI skeleton files. aws quicksight create-template --cli-input-json file://createtemplate.json You can to get the ID the dataset ID by using a DescribeAnalysisoperation. The ANALYSISIDis part of the analysis URL in QuickSight. You can also use the ListAnalyses operation to get the ID. For more information about the CreateTemplate operation, see CreateTemplate in the QuickSight API Reference. DeleteTemplate Use the DeleteTemplate operation to delete a template. To use this operation, you need the ID of the |
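The template alias operations in this row all follow the same pattern, so a short Python (boto3) sketch may help. It creates an alias that points at version 1 of a template, reads the template through that alias, repoints the alias at a newer version, and lists all aliases; the account ID, template ID, alias name, and version numbers are placeholders.

```python
# Minimal boto3 sketch of the template alias lifecycle.
# Account ID, template ID, alias name, and version numbers are placeholders.
import boto3

ACCOUNT_ID = "111122223333"
TEMPLATE_ID = "example-template-id"

qs = boto3.client("quicksight", region_name="us-east-1")

# Point a named alias at version 1 of the template.
qs.create_template_alias(
    AwsAccountId=ACCOUNT_ID,
    TemplateId=TEMPLATE_ID,
    AliasName="exampleAlias",
    TemplateVersionNumber=1,
)

# Resolve the template through the alias instead of a version number.
tmpl = qs.describe_template(
    AwsAccountId=ACCOUNT_ID,
    TemplateId=TEMPLATE_ID,
    AliasName="exampleAlias",
)
print(tmpl["Template"]["Version"]["VersionNumber"])

# Later, repoint the alias at a newer version of the template.
qs.update_template_alias(
    AwsAccountId=ACCOUNT_ID,
    TemplateId=TEMPLATE_ID,
    AliasName="exampleAlias",
    TemplateVersionNumber=2,
)

# List every alias defined on the template.
aliases = qs.list_template_aliases(AwsAccountId=ACCOUNT_ID, TemplateId=TEMPLATE_ID)
for alias in aliases["TemplateAliasList"]:
    print(alias["AliasName"], "->", alias["TemplateVersionNumber"])
```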
amazon-quicksight-dg-037 | amazon-quicksight-dg.pdf | 37 | --source-entity SOURCEENTITY You can also make this command using a CLI skeleton file with the following command. For more information about CLI skeleton files, see Use CLI skeleton files. aws quicksight create-template --cli-input-json file://createtemplate.json You can to get the ID the dataset ID by using a DescribeAnalysisoperation. The ANALYSISIDis part of the analysis URL in QuickSight. You can also use the ListAnalyses operation to get the ID. For more information about the CreateTemplate operation, see CreateTemplate in the QuickSight API Reference. DeleteTemplate Use the DeleteTemplate operation to delete a template. To use this operation, you need the ID of the template that you want to delete. You can use the ListTemplates operation to list all templates and their corresponding template IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight delete-template --aws-account-id AWSACCOUNTID --template-id TEMPLATEID CreateTemplate 125 Amazon QuickSight Developer Guide For more information about the DeleteTemplate operation, see DeleteTemplate in the QuickSight API Reference. DescribeTemplate Use the DescribeTemplate operation to describe a template's metadata. To use this operation, you need the ID of the template that you want to describe. You can use the ListTemplates operation to list all templates and their corresponding template IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight describe-template --aws-account-id AWSACCOUNTID --template-id TEMPLATEID --version-number VERSION --alias-name ALIAS The parameter value for alias-name can be $LATEST. For more information about the DescribeTemplate operation, see DescribeTemplate in the QuickSight API Reference. ListTemplates Use the ListTemplates operation to list all the templates in the current QuickSight account. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight list-templates --aws-account-id AWSACCOUNTID --page-size 10 --max-items 100 For more information about the ListTemplates operation, see ListTemplates in the QuickSight API Reference. DescribeTemplate 126 Amazon QuickSight ListTemplateVersions Developer Guide Use the ListTemplateVersions operation to list all the versions of the templates in the current QuickSight account. To use this operation to list the versions of a template, you need that template's ID. You can use the ListTemplates operation to list all templates and their corresponding template IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight list-template-versions --aws-account-id AWSACCOUNTID --template-id TEMPLATEID --page-size 10 --max-items 100 For more information about the ListTemplateVersions operation, see ListTemplateVersions in the QuickSight API Reference. UpdateTemplate Use the UpdateTemplate operation to update a template from an existing QuickSight analysis or another template. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight update-template --aws-account-id 555555555555 --template-id TEMPLATEID --source-entity SOURCEENTITY You can also make this command using a CLI skeleton file with the following command. For more information about CLI skeleton files, see Use CLI skeleton files. 
aws quicksight update-template --cli-input-json file://updatetemplate.json ListTemplateVersions 127 Amazon QuickSight Developer Guide For more information about the UpdateTemplate operation, see UpdateTemplate in the QuickSight API Reference. Theme operations A theme is a collection of settings that you can apply to analyses and dashboards in Amazon QuickSight. You can apply themes to modify the appearance of dashboards and analyses. With theme operations, you can perform actions on QuickSight themes. For more information, see the following API operations. Topics • Theme permissions • CreateTheme • DeleteTheme • DescribeTheme • ListThemes • ListThemeVersions • UpdateTheme Theme permissions With theme permissions API operations, you can view and update permissions for themes. For more information, see the following API operations. • DescribeThemePermissions • UpdateThemePermissions DescribeThemePermissions Use the DescribeThemePermissions operation to describe the read and write permissions for a theme. To use this operation, you need the ID of the theme that you want to describe. You can use the ListThemes operation to list all themes and their corresponding theme IDs. Following is an example AWS CLI command for this operation. Theme operations 128 Amazon QuickSight AWS CLI aws quicksight describe-theme-permissions --aws-account-id AWSACCOUNTID --theme-id THEMEID Developer Guide For more information about the UpdateThemePermissions operation, see UpdateThemePermissions in theQuickSight API Reference. UpdateThemePermissions Use the UpdateThemePermissions operation to update the resource permissions for a template. You can grant or revoke permissions in the same command. To use this operation, you need the ID of the theme that you want to update. You can use the ListThemes operation to list all themes and their corresponding theme IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight update-theme-permissions --aws-account-id 555555555555 --theme-id 111122223333 --grant-permissions Principal=arn:aws:quicksight:us-east-1:AWSACCOUNTID:user/ default/USERNAME,Actions=quicksight:ListThemeVersions, quicksight:UpdateThemeAlias, quicksight: DescribeThemeAlias, quicksight:UpdateThemePermissions, quicksight:DeleteThemeAlias, quicksight: DeleteTheme, quicksight:ListThemeAliases, quicksight:DescribeTheme, quicksight: CreateThemeAlias, quicksight:UpdateTheme, quicksight: DescribeThemePermissions --revoke-permissions Principal=arn:aws:quicksight:us-east-1:AWSACCOUNTID:user/ default/USERNAME,Actions=quicksight:ListThemeVersions, quicksight:UpdateThemeAlias, quicksight: DescribeThemeAlias, quicksight:UpdateThemePermissions, quicksight:DeleteThemeAlias, quicksight: DeleteTheme, quicksight:ListThemeAliases, quicksight:DescribeTheme, quicksight: CreateThemeAlias, quicksight:UpdateTheme, quicksight: DescribeThemePermissions If your region has already been configured within the CLI, it doesn't need to be included as an argument. You can also make this command using a CLI skeleton file with the following command. For more information |
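To make the CreateTemplate and UpdateTemplatePermissions examples in this row more concrete, here is a minimal Python (boto3) sketch that builds a template from an existing analysis and then grants a user read access. The analysis ARN, dataset ARN, user ARN, template ID, and the dataset placeholder name are all placeholders chosen for this example.

```python
# Minimal boto3 sketch: create a template from an analysis, then grant
# DescribeTemplate permission to a user. All ARNs and IDs are placeholders.
import boto3

ACCOUNT_ID = "111122223333"
qs = boto3.client("quicksight", region_name="us-east-1")

analysis_arn = f"arn:aws:quicksight:us-east-1:{ACCOUNT_ID}:analysis/EXAMPLE-ANALYSIS-ID"
dataset_arn = f"arn:aws:quicksight:us-east-1:{ACCOUNT_ID}:dataset/EXAMPLE-DATASET-ID"
user_arn = f"arn:aws:quicksight:us-east-1:{ACCOUNT_ID}:user/default/someuser"

# Create the template. The dataset placeholder labels the analysis's dataset
# so the template can later be bound to a different dataset with the same schema.
qs.create_template(
    AwsAccountId=ACCOUNT_ID,
    TemplateId="example-template-id",
    Name="Example template",
    SourceEntity={
        "SourceAnalysis": {
            "Arn": analysis_arn,
            "DataSetReferences": [
                {"DataSetPlaceholder": "main_dataset", "DataSetArn": dataset_arn}
            ],
        }
    },
)

# Grant the user permission to describe (read) the template.
qs.update_template_permissions(
    AwsAccountId=ACCOUNT_ID,
    TemplateId="example-template-id",
    GrantPermissions=[
        {"Principal": user_arn, "Actions": ["quicksight:DescribeTemplate"]}
    ],
)
```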
amazon-quicksight-dg-038 | amazon-quicksight-dg.pdf | 38 | operation to list all themes and their corresponding theme IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight update-theme-permissions --aws-account-id 555555555555 --theme-id 111122223333 --grant-permissions Principal=arn:aws:quicksight:us-east-1:AWSACCOUNTID:user/ default/USERNAME,Actions=quicksight:ListThemeVersions, quicksight:UpdateThemeAlias, quicksight: DescribeThemeAlias, quicksight:UpdateThemePermissions, quicksight:DeleteThemeAlias, quicksight: DeleteTheme, quicksight:ListThemeAliases, quicksight:DescribeTheme, quicksight: CreateThemeAlias, quicksight:UpdateTheme, quicksight: DescribeThemePermissions --revoke-permissions Principal=arn:aws:quicksight:us-east-1:AWSACCOUNTID:user/ default/USERNAME,Actions=quicksight:ListThemeVersions, quicksight:UpdateThemeAlias, quicksight: DescribeThemeAlias, quicksight:UpdateThemePermissions, quicksight:DeleteThemeAlias, quicksight: DeleteTheme, quicksight:ListThemeAliases, quicksight:DescribeTheme, quicksight: CreateThemeAlias, quicksight:UpdateTheme, quicksight: DescribeThemePermissions If your region has already been configured within the CLI, it doesn't need to be included as an argument. You can also make this command using a CLI skeleton file with the following command. For more information about CLI skeleton files, see Use CLI skeleton files. Theme permissions 129 Amazon QuickSight Developer Guide aws quicksight update-theme-permissions --cli-input-json file//:updatethemepermissions.json If your region has already been configured with the CLI, it does not need to be included in an argument. For more information on the UpdateThemePermissions operation, see UpdateThemePermissions in the QuickSight API Reference. CreateTheme Use the CreateTheme operation to create a theme. The base-theme-id is the ID of the theme that you want to base the new theme off of. You can use the ListThemes operation to list all themes and their corresponding theme IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight create-theme --aws-account-id AWSACCOUNTID --theme-id THEMEID --name NAME --base-theme-id THEMEID --configuration '{"Configuration":{"DataColorPalette":{"Colors": [""],"MinMaxGradient":[""],"EmptyFillColor":""},"UIColorPalette": {"PrimaryForeground":"","PrimaryBackground": "","SecondaryForeground":"","SecondaryBackground":"","Accent":"","AccentForeground":"","Danger":"","DangerForeground":"","Warning":"","WarningForeground":"","Success":"","SuccessForeground":"","Dimension":"","DimensionForeground":"","Measure":"","MeasureForeground":""},"Sheet": {"Tile":{"Border":{"Show":true}},"TileLayout":{"Gutter":{"Show":true},"Margin": {"Show":true}}}}' You can also make this command using a CLI skeleton file with the following command. For more information about CLI skeleton files, see Use CLI skeleton files. aws quicksight create-theme --cli-input-json file//:createtheme.json For more information about the CreateTheme operation, see CreateTheme in theQuickSight API Reference. CreateTheme 130 Amazon QuickSight DeleteTheme Developer Guide Use the DeleteTheme operation to delete a theme. To use this operation, you need the ID of the theme that you want to delete. You can use the ListThemes operation to list all themes and their corresponding theme IDs. Following is an example AWS CLI command for this operation. 
AWS CLI aws quicksight delete-theme --aws-account-id AWSACCOUNTID --theme-id THEMEID For more information about the DeleteTheme operation, see DeleteTheme in the QuickSight API Reference. DescribeTheme Use the DescribeTheme operation to describe a theme. To use this operation, you need the ID of the theme that you want to describe. You can use the ListThemes operation to list all themes and their corresponding theme IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight describe-theme --aws-account-id AWSACCOUNTID --theme-id THEMEID --version-number 1 --alias-name ALIAS The parameter value for alias-name can be $LATEST. For more information about the DescribeTheme operation, see DescribeTheme in the QuickSight API Reference. DeleteTheme 131 Amazon QuickSight ListThemes Developer Guide Use the ListThemes operation to list all the themes in the current AWS account. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight list-themes --aws-account-id AWSACCOUNTID --type QUICKSIGHT --page-size 10 --max-items 100 For more information about the ListThemes operation, see ListThemes in the QuickSight API Reference. ListThemeVersions Use the ListThemeVersions operation to list all the versions of the themes in the current AWS account. To use this operation to list the versions of a theme, you need that theme's ID. You can use the ListThemes operation to list all themes and their corresponding theme IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight list-theme-version --aws-account-id AWSACCOUNTID --theme-id THEMEID --page-size 10 --max-items 100 To list all themes and their theme IDs, call the ListThemes operation. For more information about the ListThemeVersions operation, see ListThemeVersions in the QuickSight API Reference. ListThemes 132 Developer Guide Amazon QuickSight UpdateTheme Use the UpdateTheme operation to update a theme. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight update-theme --aws-account-id 555555555555 --theme-id THEMEID --base-theme-id BASETHEMEID You can also make this command using a CLI skeleton file with the following command. For more information about CLI skeleton files, see Use CLI skeleton files. aws quicksight update-theme --cli-input-json file//:updatetheme.json For more information about the UpdateTheme operation, see UpdateTheme in theQuickSight API Reference. Theme alias operations A theme alias is a reference to a version of a theme. For example, suppose that you create the theme alias exampleAlias for version 1 of the theme exampleTheme. You can use the theme alias exampleAlias to reference version 1 of theme exampleTheme in a DescribeTheme API operation, as in the following example. Example aws quicksight describe-theme --aws-account-id AWSACCOUNTID --theme-id exampleThemeID --alias-name exampleAlias With theme alias operations, you can perform actions on QuickSight theme aliases. For more information, see the following API operations. UpdateTheme 133 Developer Guide Amazon QuickSight Topics • CreateThemeAlias • DeleteThemeAlias • DescribeThemeAlias • ListThemeAliases • UpdateThemeAlias CreateThemeAlias The CreateThemeAlias operation creates a theme alias for a theme. To use this operation, you need the ID of the theme that you |
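The CreateTheme CLI example above is hard to read once its configuration JSON is flattened, so here is a hedged Python (boto3) sketch of the same idea: create a theme from a base theme while overriding only a few colors, read it back, and delete it. The account ID, theme ID, colors, and the base theme ID "MIDNIGHT" are placeholders and assumptions; settings that are not overridden are assumed to come from the base theme.

```python
# Minimal boto3 sketch of CreateTheme / DescribeTheme / DeleteTheme.
# IDs and colors are placeholders; "MIDNIGHT" is assumed to be a valid
# starting (base) theme ID in this account.
import boto3

ACCOUNT_ID = "111122223333"
THEME_ID = "example-theme-id"

qs = boto3.client("quicksight", region_name="us-east-1")

# Create a theme based on an existing theme, overriding a few UI colors.
qs.create_theme(
    AwsAccountId=ACCOUNT_ID,
    ThemeId=THEME_ID,
    Name="Example corporate theme",
    BaseThemeId="MIDNIGHT",
    Configuration={
        "UIColorPalette": {
            "PrimaryBackground": "#0F1A2B",
            "PrimaryForeground": "#FFFFFF",
            "Accent": "#FF9900",
        },
        "Sheet": {"Tile": {"Border": {"Show": True}}},
    },
)

# Read the theme back (version 1 is the first version of a new theme).
theme = qs.describe_theme(AwsAccountId=ACCOUNT_ID, ThemeId=THEME_ID, VersionNumber=1)
print(theme["Theme"]["Name"])

# Clean up.
qs.delete_theme(AwsAccountId=ACCOUNT_ID, ThemeId=THEME_ID)
```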
amazon-quicksight-dg-039 | amazon-quicksight-dg.pdf | 39 | 1 of the theme exampleTheme. You can use the theme alias exampleAlias to reference version 1 of theme exampleTheme in a DescribeTheme API operation, as in the following example. Example aws quicksight describe-theme --aws-account-id AWSACCOUNTID --theme-id exampleThemeID --alias-name exampleAlias With theme alias operations, you can perform actions on QuickSight theme aliases. For more information, see the following API operations. UpdateTheme 133 Developer Guide Amazon QuickSight Topics • CreateThemeAlias • DeleteThemeAlias • DescribeThemeAlias • ListThemeAliases • UpdateThemeAlias CreateThemeAlias The CreateThemeAlias operation creates a theme alias for a theme. To use this operation, you need the ID of the theme that you want to create an alias for. You can use the ListThemes operation to list all themes and their corresponding theme IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight --aws-account-id AWSACCOUNTID --theme-id THEMEID --alias-name ALIAS --theme-version-number VERSION You can also make this command using a CLI skeleton file with the following command. For more information about CLI skeleton files, see Use CLI skeleton files. aws quicksight create-theme-alias --cli-input-json file://create-theme-alias.json For more information about the CreateThemeAlias operation, see CreateThemeAlias in the QuickSight API Reference. DeleteThemeAlias Use the DeleteThemeAlias operation to delete the version of the theme that the specified theme alias points to. If you provide a specific alias, you delete the version of the theme that the alias points to. To use this operation, you need the ID of the theme that is using the alias that you CreateThemeAlias 134 Amazon QuickSight Developer Guide want to delete. You can use the ListThemes operation to list all themes and their corresponding theme IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight delete-theme-alias --aws-account-id AWSACCOUNTID --theme-id THEMEID --alias-name ALIAS For more information about the DeleteThemeAlias operation, see DeleteThemeAlias in the QuickSight API Reference. DescribeThemeAlias Use the DescribeThemeAlias operation to describe the alias for a theme. To use this operation, you need the ID of the theme that is using the alias that you want to describe. You can use the ListThemes operation to list all themes and their corresponding theme IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight describe-theme-alias --aws-account-id AWSACCOUNTID --theme-id THEMEID --alias-name ALIAS For more information about the DescribeThemeAlias operation, see DescribeThemeAlias in the QuickSight API Reference. ListThemeAliases Use the ListThemeAliases operation to list all the aliases of a theme. To use this operation, you need the ID of the theme that is using the aliases that you want described. You can use the ListThemes operation to list all themes and their corresponding theme IDs. Following is an example AWS CLI command for this operation. DescribeThemeAlias 135 Amazon QuickSight AWS CLI aws quicksight list-theme-aliases --aws-account-id AWSACCOUNTID --theme-id THEMEID --max-results 100 Developer Guide For more information about the ListThemeAliases operation, see ListThemeAliases in the QuickSight API Reference. UpdateThemeAlias Use the UpdateThemeAlias operation to update an alias of a theme. 
To use this operation, you need the ID of the theme that is using the alias that you want to update. You can use the ListThemes operation to list all themes and their corresponding theme IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight update-theme-alias --aws-account-id AWSACCOUNTID --theme-id THEMEID --alias-name ALIAS --theme-version-number VERSION You can also make this command using a CLI skeleton file with the following command. For more information about CLI skeleton files, see Use CLI skeleton files. aws quicksight update-theme-alias --cli-input-json file://updatethemealias.json For more information about the UpdateThemeAlias operation, see UpdateThemeAlias in the QuickSight API Reference. User operations With user API operations, you can perform actions on Amazon QuickSight account users. For more information, see the following API operations. UpdateThemeAlias 136 Developer Guide Amazon QuickSight Topics • DeleteUser • DeleteUserByPrincipalTitle • DescribeUser • ListUserGroups • ListUsers • RegisterUser • UpdateUser DeleteUser Use the DeleteUser operation to delete the QuickSight user that is associated with the identity of the IAM user or role that's making the call. The IAM user isn't deleted as a result of this call. To use this operation, you need the ID of the user that you want to delete. You can also use the ListUsers operation to list all users and their corresponding user IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight delete-user --user-name USERNAME --aws-account-id AWSACCOUNTID --namespace default For more information about the DeleteUser operation, see DeleteUser in the QuickSight API Reference. DeleteUserByPrincipalTitle The DeleteUserByPrincipalTitle operation deletes a user identified by a principal ID. Following is an example AWS CLI command for this operation. To use this operation, you need the ID of the user that you want to delete. You can also use the ListUsers operation to list all users and their corresponding user IDs. Following is an example AWS CLI |
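Theme aliases work like template aliases, so a short Python (boto3) sketch of the operations in this row may be useful: it creates an alias on version 1 of a theme, resolves the theme through it, lists the aliases, and then repoints the alias. The account ID, theme ID, alias name, and version numbers are placeholders.

```python
# Minimal boto3 sketch of the theme alias operations described above.
# Account ID, theme ID, alias name, and versions are placeholders.
import boto3

ACCOUNT_ID = "111122223333"
THEME_ID = "example-theme-id"

qs = boto3.client("quicksight", region_name="us-east-1")

# Create an alias that points at version 1 of the theme.
qs.create_theme_alias(
    AwsAccountId=ACCOUNT_ID,
    ThemeId=THEME_ID,
    AliasName="exampleAlias",
    ThemeVersionNumber=1,
)

# Resolve the theme version through the alias.
alias_info = qs.describe_theme_alias(
    AwsAccountId=ACCOUNT_ID, ThemeId=THEME_ID, AliasName="exampleAlias"
)
print(alias_info["ThemeAlias"]["ThemeVersionNumber"])

# List all aliases on the theme.
aliases = qs.list_theme_aliases(AwsAccountId=ACCOUNT_ID, ThemeId=THEME_ID)
for alias in aliases["ThemeAliasList"]:
    print(alias["AliasName"])

# Repoint the alias at a newer theme version.
qs.update_theme_alias(
    AwsAccountId=ACCOUNT_ID,
    ThemeId=THEME_ID,
    AliasName="exampleAlias",
    ThemeVersionNumber=2,
)
```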
amazon-quicksight-dg-040 | amazon-quicksight-dg.pdf | 40 | and their corresponding user IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight delete-user --user-name USERNAME --aws-account-id AWSACCOUNTID --namespace default For more information about the DeleteUser operation, see DeleteUser in the QuickSight API Reference. DeleteUserByPrincipalTitle The DeleteUserByPrincipalTitle operation deletes a user identified by a principal ID. Following is an example AWS CLI command for this operation. To use this operation, you need the ID of the user that you want to delete. You can also use the ListUsers operation to list all users and their corresponding user IDs. Following is an example AWS CLI command for this operation. DeleteUser 137 Amazon QuickSight AWS CLI Developer Guide aws quicksight delete-user-by-principal-id --principal-id PRINCIPALID --aws-account-id AWSACCOUNTID --namespace default For more information about the DeleteUserByPrincipalTitle operation, see DeleteUserByPrincipalTitle in the QuickSight API Reference. DescribeUser Use the DescribeUser operation to return information about a user, given the user name. Following is an example AWS CLI command for this operation. To use this operation, you need the ID of the user that you want to describe. You can also use the ListUsers operation to list all users and their corresponding user IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight describe-user --aws-account-id AWSACCOUNTID --namespace default For more information about the DescribeUser operation, see DescribeUser in the QuickSight API Reference. ListUserGroups Use the ListUserGroups operation to list the QuickSight groups that an QuickSight user is a member of. To use this operation, you need the ID of the user whose group memberships you want to know. You can use the ListUsers operation to list all users and their corresponding user IDs. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight list-user-groups DescribeUser 138 Amazon QuickSight Developer Guide --user-name USERNAME --aws-account-id AWSACCOUNTID --namespace default --max-results 100 For more information about ListUserGroups operation, see ListUserGroupsin the QuickSight API Reference. ListUsers Use the ListUsers operation to return a list of all of the QuickSight users belonging to this account. Following is an example AWS CLI command for this operation. AWS CLI aws quicksight list-users --aws-account-id AWSACCOUNTID --max-results 100 --namespace default For more information about ListUsers operation, see ListUsers in the QuickSight API Reference. RegisterUser Use the RegisterUser operation to create an QuickSight user whose identity is associated with the IAM identity or role specified in the request. When you register a new user from the Amazon QuickSight API, Amazon QuickSight generates a registration URL. The user accesses this registration URL to create their account. Amazon QuickSight does not send a registration email to users who are registered from the Amazon QuickSight API. Following is an example AWS CLI command for this operation. 
AWS CLI
aws quicksight register-user \
  --identity-type QUICKSIGHT \
  --email EMAIL \
  --user-role AUTHOR \
  --iam-arn 222233334444 \
  --session-name SESSION \
  --aws-account-id AWSACCOUNTID \
  --namespace default \
  --user-name USERNAME \
  --external-login-federation-provider-type CUSTOM_OIDC \
  --custom-federation-provider-url www.example.com/ \
  --external-login-id EXTERNALLOGINID
You can also make this command using a CLI skeleton file with the following command. For more information about CLI skeleton files, see Use CLI skeleton files.
aws quicksight register-user --cli-input-json file://registeruser.json
After using this operation, you get a response that includes a link labeled Invitation URL. Click the Invitation URL to set up a password and activate the new account. The new user then appears in the QuickSight UI. You can use the ListUsers operation to list all users and their corresponding user IDs. For more information about the RegisterUser operation, see RegisterUser in the QuickSight API Reference.
UpdateUser
Use the UpdateUser operation to update a QuickSight user. To use this operation, you need the ID of the user that you want to update. You can use the ListUsers operation to list all users and their corresponding user IDs. Following is an example AWS CLI command for this operation.
AWS CLI
aws quicksight update-user \
  --aws-account-id 555555555555 \
  --user-name USERNAME \
  --namespace NAMESPACE \
  --email johndoe@example.com \
  --role ROLE
You can also make this command using a CLI skeleton file with the following command. For more information about CLI skeleton files, see Use CLI skeleton files.
aws quicksight update-user --cli-input-json file://updateuser.json
For more information about the UpdateUser operation, see UpdateUser in the QuickSight API Reference.
(A Python boto3 sketch of the same RegisterUser, ListUsers, and UpdateUser calls appears after this row.)
Document history for the Amazon QuickSight Developer Guide
The following table describes important changes in each QuickSight Developer Guide release.
Change: Initial release. Description: Initial release of the Amazon QuickSight Developer Guide. Date: January 10, 2022. |
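The user operations in this row also translate directly to boto3. The sketch below registers a QuickSight-managed user, prints the invitation URL that the text mentions, lists the users in the default namespace, and updates the new user's role. The e-mail address, user name, and account ID are placeholders, and the UserInvitationUrl response field is assumed to be present for QuickSight-managed identities in recent SDK versions.

```python
# Minimal boto3 sketch of RegisterUser / ListUsers / UpdateUser for
# QuickSight-managed (non-IAM) users. All identifiers are placeholders.
import boto3

ACCOUNT_ID = "111122223333"
qs = boto3.client("quicksight", region_name="us-east-1")

# Register a QuickSight-managed author. No e-mail is sent; the returned
# invitation URL is what the new user opens to set a password.
resp = qs.register_user(
    AwsAccountId=ACCOUNT_ID,
    Namespace="default",
    IdentityType="QUICKSIGHT",
    Email="newauthor@example.com",
    UserRole="AUTHOR",
    UserName="newauthor",
)
print("invitation URL:", resp.get("UserInvitationUrl"))

# List all users in the default namespace, following pagination manually.
token = None
while True:
    kwargs = {"AwsAccountId": ACCOUNT_ID, "Namespace": "default", "MaxResults": 100}
    if token:
        kwargs["NextToken"] = token
    page = qs.list_users(**kwargs)
    for user in page.get("UserList", []):
        print(user["UserName"], user["Role"])
    token = page.get("NextToken")
    if not token:
        break

# Change the user's role; UpdateUser requires both Email and Role.
qs.update_user(
    AwsAccountId=ACCOUNT_ID,
    Namespace="default",
    UserName="newauthor",
    Email="newauthor@example.com",
    Role="ADMIN",
)
```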
amazon-quicksight-user-001 | amazon-quicksight-user.pdf | 1 | User Guide Amazon QuickSight Copyright © 2025 Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon QuickSight User Guide Amazon QuickSight: User Guide Copyright © 2025 Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon. Amazon QuickSight Table of Contents User Guide What is Amazon QuickSight? .......................................................................................................... 1 Why QuickSight? ........................................................................................................................................... 1 Starting work with QuickSight .................................................................................................................. 3 How it works .................................................................................................................................... 4 Sample data ................................................................................................................................................... 5 Terminology ................................................................................................................................................... 5 Data preparation ..................................................................................................................................... 5 SPICE .......................................................................................................................................................... 5 Data analysis ............................................................................................................................................ 6 Data visualization .................................................................................................................................... 6 Machine learning ..................................................................................................................................... 6 Sheet .......................................................................................................................................................... 6 Dashboard ................................................................................................................................................. 6 Setting up ........................................................................................................................................ 7 Complete initial configuration tasks ........................................................................................................ 7 Sign up for an AWS account ................................................................................................................ 7 Create a user with administrative access ........................................................................................... 8 Integrating with IAM Identity Center ....................................................................................................... 
    Signing up for a subscription
Getting started
    Signing in to QuickSight
    How to sign in to QuickSight
    Quick start: Create an analysis using sample data
    Create a dashboard using sample data
    Tutorial: Create a prepared dataset
    Tutorial: Create an analysis
    Tutorial: Modify visuals
    Tutorial: Create a dashboard
    Using the console
    Using the Amazon QuickSight menu and landing page
    Creating an analysis
    Searching Amazon QuickSight
    Choosing a language in Amazon QuickSight
    Using the Amazon QuickSight mobile app
Connecting to data
    Supported data sources
    Connecting to relational data
    Importing file data
    Software as a service (SaaS) data
    Data source quotas
    SPICE quotas for imported data
    Quotas for direct SQL queries
    Supported data types and values
    String and text data
    Date and time data
    Numeric data
    Supported data types from external data sources
    Connection examples
    Amazon Athena
    Amazon OpenSearch Service
    Amazon S3 files
    Apache Spark
    Databricks
    Google BigQuery
    Microsoft Excel files
    Presto
    Snowflake
    Starburst
    SaaS sources
    Salesforce
    Trino
    Text files
    Timestream data
    Creating datasets
    From new data sources
    From existing data sources
    From existing datasets
    Editing datasets
    Things to consider when editing datasets
    Editing a dataset from the Datasets page
    Editing a dataset in an analysis
    Reverting datasets
    Troubleshooting
    Duplicating datasets
    Sharing datasets
    Sharing a dataset
    Viewing and editing the permissions of users that a dataset is shared with
    Revoking access to a dataset
    Tracking dataset assets
    Dataset parameters
    Dataset parameter limitations
    Creating dataset parameters
    Inserting dataset parameters into custom SQL
    Adding dataset parameters to calculated fields
    Adding dataset parameters to filters
    Using dataset parameters in QuickSight analyses
    Advanced use
    Using row-level security
    Using user-based rules
    Using tag-based rules
    Using column-level security
    Running queries as an IAM role
    Athena data sources
    Amazon Redshift data sources
    Amazon S3 data sources
    Deleting datasets
    Adding a dataset to an analysis
    Replacing datasets
    Remove a dataset from an analysis
    Working with data sources
    Creating a data source
    Editing a data source
    Deleting a data source
Refreshing data
    Importing data into SPICE
    Estimating the size of SPICE datasets
    Refreshing SPICE data
    Refreshing a dataset
    Incrementally refreshing a dataset
    Refreshing a dataset during data preparation
    Refreshing a dataset on a schedule
    Incrementally refreshing a dataset on a schedule
    Using SPICE data in an analysis
    View SPICE ingestion history
    Troubleshooting skipped row errors
    SPICE ingestion error codes
    Row import errors
    Data import errors
    Updating files in a dataset
Preparing data
    Describing data
    Choosing file upload settings
    Changing text file upload settings
    Changing Microsoft Excel file upload settings
    Preparing data fields
    Editing field names and descriptions
    Setting fields as dimensions or measures
    Changing a field data type
    Adding drill-downs
    Selecting fields
    Organizing fields into folders
    Mapping and joining fields
    Adding calculations
    Adding calculated fields
    Order of evaluation
    Level-aware calculations
    Functions and operators
    Previewing tables in a dataset
    Joining data
    Types of joined datasets
    Facts about joining datasets
    Creating a join
    Join types
    Filtering data
    Viewing existing filters
    Adding filters
    Cross-sheet filters and controls
    Filter types
    Adding filter controls
    Editing filters
    Enabling or disabling filters
    Deleting filters
    Using SQL to customize data
    Creating a basic SQL query
    Adding geospatial data
    Changing a geospatial grouping
    Geospatial troubleshooting
    Using unsupported or custom dates
    Adding a unique key to a QuickSight dataset
    Integrating SageMaker AI models
    How SageMaker AI integration works
    Costs incurred (no additional costs with integration itself)
    Usage guidelines
    Defining the schema file
    Adding a SageMaker AI model to your QuickSight dataset
    SageMaker AI Canvas
    Preparing dataset examples
    Preparing a dataset based on file data
    Preparing a dataset based on Salesforce data
    Preparing a dataset based on database data
Visualizing data
    Working with an analysis
    Starting an analysis
    Adding titles and descriptions to an analysis
    Renaming an analysis
    Duplicating analyses
    Viewing analysis details
    Date and time settings
    The analysis menu
    Configure analysis settings
    Item limits for QuickSight analyses
    Saving changes to analyses
    Exporting data from analyses
    Deleting an analysis
    Adding and managing sheets
    Working with interactive sheets in Amazon QuickSight
    Customizing dashboard layouts
    Parameters
    Custom actions
    Working with paginated reports in Amazon QuickSight
    Getting started
    Creating reports from an analysis in Amazon QuickSight
    Formatting reports in Amazon QuickSight
    Consuming paginated reports in Amazon QuickSight
    Unsubscribe from paginated reporting
    Working with items on sheets
    Adding visuals
    Using Q Topics
    Visual types
    Formatting
    Customizing data presentation
    Using themes in Amazon QuickSight
    Keyboard shortcuts
    Using shortcuts within a visual
Gaining insights with ML
    Understanding the ML algorithm
    What's the difference between anomaly detection and forecasting?
    What is RCF?
    How RCF is applied to detect anomalies
    How RCF is applied to generate forecasts
    References for machine learning and RCF
    Dataset requirements
    Adding insights
    Adding suggested insights
    Adding custom insights to your analysis
    Autonarratives
    Insights that include autonarratives
    Use the narrative expression editor
    The expression editor workspace
    Adding URLs
    Computations
    Detecting outliers
    Concepts for anomaly or outlier detection
    Setting up ML-powered anomaly detection for outlier analysis
    Exploring outliers and key drivers
    ML-powered forecasts and what-ifs
    Using forecasts and what-if scenarios
Answering questions with QuickSight Q
    New ways to get value from NLQ
    Guided setup
    Add to analysis
    Getting started
    Step 1: Get the Q add-on
    Step 2: Create a sample Q topic
    Step 3: Explore the sample topic
    Step 4: Practice asking questions with the Q bar
    Try Q Embedding
    Working with Q topics
    Navigating Q Topics
    Creating topics
    Topic workspace
    Working with datasets in a topic
    Making topics natural-language-friendly
    Sharing topics
    Manage topic permissions
    Reviewing topic performance and feedback
    Refreshing topic indexes
    Using the Amazon QuickSight APIs
    Asking questions
    Supported question types
    Pinning visuals
    Pin visuals
    Using your pinboard
    Providing feedback
    Correcting answers
    Correcting wrong answers
    What to do when Q can't provide an answer
    Saving corrections to a Q answer
    Verifying answers
    Verifying answers to questions
    Reviewing verified answers
    Managing Q regions
    Unsubscribe from Q
Generative BI with Amazon Q in QuickSight
    Get started
    Augmenting Amazon QuickSight insights with Amazon Q Business
    Considerations
    Create a new Amazon Q Business application in QuickSight
    Connect QuickSight to an existing Amazon Q Business application
    Disconnect an Amazon Q Business application from QuickSight
    Authoring experience
    Build visuals
    Build calculations
    Refine visuals
    Executive summaries
    Authoring Q&A
    Converting to the Generative Q&A experience
    Named entities
    Measure aggregations
    Turn on the Dashboard Q&A experience in Amazon QuickSight
    Asking and answering questions of data with Amazon Q in QuickSight
    Opting out of Amazon Q in QuickSight
    Working with data stories in Amazon QuickSight
    Creating a data story
    Personalize data stories in Amazon QuickSight
    Viewing a generated data story in Amazon QuickSight
    Editing a generated data story in Amazon QuickSight
    Themes and animations
    Sharing a data story in Amazon QuickSight
    Working with scenarios in Amazon QuickSight
    Considerations
    Creating an Amazon QuickSight scenario
    Working with threads in an Amazon QuickSight scenario
    Working with data in an Amazon QuickSight scenario
Sharing data
    Sharing Amazon QuickSight analyses
    Sharing an analysis
    Viewing the users that an analysis is shared with
    Revoking access to an analysis
    Publishing dashboards
    Copying a dashboard
    Deleting dashboards
    Publishing previous dashboard versions
    Sharing dashboards
    Granting access to a dashboard
    Sharing a link to a shared dashboard
    View who has access
    Revoke access
    Share your view of a dashboard
    Sending reports
    Configuring email reports
    Report billing
    Subscribing to reports
    Threshold alerts
    Alert Permissions
    Creating Alerts
    Managing Threshold Alerts
    Investigating Alert Failures
    Alert Scheduling
    Print a dashboard or analysis
    Exporting as PDFs
    PDF Error codes
    Organizing assets into folders
    Considerations
    Overview of QuickSight folders
    Permissions
    Create a shared folder
    Creating scaled folders with the QuickSight APIs
Exploring dashboards
    Interacting with dashboards
    Using filters
    Filtering dashboard data
    Using dashboard elements
    Sorting data
    Exporting and printing dashboard reports
    Generate an executive summary
    Interacting with paginated reports
    Exporting and printing
    Subscribe to emails and alerts
    Getting email reports
    Sign up for anomaly alerts
    Reader generated reports
    Creating a reader generated report
    Loading a saved view of a reader generated report
    Updating the view of a scheduled reader generated report
    Updating a reader generated report schedule
    Bookmarks
    Creating bookmarks
    Updating bookmarks
    Renaming bookmarks
    Making a bookmark the default view
    Sharing bookmarks
    Deleting bookmarks
Monitoring data
    Accessing metrics in CloudWatch
    Graph metrics with the CloudWatch console
    Creating alarms with the CloudWatch console
    Metrics
    Per-dashboard metrics
    Per-dataset ingestion metrics
    Per-visual metrics
    Aggregate metrics
    Aggregate dashboard metrics
    Aggregate ingestion metrics
    Aggregate visual metrics
    Aggregate SPICE metrics
    Dimensions
Developing with Amazon QuickSight
    Required knowledge
    Available API operations for Amazon QuickSight
    Terminology and concepts
    QuickSight Dev portal
    Developing with the QuickSight APIs
    Events integration
    Supported events
    Example event payload
    Creating rules to send events to Amazon CloudWatch
    Creating rules to send events to AWS Lambda
    Embedded analytics
    Embedding analytics into your applications
    Embedding custom assets
    1-click embedding
    Embedding with the QuickSight APIs
Troubleshooting
    Resolving Amazon QuickSight issues and error messages
    Athena issues
    Athena column not found
    Athena invalid data
    Athena query timeout
    Athena staging bucket missing
    AWS Glue table incompatible with Athena
    Athena Table not found
    Workgroup and output errors when using Athena with Amazon QuickSight
    Data source connectivity issues
    I can't connect although my data source connection options look right (SSL)
    I can't connect to Amazon Athena
    I can't connect to Amazon S3
    I can't create or refresh a dataset from an existing Adobe Analytics data source
    I need to validate the connection to my data source, or change data source settings
    I can't connect to MySQL (issues with SSL and authorization)
    I can't connect to RDS
    Login issues
    Insufficient permissions with Athena
    Amazon QuickSight isn't working in my browser
    How do I delete my Amazon QuickSight account?
    Individuals in my organization get "External Login is Unauthorized"
    My email sign-in stopped working
    Visual issues
    I can't see my visuals
    I get a feedback bar across my printed documents
    My map charts don't show locations
    My pivot table stops working
    My visual can't find missing columns
    My visual can't find the query table
    My visual doesn't update after I change a calculated field
    Values with scientific notation don't format correctly
Amazon QuickSight Administration
    Different editions of Amazon QuickSight
    Availability of editions
    User management between editions
    Permissions for the different editions
    Regions and IP ranges
    Supported AWS Regions for Amazon QuickSight
    Supported AWS Regions for Amazon Q in QuickSight
    Supported AWS Regions for Amazon QuickSight Q
    Cross-Region inference with Amazon Q in QuickSight
    Supported browsers
    Managing QuickSight
    Managing assets
    Managing your subscriptions
    Upgrading your subscription
    SPICE capacity
    Manage account settings
    Domains and Embedding
    Multitenancy and namespaces
    To migrate existing users in one namespace to a different namespace
    Account customizations
    Welcome content
    Report and alert emails
    Default analysis theme (CLI)
    Amazon QuickSight brand customization
    Permissions
    Create a brand
    Tracking cost and usage data
AWS security
    Data protection
    Data encryption
    Encrypting SPICE datasets with AWS KMS customer-managed keys
    Inter-network traffic privacy
    Accessing data sources
    Identity and access management
amazon-quicksight-user-005 | amazon-quicksight-user.pdf | 5 | and namespaces ........................................................................................................... 1735 To migrate existing users in one namespace to a different namespace ................................ 1738 Account customizations ....................................................................................................................... 1740 Welcome content ............................................................................................................................. 1741 Report and alert emails .................................................................................................................. 1742 Default analysis theme (CLI) .......................................................................................................... 1749 Amazon QuickSight brand customization ........................................................................................ 1751 Permissions ........................................................................................................................................ 1752 Create a brand .................................................................................................................................. 1753 Tracking cost and usage data ............................................................................................................. 1755 AWS security .............................................................................................................................. 1757 Data protection ...................................................................................................................................... 1757 Data encryption ................................................................................................................................ 1759 Encrypting SPICE datasets with AWS KMS customer-managed keys .................................... 1760 Inter-network traffic privacy .......................................................................................................... 1769 Accessing data sources .................................................................................................................... 1770 Identity and access management ...................................................................................................... 1811 Service control policies (SCP) ........................................................................................................ 1812 IAM ...................................................................................................................................................... 1815 Identity management ...................................................................................................................... 1851 Managing user access ...................................................................................................................... 1883 Turning on IP and VPC endpoint restrictions ............................................................................. 1899 Customizing access to QuickSight capabilities .......................................................................... 1901 Incident response, logging, and monitoring .................................................................................... 1908 Logging QuickSight information with AWS CloudTrail ............................................................ 
What is Amazon QuickSight?

Amazon QuickSight is a cloud-scale business intelligence (BI) service that you can use to deliver easy-to-understand insights to the people who you work with, wherever they are. Amazon QuickSight connects to your data in the cloud and combines data from many different sources. In a single data dashboard, QuickSight can include AWS data, third-party data, big data, spreadsheet data, SaaS data, B2B data, and more. As a fully managed cloud-based service, Amazon QuickSight provides enterprise-grade security, global availability, and built-in redundancy. It also provides the user-management tools that you need to scale from 10 users to 10,000, all with no infrastructure to deploy or manage.
QuickSight gives decision-makers the opportunity to explore and interpret information in an interactive visual environment. They have secure access to dashboards from any device on your network and from mobile devices. To learn more about the major components and processes of Amazon QuickSight and the typical workflow for creating data visualizations, see the following sections. Get started today to unlock the potential of your data and make the best decisions that you can. Topics • Why QuickSight? • Starting work with QuickSight Why QuickSight? Every day, the people in your organization make decisions that affect your business. When they have the right information at the right time, they can make the choices that move your company in the right direction. Here are some of the benefits of using Amazon QuickSight for analytics, data visualization, and reporting: • The in-memory engine, called SPICE, responds with blazing speed. • No upfront costs for licenses and a low total cost of ownership (TCO). • Collaborative analytics with no need to install an application. • Combine a variety of data into one analysis. Why QuickSight? 1 Amazon QuickSight User Guide • Publish and share your analysis as a dashboard. • Control features available in a dashboard. • No need to manage granular database permissions—dashboard viewers can see only what you share. For advanced users, QuickSight Enterprise edition offers even more features: • Saves you time and money with automated and customizable data insights, powered by machine learning (ML). This enables your organization to do the following, without requiring any knowledge of machine learning: • Automatically make reliable forecasts. • Automatically identify outliers. • Find hidden trends. • Act on key business drivers. • Translate data into easy-to-read narratives, like headline tiles for your dashboard. • Provides extra Enterprise security features, including the following: • Federated users, groups, and single sign-on (IAM Identity Center) with AWS Identity and Access Management (IAM) Federation, SAML, OpenID Connect, or AWS Directory Service for Microsoft Active Directory. • Granular permissions for AWS data access. • Row level security. • Highly secure data encryption at rest. • Access to AWS data and on-premises data in Amazon Virtual Private Cloud • Offers pay-per-session pricing for the users that you place in the "reader" security role—readers are dashboard subscribers, people who view reports but don't create them. • Empowers you to make QuickSight part of your own websites and applications by deploying embedded console analytics and dashboard sessions. • Makes our business your business with multitenancy features for value-added resellers (VARs) of analytical services. • Enables you to programmatically script dashboard templates that can be transferred to other AWS accounts. • Simplifies access management and organization with shared and personal folders for |
analytical assets.
• Enables larger data import quotas for SPICE data ingestion and more frequently scheduled data refreshes.

To learn more, see the following video, which contains a two-minute introduction to Amazon QuickSight: Introducing Amazon QuickSight. The audio contains all of the relevant information.

To discover the power of end-to-end BI from AWS, sign up at https://aws.amazon.com/QuickSight.

Starting work with QuickSight

To start work with QuickSight, we recommend that you read the following sections:
• How Amazon QuickSight works – Learn essential terminology and how QuickSight components work together.
• Getting started with Amazon QuickSight data analysis – Complete important setup tasks and learn how to use a dashboard, create an analysis, and publish a dashboard.
• AWS security in Amazon QuickSight – Understand how you can help to secure access to data in QuickSight.

How Amazon QuickSight works

Using Amazon QuickSight, you can access data and prepare it for use in reporting. It saves your prepared data either in SPICE memory or as a direct query. You can use a variety of data sources for analysis. When you create an analysis, the typical workflow looks like the following:
1. Create a new analysis.
2. Add new or existing datasets.
3. Choose fields to create the first chart. QuickSight automatically suggests the best visualization.
4. Add more charts, tables, or insights to the analysis. Resize and rearrange them on one or more sheets. Use extended features to add variables, custom controls, colors, additional pages (called sheets), and more.
5. Publish the analysis as a dashboard to share it with other people.

Topics
• Sample data
• Terminology

Sample data

To get a first look at how QuickSight works, you can explore Amazon QuickSight using the following sample data:
• B2B sales data
• Business overview data (revenue)
• ML insights data
• People overview data (human resources)
• Sales pipeline data
• Web and social media analytics data (marketing)

Also, a variety of datasets are available free online that you can use with Amazon QuickSight, for example the AWS public datasets. These datasets come in a variety of formats.

Terminology

The following are some important terms that you will encounter in this guide.

Data preparation

Data preparation is the process of transforming data for use in an analysis. This includes making changes like the following:
• Filtering out data so that you can focus on what's important to you.
• Renaming fields to make them easier to read.
• Changing data types so that they are more useful.
• Adding calculated fields to enhance analysis.
• Creating SQL queries to refine data. SPICE SPICE (Super-fast, Parallel, In-memory Calculation Engine) is the robust in-memory engine that QuickSight uses. SPICE is engineered to rapidly perform advanced calculations and serve data. The storage and processing capacity available in SPICE speeds up the analytical queries that you run Sample data 5 Amazon QuickSight User Guide against your imported data. By using SPICE, you save time because you don't need to retrieve the data every time that you change an analysis or update a visual. Data analysis A data analysis is the basic workspace for creating data visualizations, which are graphical representations of your data. Each analysis contains a collection of visualizations that you arrange and customize. Data visualization A data visualization, also known as a visual, is a graphical representation of data. There are many types of visualizations, including diagrams, charts, graphs, and tables. All visuals begin in AutoGraph mode, which automatically selects the best type of visualization for the fields that you select. You can also take control and choose your own visuals. You can enhance your analytics by applying filters, changing colors, adding parameter controls, custom click actions, and more. Machine learning Machine learning (ML) Insights propose narrative add-ons that are based on an evaluation of your data. You can choose one from the list, for example forecasting or anomaly (outlier) detection. Or you can create your own. You can combine insight calculations, narrative text, colors, images, and conditions that you define. Sheet A sheet is a page that displays a set |
of visualizations and insights. You can imagine this as a sheet from a newspaper, except that it's filled with charts, graphs, tables, and insights. You can add more sheets, and make them work separately or together in your analysis.

Dashboard

A dashboard is the published version of an analysis. You can share it with other users of Amazon QuickSight for reporting purposes. You specify who has access and what they can do with the dashboard.

Setting up for Amazon QuickSight

This section includes the essential setup tasks, such as signing up for an AWS account, creating an administrative user, integrating your account with IAM Identity Center, assigning access to users, and subscribing to the Amazon QuickSight service.

Topics
• Complete initial configuration tasks
• Integrating with IAM Identity Center
• Signing up for an Amazon QuickSight subscription

Complete initial configuration tasks

To use Amazon QuickSight, you must first complete the following tasks:

Topics
• Sign up for an AWS account
• Create a user with administrative access

Sign up for an AWS account

If you do not have an AWS account, complete the following steps to create one.

To sign up for an AWS account
1. Open https://portal.aws.amazon.com/billing/signup.
2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access.

AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account.

Create a user with administrative access

After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks.

Secure your AWS account root user
1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using the root user, see Signing in as the root user in the AWS Sign-In User Guide.
2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide.

Create a user with administrative access
1. Enable IAM Identity Center.
For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. Create a user with administrative access 8 Amazon QuickSight User Guide For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying least- privilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Integrating with IAM Identity Center IAM Identity Center helps you securely create or connect your workforce identities and manage their access across AWS accounts and applications. Before you integrate your Amazon QuickSight account with IAM Identity Center, |
set up IAM Identity Center in your AWS account. If you haven't set up IAM Identity Center in your AWS organization, see Getting started in the AWS IAM Identity Center User Guide. If you want to configure an external identity provider with IAM Identity Center, see Supported identity providers to view a list of supported identity providers' configuration steps.

Signing up for an Amazon QuickSight subscription

When you first sign up for Amazon QuickSight, you get a free trial subscription for four users for 30 days. During the process of signing up, you choose which edition of QuickSight to use and set options for your identity provider.

Before you begin, make sure that you can connect to an existing AWS account. If you don't have an AWS account, see Complete initial configuration tasks. The person who signs up for QuickSight needs to have the correct AWS Identity and Access Management (IAM) permissions. For more information, see IAM policy examples for Amazon QuickSight. To test your permissions, you can use the IAM policy simulator; for more information, see Testing IAM policies with the IAM policy simulator. Also, check whether your AWS account is part of an organization based on the AWS Organizations service. If so, and you sign in as an IAM user, make sure that you didn't inherit any IAM permissions that deny access to the required permissions. For more information on Organizations, see What is AWS Organizations?

To subscribe to QuickSight
1. Sign in to your AWS account and open QuickSight from the AWS Management Console. You can find it under Analytics or by searching for QuickSight. Your AWS account number is displayed for verification purposes.
2. Choose Sign up for QuickSight.
3. Choose Standard or Enterprise.
a. If you choose Standard, choose the method that you want to connect with. Choose one of the following:
• Use IAM federated identities and QuickSight-managed users.
• Use IAM federated identities only.
b. If you choose Enterprise, choose Continue, and then choose the identity method that you want to connect with. Choose one of the following:
• (Recommended) Use IAM Identity Center enabled application. This option is only available for Amazon QuickSight Enterprise Edition accounts.
• Use Active Directory
• Use IAM federated identities and QuickSight-managed users
• Use IAM federated identities only
To sign up for a QuickSight Enterprise Edition account with an IAM Identity Center enabled application, you need the correct permissions. For more information on the permissions needed to use this method, see IAM identity-based policies for Amazon QuickSight: All access for Enterprise edition with IAM Identity Center.
To sign up for QuickSight with federated users, you need the correct IAM permissions, defined as follows: Signing up for a subscription 10 Amazon QuickSight User Guide • To use role-based federation (that is, single sign-on, or IAM Identity Center) with QuickSight Standard Edition or QuickSight Enterprise Edition, see IAM identity-based policies for Amazon QuickSight: All access for Standard edition. • To use Microsoft Active Directory with QuickSight Enterprise Edition, see IAM identity- based policies for Amazon QuickSight: all access for Enterprise edition with Active Directory. QuickSight Standard Edition doesn't work with Active Directory. After you finish creating an Enterprise Edition account in Amazon QuickSight, you can add a subscription to Paginated Reports from the Manage subscriptions page of the Manage QuickSight menu. For more information on paginated reports, see Working with paginated reports in Amazon QuickSight. 4. For both Standard and Enterprise editions, make choices for the following items: • Enter a unique account name for QuickSight. Your account name can only contain characters (A–Z and a–z), digits (0–9), and hyphens (-). Note that if your account begins with the characters D- or d-, an error occurs. If you use Microsoft AD, and it has a default alias, this alias is used for the account name. • Enter a notification email address for the QuickSight account owner or group. This email address receives service and usage notifications. • (Optional) Choose the AWS Region that you want to use for your initial data storage capacity, called SPICE. • (Optional) Choose whether to allow autodiscovery of your AWS resources. You can |
name can only contain characters (A–Z and a–z), digits (0–9), and hyphens (-). Note that if your account name begins with the characters D- or d-, an error occurs. If you use Microsoft AD, and it has a default alias, this alias is used for the account name.
• Enter a notification email address for the QuickSight account owner or group. This email address receives service and usage notifications.
• (Optional) Choose the AWS Region that you want to use for your initial data storage capacity, called SPICE.
• (Optional) Choose whether to allow autodiscovery of your AWS resources. You can change these options later in Manage Account. For more information, see Allowing autodiscovery of AWS resources.
• (Optional) For IAM Role, choose Use an existing role, and then from the list choose a role that you want to use. Or enter the IAM Amazon Resource Name (ARN) in the following format: arn:aws:iam::account-id:role/path/role-name.

Note
Make sure to have your administrator give you permissions to pass any existing IAM roles in QuickSight. If you don't have permissions, or if you don't know if you have permissions, choose QuickSight-managed role. This is the default role. You can always switch to using a different role later if you have the correct permissions. For more information, see Using an existing IAM role in QuickSight.

5. Review the choices that you made, then choose Finish.
6. (Optional) To open QuickSight, choose Go to QuickSight. If you're using Enterprise edition, you can manage user groups by choosing Manage access to QuickSight. Otherwise, close the browser and notify your users how to connect.
7. (Optional) If you're using IAM Identity Center or federation, choose the users and groups that are going to use QuickSight.

Getting started with Amazon QuickSight data analysis

Use the topics in this section to create your first analysis. You can use sample data to create either a simple or a more advanced analysis. Or you can connect to your own data to create an analysis. Before you create your first analysis, make sure to complete the steps in Setting up for Amazon QuickSight.

Topics
• Signing in to Amazon QuickSight
• Quick start: Create an Amazon QuickSight analysis with a single visual using sample data
• Tutorial: Create an Amazon QuickSight dashboard using sample data
• Using the Amazon QuickSight console

Signing in to Amazon QuickSight

You can sign in to Amazon QuickSight multiple ways, depending on what your QuickSight administrator has set up. You can sign in to QuickSight using AWS root, AWS Identity and Access Management (IAM), corporate Active Directory, or your native QuickSight credentials. If your QuickSight account is integrated with an identity provider such as Okta, the following procedures don't apply to you.

If you're a QuickSight administrator, make sure to allow-list the following domains within your organization's network.

User type – Domain or domains to allow-list
• Users who sign in directly through QuickSight and Active Directory users – signin.aws and awsapps.com
• AWS root user – signin.aws.amazon.com and amazon.com
• IAM users – signin.aws.amazon.com

Important
We strongly recommend that you don't use the AWS root user for your everyday tasks, even the administrative ones.
Instead, adhere to the best practice of using the root user only to create your first IAM user. Then securely lock away the root user credentials and use them to perform only a few account and service management tasks. For more information, see AWS account root user in the IAM User Guide. How to sign in to Amazon QuickSight Use the following procedure to sign in to QuickSight. To sign in to QuickSight 1. Go to https://quicksight.aws.amazon.com/. 2. For QuickSight account name, enter your account name. This is the name that was created when the QuickSight account was created in AWS. If you were invited to the QuickSight account by email, you can find the account name inside of that email. If you don't have the email that invited you to QuickSight, ask the QuickSight administrator in your organization for the information that you need. How to sign in to QuickSight 14 Amazon QuickSight User Guide You can also find your QuickSight account name at the top of the menu at upper-right on the QuickSight console. In some cases, you might not have access to your QuickSight account or have an administrator who can provide this information, or both. If so, contact AWS Support and open a ticket that includes your AWS customer ID. How to sign in to QuickSight 15 Amazon QuickSight User Guide 3. For Username, enter your QuickSight user name. User names that contain a semicolon (;) aren't supported. Choose one of the following: • For organizational users – |
You can also find your QuickSight account name at the top of the menu at upper-right on the QuickSight console. In some cases, you might not have access to your QuickSight account or have an administrator who can provide this information, or both. If so, contact AWS Support and open a ticket that includes your AWS customer ID.

3. For Username, enter your QuickSight user name. User names that contain a semicolon (;) aren't supported. Choose one of the following:
• For organizational users – The user name is provided by your administrator. Your account can be based on IAM credentials or your email address if it's a root email address. Or it can be used as the user name to invite you into the QuickSight account. If you received an invitation email from another Amazon QuickSight user, it indicates what type of credentials to use.
• For individual users – The user name that you created for yourself. This is usually the IAM credentials that you created.

The remaining steps vary depending on the user type that you sign in as (directly through QuickSight or as an Active Directory user, AWS root user, or IAM user). For more information, see the following sections.

Finishing QuickSight sign-in as a QuickSight or Active Directory user

If you're signing in directly through QuickSight or are using your corporate Active Directory credentials, you're redirected to signin.aws after you enter your account name and user name. Use the following procedure to finish signing in.

To finish signing in to QuickSight if you sign in directly through QuickSight or use Active Directory credentials
1. For Password, enter your password. Passwords are case-sensitive and must be 8–64 characters in length. They must also contain each of the following:
• Lowercase letters (a–z)
• Uppercase letters (A–Z)
• Numbers (0–9)
• Nonalphanumeric characters (~!@#$%^&*_-+=`|\(){}[]:;"'<>,.?/)
2. If your account has multi-factor authentication enabled, for MFA code, enter the multi-factor authentication code that you receive.
3. Choose Sign in.

Finishing QuickSight sign-in as an AWS root user

If you're signing in as an AWS root user, you're redirected to signin.aws.amazon.com (or amazon.com) to complete the sign-in process. Your user name is prefilled. Use the following procedure to finish signing in.

To finish signing in as an AWS root user
1. Choose Next.
2. For Password, enter your password. For more information about root user passwords, see Changing the AWS account root user password in the IAM User Guide.
3. Choose Sign in.

Finishing QuickSight sign-in as an IAM user

If you're signing in as an IAM user, you're redirected to signin.aws.amazon.com (or amazon.com) to complete the sign-in process. Your user name is prefilled. Use the following procedure to finish signing in.

To finish signing in as an IAM user
1. For Password, enter your password. For more information about IAM user passwords, see Default password policy in the IAM User Guide.
2. Choose Sign in.
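If you have AWS CLI access, you can also look up the sign-in details described above programmatically. The following is a minimal sketch, not part of the console procedure: the account ID 111122223333 and the Region us-east-1 are placeholders that you replace with your own values, and your IAM identity needs permissions for the quicksight:DescribeAccountSubscription and quicksight:ListUsers actions.

# Returns the QuickSight account name, edition, and notification email
aws quicksight describe-account-subscription \
    --aws-account-id 111122223333 \
    --region us-east-1

# Lists the QuickSight user names registered in the default namespace
aws quicksight list-users \
    --aws-account-id 111122223333 \
    --namespace default \
    --region us-east-1

The account name returned by the first command should match the account name that you enter on the QuickSight sign-in page, and the user list can help you confirm which user name to use.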
How to sign in to QuickSight 21 Amazon QuickSight User Guide If your sign-in process happens automatically and you need to use a different account, use a private or incognito browser window. Doing this prevents the browser from reusing cached settings. Quick start: Create an Amazon QuickSight analysis with a single visual using sample data With the following procedure, you use the Web and Social Media Analytics sample dataset to create an analysis containing a line chart visual. This visual shows the count by month of people that have added themselves to the mailing list. To create an analysis containing a line chart visual using a sample dataset 1. On the Amazon QuickSight start page, choose New analysis. If you don't have the sample data, you can download it from web-and-social-analytics.csv.zip. Unzip the file so you can use the .csv file. To upload the sample data, do the following: a. Choose New dataset. You can also add a new dataset from the Datasets page. To do this, choose Datasets, and then choose New dataset. b. Choose Upload a file. c. Choose the sample file, web-and-social-analytics.csv, from your drive. If you don't see it, check that you unzipped the web-and-social-analytics.csv.zip file. d. Confirm file upload settings by choosing Next on the Confirm file upload settings screen. e. f. Choose Visualize on the Data source details screen. Skip the next step. Choosing Visualize brings you to the same screen as the process in Step 2. 2. On |
a. Choose New dataset. You can also add a new dataset from the Datasets page. To do this, choose Datasets, and then choose New dataset.
b. Choose Upload a file.
c. Choose the sample file, web-and-social-analytics.csv, from your drive. If you don't see it, check that you unzipped the web-and-social-analytics.csv.zip file.
d. Confirm file upload settings by choosing Next on the Confirm file upload settings screen.
e. Choose Visualize on the Data source details screen.
f. Skip the next step. Choosing Visualize brings you to the same screen as the process in Step 2.
2. On the Datasets page, choose the Web and Social Media Analytics dataset, and then choose Use in Analysis at upper right.
3. In the Data pane, choose Date, and then choose Mailing list adds.

Amazon QuickSight uses AutoGraph to create the visual, selecting the visual type that it determines is most compatible with those fields. In this case, it selects a line chart that shows mailing list adds by day, which is the date granularity default.

4. Navigate to the Field wells at the bottom of the Visuals pane.
5. Choose the X axis field well, choose Aggregate, and then choose Month. The line chart updates to show mailing list adds by month, rather than by the default of by year.

Tutorial: Create an Amazon QuickSight dashboard using sample data

Use the procedures in the following sections to complete these tasks:
• Create and prepare a Marketing dataset using the Web and Social Media Analytics sample data.
• Create a Marketing analysis and add several visuals to it.
• Modify the visuals in the analysis, including the following:
  • Adding another measure to an existing visual
  • Changing chart colors
  • Changing date granularity
  • Changing the size and layout of the visuals
  • Applying a filter
• Publish a dashboard based on the analysis.

Topics
• Tutorial: Create a prepared Amazon QuickSight dataset
• Tutorial: Create an Amazon QuickSight analysis
• Tutorial: Modify Amazon QuickSight visuals
• Tutorial: Create an Amazon QuickSight dashboard

Tutorial: Create a prepared Amazon QuickSight dataset

Use the following procedure to prepare the Marketing dataset and create an analysis. If you don't see the Web and Social Media Analytics sample data already in Amazon QuickSight, you can download it: web-and-social-analytics.csv.zip.

To prepare the Marketing dataset and create an analysis
1. On the Amazon QuickSight start page, choose Datasets at left.
2. On the Datasets page, choose New dataset.
3. In the FROM EXISTING DATA SOURCES section of the Create a Data Set page, choose the Web and Social Media Analytics Amazon S3 data source and then choose Edit dataset. Amazon QuickSight opens the data preparation page.
4. For Dataset Name, enter Marketing Sample to replace Web and Social Media Analytics for the dataset name.
5. Exclude some fields from the dataset.
In the Fields pane, choose the field menu for the Twitter followers cumulative and Mailing list cumulative fields, and then choose Exclude field. To select more than one field at a time, press the Ctrl key while you select (Command key on Mac).
6. Rename a field. In the Dataset preview pane, scroll to the Website Pageviews field and choose the edit icon. In the Edit field page that opens, for Name, enter Website page views, and then choose Apply.
7. Add a calculated field that substitutes a text string for any 0-length string value in the Events field:
a. On the data preparation page, scroll to the top of the Fields pane, and then choose Add calculated field.
b. In the Add calculated field page that opens, for Add name, enter populated_event.
c. In the Functions pane at right, double-click the ifelse function from the list of functions. This adds the function to the calculated field formula.
d. Expand the Field list pane by choosing the drop-down arrow, and then double-click the Events field. This adds the field to the calculated field formula.
e. In the formula editor, enter the additional functions and parameters that are required, shown in bold in the following: ifelse(strlen({Events})=0, 'Unknown',
{Events}). The final formula should be as follows: ifelse(strlen({Events})=0, 'Unknown', {Events}).
f. Choose Save. The new calculated field is created, and appears at the top of the Fields pane.
8. Choose Save.

Next steps

Create an analysis by using the procedure in Tutorial: Create an Amazon QuickSight analysis.

Tutorial: Create an Amazon QuickSight analysis

In the following short tutorial, you create an analysis, add a visual using AutoGraph, and add another visual by choosing a specific visual type. This procedure builds on the dataset that you create and prepare in Tutorial: Create a prepared Amazon QuickSight dataset.

Create your analysis

Use the following procedure to create your analysis.

To create your analysis
1. On the Amazon QuickSight start page, choose New analysis.
2. On the Datasets page, choose the Business Review sample dataset, and then choose Create Analysis.

Create a visual by using AutoGraph

Create a visual by using AutoGraph, which is selected by default. On the analysis page, choose Date and Return visitors in the Fields list pane. Amazon QuickSight creates a line chart using this data.

Create a scatter plot visual

Create a visual by choosing a visual type and dragging fields to the field wells.

To create a scatter plot visual
1. On the analysis page, choose Add and then Add visual on the application bar. A new, blank visual is created, and AutoGraph is selected by default.
2. In the Visual types pane, choose the scatter plot icon.
3. Choose fields in the Fields list pane to add to the Field wells pane:
• Choose Desktop Uniques to populate the X axis field well.
• Choose Mobile Uniques to populate the Y axis field well.
• Choose Date to populate the Group/Color field well.
A scatter plot is created using these fields.

Next steps

Modify the visuals in your analysis by using the procedure in Tutorial: Modify Amazon QuickSight visuals.

Tutorial: Modify Amazon QuickSight visuals

Use the following procedures to modify the visuals that you created using the procedures in Tutorial: Create an Amazon QuickSight analysis.

Modify the line chart visual

Modify your line chart visual by making it show an additional measure by date, and also by changing the chart color.

To modify your line chart visual
1. In your analysis, select the line chart visual.
2. Add another measure to the visual. Select the New visitors SEO field in the Fields list pane. This measure is added to the Value field well, and the line chart updates with a line to represent it. The visual title also updates.
Tutorial: Modify visuals 37 Amazon QuickSight User Guide 3. Change the color of the line used to represent the Return visitors measure. Choose the line on the chart that represents Return visitors. To do this, choose the end of the line, not the middle of the line. Choose Color Return visitors, and then choose the red icon from the color selector. Tutorial: Modify visuals 38 Amazon QuickSight User Guide 4. Choose the Date field in the X axis field well, choose Aggregate, and then choose Month. Tutorial: Modify visuals 39 Amazon QuickSight User Guide Modify the scatter plot visual Modify your scatter plot visual by changing the data granularity. To modify your scatter chart visual 1. In the analysis, select the scatter plot visual. 2. Choose the Group/Color field well, choose Aggregate, and then choose Month. The scatter plot updates to show the measures by month, rather than by the default of by year. Tutorial: Modify visuals 40 Amazon QuickSight User Guide Modify both visuals by changing visual layout and adding a filter Modify both visuals by changing visual size and location, and by adding a filter and applying it to both of them. Change the visual layout Modify both visuals by changing visual size and location. To modify both visuals 1. In your analysis, select the line chart visual. 2. Choose the resize handle in the lower right corner of the visual and drag up and to the left, |
to show the measures by month, rather than by the default of by year.

Modify both visuals by changing visual layout and adding a filter

Modify both visuals by changing visual size and location, and by adding a filter and applying it to both of them.

Change the visual layout

Modify both visuals by changing visual size and location.

To modify both visuals
1. In your analysis, select the line chart visual.
2. Choose the resize handle in the lower right corner of the visual and drag up and to the left, until the visual is half its former size both horizontally and vertically.
3. Repeat this procedure on the scatter plot visual.
4. Choose the move handle on the scatter plot visual, and drag it up to the right of the line chart visual so that they are side by side.

Modify both visuals by adding a filter

Modify both visuals by adding a filter and applying it to both of them.

To add a filter to both visuals
1. In the analysis, choose the scatter plot visual.
2. Choose Filter at left.
3. On the Filters pane, choose the plus icon, and then choose the Date field to filter on.
4. Choose the new filter to expand it.
5. In the Edit filter pane, for Filter type, choose the After comparison type.
6. Enter a start date value of 1/1/2014. Choose Date, choose 2014 for the year, January for the month, and then choose 1 on the calendar.
7. In the Edit filter pane, choose Apply to apply the filter to the visual. The filter is applied to the scatter plot visual. This is indicated with a filter icon on the visual drop-down menu.
8. Apply the filter to the line chart visual. In the Filter pane at left, choose the Date filter again and choose Single visual, and then choose All visuals of this dataset. The filter is applied to the line chart visual as well.

Next steps

Create a dashboard from your analysis by using the procedure in Tutorial: Create an Amazon QuickSight dashboard.

Tutorial: Create an Amazon QuickSight dashboard

Use the following procedure to create a dashboard from the analysis that you created using the procedure in Tutorial: Create an Amazon QuickSight analysis.

To create a dashboard from your analysis
1. In your analysis, choose Publish in the application bar at upper right.
2. In the Publish dashboard page that opens, choose Publish new dashboard as, and enter the name Marketing Dashboard.
3. Choose Publish dashboard. The dashboard is now published.
4. On the Share dashboard page that opens, choose the X icon to close it. You can share the dashboard later by using the sharing option on the dashboard page.

Using the Amazon QuickSight console

In the following topic, you can find a brief introduction to using the Amazon QuickSight user interface.

Topics
• Using the Amazon QuickSight menu and landing page
• Creating an analysis
• Searching Amazon QuickSight
• Choosing a language in Amazon QuickSight
• Using the Amazon QuickSight mobile app

Using the Amazon QuickSight menu and landing page

After you sign in to Amazon QuickSight, you see the Amazon QuickSight landing page.
This page provides tabs for your analyses, your dashboards, and our tutorial videos. It also provides a menu bar at the top, with options for the following: • Searching Amazon QuickSight • Choosing the AWS Region that you want to work in • Accessing your user profile (community, language selection, and help) • Creating a new analysis • Managing data Note Consult your administrator before changing your AWS Region. Your default AWS Region is configured by your Amazon QuickSight administrator. Changing the AWS Region changes where your work is stored. Using the Amazon QuickSight menu and landing page 48 Amazon QuickSight Viewing videos User Guide • To view videos about Amazon QuickSight, choose your user name at the upper-right of any page, and then choose Tutorial videos. Choose a video to play it. Accessing user profiles • To access the user profile menu, choose your user icon at the upper right of any page in Amazon QuickSight. Use this menu to manage Amazon QuickSight features, visit the community, send product feedback, choose a language, get help from the documentation, or sign out of Amazon QuickSight. Using the Amazon QuickSight menu and landing page 49 Amazon QuickSight User Guide The following options are available |
from the user profile menu:
• Manage QuickSight – If you have appropriate permissions, you can access administrative functions such as managing users, subscriptions, SPICE capacity, and account settings.
• Community – Choose this option to visit the Amazon QuickSight online community.
• Send feedback – This is your direct connection to the product team. Use this simple form to report problems, request features, or tell us how you are using Amazon QuickSight.
• What's new – Find out what new features are available in Amazon QuickSight.
• Language setting – Choose the language that you want to use in the Amazon QuickSight user interface.
• Region setting – Choose the AWS Region that you want to work in.

Note
Consult your administrator before changing your AWS Region. Your default AWS Region is configured by your Amazon QuickSight administrator. Changing the AWS Region changes where your work is stored.

• Tutorial videos – This opens the Tutorial videos page, where you can watch videos about Amazon QuickSight.
• Help – This opens the official AWS documentation, which you can view online, on Kindle, or as a PDF.
• Sign out – Choose this option to sign out of Amazon QuickSight and your AWS session.

Viewing dashboards and analyses
1. To see available dashboards, choose Dashboards at left. Choose any dashboard on the page to open it. To see available analyses, choose Analyses at left. This is the default page when Amazon QuickSight opens. Choose any analysis to open it.
2. To see your list of favorite dashboards and analyses, choose Favorites. You can add items to your favorites by selecting the star near the title of the dashboard or analysis, so that the star is filled in. Clear the star to remove the item from your favorites.

Creating an analysis

Follow these steps to create an analysis in Amazon QuickSight.
1. To create a new analysis, choose New analysis, near the top left. This takes you to Datasets. Choose one to start analyzing it.
2. To see current datasets, or to create a new dataset, choose Datasets. This takes you to the Datasets page, which displays the datasets that you have access to. (If they don't all fit on one page, you can navigate between pages.) From here, you can choose a dataset to analyze.
3. To create a new dataset, choose New dataset. From here, you can upload a file, or you can create a new dataset based on a data source (a connection to external data). Icons for new data sources are at the top of the screen under From new data sources. Icons for existing data sources are displayed below them, under From existing data sources.
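If you prefer to script an inventory of the same building blocks that the console shows, the QuickSight API can list them. The following AWS CLI sketch is an illustration rather than part of the console procedure: the account ID 111122223333 and the Region us-east-1 are placeholders, and your identity needs permissions for the corresponding quicksight List actions.

# List the datasets that you could choose for a new analysis
aws quicksight list-data-sets \
    --aws-account-id 111122223333 \
    --region us-east-1

# List existing data source connections (the icons under From existing data sources)
aws quicksight list-data-sources \
    --aws-account-id 111122223333 \
    --region us-east-1

# List the analyses that already exist in the account
aws quicksight list-analyses \
    --aws-account-id 111122223333 \
    --region us-east-1

Creating a dataset or an analysis through the API (for example, with the CreateDataSet and CreateAnalysis operations) requires a more detailed request body, so the console flow described above is usually the simpler starting point.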
Searching Amazon QuickSight

From the search bar, you can search for analyses and dashboards. To use the search tool, go to the Start Page and choose the search box at the top-left of the page. Then enter the name, or part of the name, of the dataset, analysis, or dashboard that you want to find. The search is not case-sensitive. After you locate the item that you're looking for, you can open it directly from the search results. You can modify a dataset, create an analysis from a dataset, or access an analysis or dashboard. Choose an item from the search results to open it.

Choosing a language in Amazon QuickSight

You can choose the language that you want to use in the Amazon QuickSight user interface. This option is set separately for each individual user. The first time a user signs in, Amazon QuickSight detects and selects a suitable language. This choice is based on the user's browser preferences and interactions with localized AWS websites.

Amazon QuickSight supports the following languages (official name, language code, localized name):

• Dansk (da) – Danish
• Deutsch (de) – German
• English (en) – English
• Español (es) – Spanish
• Français (fr) – French
• Italiano (it) – Italian
• Nederlands (nl) – Dutch
• Norsk (nb) – Norwegian
• Português (pt) – Portuguese
• Suomi (fi) – Finnish
• Svenska (sv) – Swedish
• 日本語 (ja) – Japanese
• 한국어 (ko) – Korean
• 中文 (简体) (zh-CN) – Simplified Chinese
• 中文 (繁體) (zh-TW) – Traditional Chinese

Choosing a language translates only user interface elements. It doesn't translate the following:

• Amazon QuickSight reserved keywords
• User input
• Data
• Date or number formats
• ML Insights, suggested insights, or computations in narratives (including text)

Use the following procedure to change the language in the Amazon QuickSight interface.

1. Choose your user name at top right.
2. To open the language options menu, choose the > symbol near the current language.
3. Choose the language that you want to use.
To begin using the QuickSight Mobile app, do one of the following: • Download the iOS version from the iOS app store • Download the Android version from google play Using the Amazon QuickSight mobile app 56 Amazon QuickSight User Guide Connecting to data in Amazon QuickSight People in many different roles use Amazon QuickSight to help them do analysis and advanced calculations, design data dashboards, embed analytics, and make better-informed decisions. Before any of that can happen, someone who understands your data needs to add it to a QuickSight dataset. QuickSight supports direct connections and uploads from a variety of data sources. Capabilities and use cases QuickSight Standard edition capabilities After your data is available in QuickSight Standard edition, you can do the following: • Transform the dataset with field formatting, hierarchies, data type conversions, and calculations. • Create one or more data analyses based on your newly created dataset. • Share your analysis with other people so they can help design it. • Add charts, graphs, more datasets, and multiple pages (called sheets) to your data analysis. • Create visual appeal with customized formatting and themes. • Make them interactive by using parameters, controls, filters, and custom actions. • Combine data from multiple data sources, and then build new hierarchies for drilling down and calculations only available during analytics, like aggregations, window functions, and more. • Publish your analysis as an interactive data dashboard. • |
Share the dashboard so other people can use the dashboard, even if they don't use the analysis that it's based on.
• Add more data to create more analyses and dashboards.

QuickSight Enterprise edition capabilities

After your data is available in QuickSight Enterprise edition, you can do different things depending on your role. If you can build datasets, design analyses, and publish dashboards, you can do all of the things people using Standard edition can do. In addition, these are some examples of additional tasks that you can do:

• Create analyses that use QuickSight insights, including machine learning (ML) powered insights for forecasting, anomaly and outlier detection, and key driver identification.
• Design narrative insights with text, colors, images, and calculations.
• Add data from virtual private clouds (VPCs) and on-premises data sources, with data encryption at rest.
• Control access in datasets by adding row and column level security.
• Refresh imported datasets every hour.
• Share emailed reports.

Application development

If you develop applications or use the AWS SDKs and AWS Command Line Interface (AWS CLI), you can do the following and more:

• Add embedded analytics and embedded interactive dashboards to websites and applications.
• Use API operations to manage data sources and datasets.
• Refresh imported data more frequently by using the data ingestion API operations, as shown in the example after this section.
• Script, transfer, and make templates from analyses and dashboards by using API operations.
• Programmatically assign people to security roles based on settings managed by system administrators.

Administrative functions in QuickSight

If you perform administrative functions in QuickSight, you can do the following and more:

• Manage security with shared folders to organize your teams' work and help them collaborate using dashboards, analytics, and datasets.
• Add QuickSight to your VPC to enable access to data in VPC and on-premises data sources.
• Protect sensitive data with fine-grained access control to AWS data sources.
• Manually assign people to the QuickSight author security role so they can prepare datasets, design analytics, and publish data dashboards at a fixed cost per month.
• Manually assign people to the QuickSight reader security role so they can securely interact with published data dashboards on a pay-per-session basis.

Dashboard subscription

If you subscribe to dashboards, you can do the following:

• Use and subscribe to interactive dashboards designed by your team of experts.
• Enjoy a simplified, uncluttered interface.
• View dashboard snapshots in email.
• Focus on making decisions with the data at your fingertips.

After you connect to or import data, you create a dataset to shape and prepare data to share and reuse.
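Returning to the Application development list above, the data ingestion API operations can be called from the AWS CLI as well as from the SDKs. The following is a minimal sketch of starting and then checking a SPICE refresh; the account ID, dataset ID, and ingestion ID are placeholder values, not values taken from this guide.

# Queue a new SPICE refresh (ingestion) for a dataset
aws quicksight create-ingestion \
 --aws-account-id 111122223333 \
 --data-set-id sales-dataset-id \
 --ingestion-id nightly-refresh-001

# List recent ingestions for the same dataset to confirm the refresh completed
aws quicksight list-ingestions \
 --aws-account-id 111122223333 \
 --data-set-id sales-dataset-id

The first command queues the refresh, and the second returns the status of recent ingestions for the dataset.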
You can view your available datasets on the Datasets page, which you reach by choosing Manage data on the Amazon QuickSight start page. You can view available data sources and create a new dataset on the Create a Data Set page, which you reach by choosing New data set on the Datasets page. Topics • Supported data sources • Data source quotas • Supported data types and values • Amazon QuickSight Connection examples • Creating datasets • Editing datasets • Reverting datasets back to previous published versions • Duplicating datasets • Sharing datasets • Tracking dashboards and analyses that use a dataset • Using dataset parameters in Amazon QuickSight • Using row-level security in Amazon QuickSight • Using column-level security to restrict access to a dataset • Running queries as an IAM role in Amazon QuickSight • Deleting datasets • Adding a dataset to an analysis • Working with data sources in Amazon QuickSight Supported data sources Amazon QuickSight supports a variety of data sources that you can use to provide data for analyses. The following data sources are supported. Connecting to relational data You can use any of the following relational data stores as data sources for Amazon QuickSight: Supported data sources 59 User Guide Amazon QuickSight • Amazon Athena • Amazon Aurora • Amazon OpenSearch Service • Amazon Redshift • Amazon Redshift Spectrum • Amazon S3 • Amazon S3 Analytics • Apache Spark 2.0 or later • AWS IoT Analytics • Databricks (E2 Platform only) on Spark 1.6 or later, up to version |
3.0
• Exasol 7.1.2 or later
• Google BigQuery
• MariaDB 10.0 or later
• Microsoft SQL Server 2012 or later
• MySQL 5.7 or later

Note
Effective October 2023, the MySQL community has deprecated support for MySQL version 5.7. This means that Amazon QuickSight no longer supports new features, enhancements, bug fixes, or security patches for MySQL 5.7. Support for existing query workloads takes place on a best-effort basis. QuickSight customers can still use MySQL 5.7 datasets with QuickSight, but we encourage customers to upgrade their MySQL databases (DB) to major version 8.0 or higher. To see the statement provided by Amazon RDS, see Amazon RDS Extended Support opt-in behavior is changing. Upgrade your Amazon RDS for MySQL 5.7 database instances before February 29, 2024 to avoid potential increase in charges.
Amazon RDS has updated its security settings for Amazon RDS MySQL 8.3. Any connections from QuickSight to Amazon RDS MySQL 8.3 are SSL-enabled by default. This is the only option available for MySQL 8.3 connections.

• Oracle 12c or later
• PostgreSQL 9.3.1 or later

Note
SCRAM-based authentication to PostgreSQL from Amazon QuickSight is supported for the following connectors: RDS hosted PostgreSQL, Aurora PostgreSQL, and Vanilla PostgreSQL. If the appropriate PostgreSQL engine version is used, and the correct configurations in PostgreSQL for SCRAM are set up, no additional configurations are needed in QuickSight. If you are still experiencing issues establishing a SCRAM authentication to PostgreSQL from QuickSight, please create a support ticket.

• Presto 0.167 or later
• Snowflake
• Starburst
• Trino
• Teradata 14.0 or later
• Timestream

Note
You can access additional data sources not listed here by linking or importing them through supported data sources.

Amazon Redshift clusters, Amazon Athena databases, and Amazon RDS instances must be in AWS. Other database instances must be in one of the following environments to be accessible from Amazon QuickSight:

• Amazon EC2
• Local (on-premises) databases
• Data in a data center or some other internet-accessible environment

For more information, see Infrastructure security in Amazon QuickSight.

Importing file data

You can use files in Amazon S3 or on your local (on-premises) network as data sources. QuickSight supports files in the following formats:

• CSV and TSV – Comma-delimited and tab-delimited text files
• ELF and CLF – Extended and common log format files
• JSON – Flat or semistructured data files
• XLSX – Microsoft Excel files

QuickSight supports UTF-8 file encoding, but not UTF-8 (with BOM).

Files in Amazon S3 that have been compressed with zip, or gzip (www.gzip.org), can be imported as-is.
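Because gzip-compressed files in Amazon S3 can be imported as-is, one common pattern is to compress large text files before uploading them. The following is a minimal sketch using standard tools; the file name, bucket name, and key prefix are placeholders.

# Compress a delimited text file, then upload it to Amazon S3
gzip sales-data.csv
aws s3 cp sales-data.csv.gz s3://amzn-s3-demo-bucket/quicksight/sales-data.csv.gz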
If you used another compression program for files in Amazon S3, or if the files are on your local network, remove compression before importing them. JSON data Amazon QuickSight natively supports JSON flat files and JSON semistructured data files. You can either upload a JSON file or connect to your Amazon S3 bucket that contains JSON data. Amazon QuickSight automatically performs schema and type inference on JSON files and embedded JSON objects. Then it flattens the JSON, so you can analyze and visualize application- generated data. Basic support for JSON flat-file data includes the following: • Inferring the schema • Determining data types • Flattening the data • Parsing JSON (JSON embedded objects) from flat files Support for JSON file structures (.json) includes the following: • JSON records with structures • JSON records with root elements as arrays Importing file data 62 Amazon QuickSight User Guide You can also use the parseJson function to extract values from JSON objects in a text file. For example, if your CSV file has a JSON object embedded in one of the fields, you can extract a value from a specified key-value pair (KVP). For more information on how to do this, see parseJson. The following JSON features aren't supported: • Reading JSON with a structure containing a list of records • List attributes and list objects within a JSON record; these are skipped during import • Customizing upload or configuration settings • parseJSON functions for SQL and analyses • Error messaging for invalid JSON |
• Extracting a JSON object from a JSON structure
• Reading delimited JSON records

You can use the parseJson function to parse flat files during data preparation. This function extracts elements from valid JSON structures and lists. The following JSON values are supported:

• JSON object
• String (double quoted)
• Number (integer and float)
• Boolean
• NULL

Software as a service (SaaS) data

QuickSight can connect to a variety of Software as a Service (SaaS) data sources either by connecting directly or by using Open Authorization (OAuth). SaaS sources that support direct connection include the following:

• Jira
• ServiceNow

SaaS sources that use OAuth require that you authorize the connection on the SaaS website. For this to work, QuickSight must be able to access the SaaS data source over the network. These sources include the following:

• Adobe Analytics
• GitHub
• Salesforce

You can use reports or objects in the following editions of Salesforce as data sources for Amazon QuickSight:

• Enterprise Edition
• Unlimited Edition
• Developer Edition

To connect to on-premises data sources, you need to add your data sources and a QuickSight-specific network interface to Amazon Virtual Private Cloud (Amazon VPC). When configured properly, a VPC based on Amazon VPC resembles a traditional network that you operate in your own data center. It enables you to secure and isolate traffic between resources. You define and control the network elements to suit your requirements, while still getting the benefit of cloud networking and the scalable infrastructure of AWS. For detailed information, see Infrastructure security in Amazon QuickSight.

Data source quotas

Data sources that you use with Amazon QuickSight must conform to the following quotas.

Topics
• SPICE quotas for imported data
• Quotas for direct SQL queries

SPICE quotas for imported data

When you create a new dataset in Amazon QuickSight, SPICE limits the number of rows you can add to a dataset. You can ingest data into SPICE from a query or from a file. Each file can have up to 2,000 columns. Each column name can have up to 127 Unicode characters. Each field can have up to 2,047 Unicode characters. To retrieve a subset of data from a larger set, you can deselect columns or apply filters to reduce the size of the data. If you are importing from Amazon S3, each manifest can specify up to 1,000 files.

Quotas for SPICE are as follows:

• 2,047 Unicode characters for each field
• 127 Unicode characters for each column name
• 2,000 columns for each file
• 1,000 files for each manifest
• For Standard edition, 25 million (25,000,000) rows or 25 GB for each dataset
• For Enterprise edition, 1 billion (1,000,000,000) rows or 1 TB for each dataset

All quotas apply to SPICE datasets with row-level security, as well.
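If you're unsure how close an imported dataset is to the SPICE size quota, you can check how much SPICE capacity it currently consumes. The following AWS CLI sketch uses placeholder account and dataset IDs; at the time of writing, the response includes a ConsumedSpiceCapacityInBytes value for the dataset.

# Describe a SPICE dataset, including its consumed SPICE capacity in bytes
aws quicksight describe-data-set \
 --aws-account-id 111122223333 \
 --data-set-id sales-dataset-id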
In rare cases, if you're ingesting large rows into SPICE, you might reach the quota for gigabytes per dataset before you reach the quota on rows. The size is based on the SPICE capacity the data occupies after ingestion into SPICE. Quotas for direct SQL queries If you aren't importing data into SPICE, different quotas apply for space and time. For operations such as connecting, sampling data for a dataset, and generating visuals, timeouts can occur. In some cases, these are timeout quotas set by the source database engine. In other cases, such as visualizing, Amazon QuickSight generates a timeout after 2 minutes. However, not all database drivers react to the 2-minute timeout, for example Amazon Redshift. In these cases, the query runs for as long as it takes for the response to return, which can result in long-running queries on your database. When this happens, you can cancel the query from the database server to free up database resources. Follow the instructions for your database server about how to do this. For example, for more information on how to cancel queries in Amazon Redshift, see Canceling a query in Amazon Redshift, and Implementing workload management in Amazon Redshift in the Amazon Redshift Database Developer Guide. Quotas for direct SQL queries 65 |
Each result set from a direct query can have up to 2,000 columns. Each column name can have up to 127 Unicode characters. If you want to retrieve data from a larger table, you can use one of several methods to reduce the size of the data. You can deselect columns, or apply filters. In a SQL query, you can also use predicates, such as WHERE and HAVING. If your visuals time out during a direct query, you can simplify your query to optimize execution time, or you can import the data into SPICE.

Quotas for queries are as follows:

• 127 Unicode characters for each column name.
• 2,000 columns for each dataset.
• 2-minute quota for generating a visual, or an optional dataset sample.
• Data source timeout quotas apply (varies for each database engine).

Supported data types and values

Amazon QuickSight currently supports the following primitive data types: Date, Decimal, Integer, and String. The following data types are supported in SPICE: Date, Decimal-fixed, Decimal-float, Integer, and String. QuickSight accepts Boolean values by promoting them to integers. It can also derive geospatial data types. Geospatial data types use metadata to interpret the physical data type. Latitude and longitude are numeric. All other geospatial categories are strings.

Make sure that any table or file that you use as a data source contains only fields that can be implicitly converted to these data types. Amazon QuickSight skips any fields or columns that can't be converted. If you get an error that says "fields were skipped because they use unsupported data types", alter your query or table to remove or recast unsupported data types.

String and text data

Fields or columns that contain characters are called strings. A field with the data type of STRING can initially contain almost any type of data. Examples include names, descriptions, phone numbers, account numbers, JSON data, cities, postal codes, dates, and numbers that can be used to calculate. These types are sometimes called textual data in a general sense, but not in a technical sense. QuickSight doesn't support binary and character large objects (BLOBs) in dataset columns. In the QuickSight documentation, the term "text" always means "string data".

The first time you query or import the data, QuickSight tries to interpret the data that it identifies as other types, for example dates and numbers. It's a good idea to verify that the data types assigned to your fields or columns are correct.

For each string field in imported data, QuickSight uses a field length of 8 bytes plus the UTF-8 encoded character length. Amazon QuickSight supports UTF-8 file encoding, but not UTF-8 (with BOM).

Date and time data

Fields with a data type of Date also include time data, and are also known as Datetime fields.
QuickSight supports dates and times that use supported date formats. QuickSight uses UTC time for querying, filtering, and displaying date data. When date data doesn't specify a time zone, QuickSight assumes UTC values. When date data does specify a time zone, QuickSight converts it to display in UTC time. For example, a date field with a time zone offset like 2015-11-01T03:00:00-08:00 is converted to UTC and displayed in Amazon QuickSight as 2015-11-01T15:30:00. For each DATE field in imported data, QuickSight uses a field length of 8 bytes. QuickSight supports UTF-8 file encoding, but not UTF-8 (with BOM). Numeric data Numeric data includes integers and decimals. Integers with a data type of INT are negative or positive numbers that don't have a decimal place. QuickSight doesn't distinguish between large and small integers. Integers over a value of 9007199254740991 or 2^53 - 1 might not display exactly or correctly in a visual. Decimals with the data type of Decimal are negative or positive numbers that contain at least one decimal place before or after the decimal point. When you choose Direct Query mode, all non- integer decimal types are marked as Decimal and the underlying engine handles the precision of the datapoint based on the data source's supported behaviors. For more information on supported data source types, see Supported data types and values. When you store your |
choice to conduct exact mathematical operations, but QuickSight rounds the value to the nearest ten-thousandth place when the value is ingested into SPICE. For example, the value 1.23456 is stored as 1.2346.

Decimal-float data types provide approximately 16 significant digits of accuracy to a value. The significant digits can be on either side of the decimal point to support numbers with many decimal places and higher numbers at the same time. For example, the Decimal-float data type supports the number 12345.1234567890 or the number 1234567890.12345. If you work with very small numbers that are close to 0, the Decimal-float data type supports up to 15 digits to the right of the decimal point, for example 0.123451234512345. The maximum value that this data type supports is 1.8 * 10^308 to minimize the probability of an overflow error with your dataset. The Decimal-float data type is inexact, and some values are stored as approximations instead of the real value. This may result in slight discrepancies when you store and return some specific values.

The following considerations apply to the Decimal-float data type.

• If the dataset that you're using comes from an Amazon S3 data source, SPICE assigns the Decimal-float decimal type to all numeric decimal values.
• If the dataset that you're using comes from a database, SPICE uses the decimal type that the value is assigned in the database. For example, if the value is assigned a fixed-point numeric value in the database, the value will be a Decimal-fixed type in SPICE.

For existing SPICE datasets that contain fields that can be converted to the Decimal-float data type, a pop-up appears on the Edit dataset page. To convert fields of an existing dataset to the Decimal-float data type, choose UPDATE FIELDS. If you don't want to opt in, choose DO NOT UPDATE FIELDS. The Update fields pop-up appears every time you open the Edit dataset page until the dataset is saved and published.

Supported data types from external data sources

The following table lists data types that are supported when using the following data sources with Amazon QuickSight.
Supported data types from external data sources 69 Amazon QuickSight Database engine or source Amazon Athena, Presto, Starburst, Trino Amazon Aurora, MariaDB, and MySQL Numeric data types String data types Datetime data types Boolean data types User Guide • boolean date timestamp date datetime timestamp year • • • • • • • • • • • • • • • • • • • • • • bigint decimal double integer real smallint tinyint bigint decimal double int char varchar char enum set text • • • • • • • integer varchar mediumint numeric smallint tinyint Supported data types from external data sources 70 Amazon QuickSight Database engine or source Amazon OpenSearch Service Oracle Numeric data types String data types Datetime data types Boolean data types User Guide • string (keyword string field type in OpenSearch Service) • ip • char • nchar • timestamp • • boolean binary • date bit • datetime • nvarchar • datetime2 • text • varchar • datetimeo ffset • smalldatetime byte integer long float • • • • • double • bigint • decimal • decimal • int • money • numeric • real • smallint • smallmoney • tinyint Supported data types from external data sources 71 Amazon QuickSight Database engine or source PostgreSQL Apache Spark Numeric data types String data types Datetime data types Boolean data types User Guide • boolean date timestamp • boolean date timestamp • • • • • • • • • • • • • • • • • • • bigint decimal double integer numeric precision real smallint char character text varchar varying character • • • • • • bigint varchar decimal double integer real smallint tinyint Supported data |
amazon-quicksight-user-021 | amazon-quicksight-user.pdf | 21 | • money • numeric • real • smallint • smallmoney • tinyint Supported data types from external data sources 71 Amazon QuickSight Database engine or source PostgreSQL Apache Spark Numeric data types String data types Datetime data types Boolean data types User Guide • boolean date timestamp • boolean date timestamp • • • • • • • • • • • • • • • • • • • bigint decimal double integer numeric precision real smallint char character text varchar varying character • • • • • • bigint varchar decimal double integer real smallint tinyint Supported data types from external data sources 72 Amazon QuickSight Database engine or source Snowflake Numeric data types String data types Datetime data types Boolean data types User Guide • boolean char character string text • • • • • date datetime time timestamp • • • • • varchar timestamp_* • • • • • • • • • • • • • • • bigint byteint decimal double doublepre cision float float4 float8 int integer number numeric real smallint tinyint Supported data types from external data sources 73 Amazon QuickSight Database engine or source Microsoft SQL Server Numeric data types String data types Datetime data types Boolean data types User Guide • • • • • • • • • • bigint bit decimal int char nchar nvarchar text • • • • • money varchar • bit • • • • date datetime datetime2 smalldatetime numeric real smallint smallmoney tinyint Supported date formats Amazon QuickSight supports the date and time formats described in this section. Before you add data to Amazon QuickSight, check if your date format is compatible. If you need to use an unsupported format, see Using unsupported or custom dates. The supported formats vary depending on the data source type, as follows: Data source File uploads Amazon S3 sources Clocks Date formats Both 24- hour and Supported date and time formats are described in the Joda API documentation. Supported data types from external data sources 74 Amazon QuickSight Data source Athena Salesforce Clocks Date formats 12-hour For a complete list of Joda date formats, see clocks Class DateTimeFormat on the Joda website. User Guide For datasets stored in memory (SPICE), Amazon QuickSight supports dates in the following range: Jan 1, 1400 00:00:00 UTC through Dec 31, 9999, 23:59:59 UTC . Supported data types from external data sources 75 Amazon QuickSight User Guide Data source Clocks Date formats Relational databases sources 24-hour clock only The following data and time formats: dd/MM/yyyy HH:mm:ss , for example 31/12/2016 15:30:00. dd/MM/yyyy , for example 31/12/2016. dd/MMM/yyyy HH:mm:ss , for example 31/ DEC/2016 15:30:00. dd/MMM/yyyy , for example 31/DEC/2016. dd-MMM-yyyy HH:mm:ss , for example 31- DEC-2016 15:30:00. dd-MMM-yyyy , for example 31-DEC-2016. dd-MM-yyyy HH:mm:ss , for example 31-12-2016 15:30:00. dd-MM-yyyy , for example 31-12-2016. 1. 2. 3. 4. 5. 6. 7. 8. 9. MM/dd/yyyy HH:mm:ss , for example 12/31/2016 15:30:00. 10. MM/dd/yyyy , for example 12/31/2016. 11. MM-dd-yyyy HH:mm:ss , for example 12-31-2016 15:30:00. 12. MM-dd-yyyy , for example 12-31-2016. 13. MMM/dd/yyyy HH:mm:ss , for example DEC/31/2016 15:30:00. 14. MMM/dd/yyyy , for example DEC/31/2016. Supported data types from external data sources 76 Amazon QuickSight User Guide Data source Clocks Date formats 15. MMM-dd-yyyy HH:mm:ss , for example DEC-31-2016 15:30:00. 16. MMM-dd-yyyy , for example DEC-31-2016. 17. yyyy/MM/dd HH:mm:ss , for example 2016/12/31 15:30:00. 18. yyyy/MM/dd , for example 2016/12/31. 19. 
yyyy/MMM/dd HH:mm:ss , for example 2016/DEC/31 15:30:00. 20. yyyy/MMM/dd , for example 2016/DEC/31. 21. yyyy-MM-dd HH:mm:ss , for example 2016-12-31 15:30:00. 22. yyyy-MM-dd , for example 2016-12-31. 23. yyyy-MMM-dd HH:mm:ss , for example 2016-DEC-31 15:30:00. 24. yyyy-MMM-dd , for example 2016-DEC-31. 25. yyyyMMdd'T'HHmmss , for example 20161231T153000. 26. yyyy-MM-dd'T'HH:mm:ss 2016-12-31T15:30:00. , for example 27. yyyyMMdd'T'HHmmss.SSS 20161231T153000.123. , for example 28. MM/dd/yyyy HH:mm:ss.SSS , for example 12/31/2016 15:30:00.123. Supported data types from external data sources 77 Amazon QuickSight User Guide Data source Clocks Date formats 29. dd/MM/yyyy HH:mm:ss.SSS , for example 31/12/2016 15:30:00.123. 30. yyyy/MM/dd HH:mm:ss.SSS , for example 2016/12/31 15:30:00.123. 31. MMM/dd/yyyy HH:mm:ss.SSS example DEC/31/2016 15:30:00.123. , for 32. dd/MMM/yyyy HH:mm:ss.SSS example 31/DEC/2016 15:30:00.123. , for 33. yyyy/MMM/dd HH:mm:ss.SSS example 2016/DEC/31 15:30:00.123. , for 34. yyyy-MM-dd'T'HH:mm:ss.SSS example 2016-12-31T15:30:00.123. , for 35. MM-dd-yyyy HH:mm:ss.SSS , for example 12-31-2016 15:30:00.123. 36. dd-MM-yyyy HH:mm:ss.SSS , for example 31-12-2016 15:30:00.123. 37. yyyy-MM-dd HH:mm:ss.SSS , for example 2016-12-31 15:30:00.123. 38. MMM-dd-yyyy HH:mm:ss.SSS example DEC-31-2016 15:30:00.123. , for 39. dd-MMM-yyyy HH:mm:ss.SSS example 31-DEC-2016 15:30:00.123. , for 40. yyyy-MMM-dd HH:mm:ss.SSS example 2016-DEC-31 15:30:00.123. , for Supported data types from external data sources 78 Amazon QuickSight Unsupported values in data User Guide If a field contains values that don't conform with the data type that Amazon QuickSight assigns to the field, the rows containing those values are skipped. For example, take the following source data. Sales ID Sales Date |
Sales Amount
--------------------------------------
001 10/14/2015 12.43
002 5/3/2012 25.00
003 Unknown 18.17
004 3/8/2009 86.02

Amazon QuickSight interprets Sales Date as a date field and drops the row containing a nondate value, so only the following rows are imported.

Sales ID Sales Date Sales Amount
--------------------------------------
001 10/14/2015 12.43
002 5/3/2012 25.00
004 3/8/2009 86.02

In some cases, a database field might contain values that the JDBC driver can't interpret for the source database engine. In such cases, the uninterpretable values are replaced by null so that the rows can be imported. The only known occurrence of this issue is with MySQL date, datetime, and timestamp fields that have all-zero values, for example 0000-00-00 00:00:00. For example, take the following source data.

Sales ID Sales Date Sales Amount
---------------------------------------------------
001 2004-10-12 09:14:27 12.43
002 2012-04-07 12:59:03 25.00
003 0000-00-00 00:00:00 18.17
004 2015-09-30 01:41:19 86.02

In this case, the following data is imported.

Sales ID Sales Date Sales Amount
---------------------------------------------------
001 2004-10-12 09:14:27 12.43
002 2012-04-07 12:59:03 25.00
003 (null) 18.17
004 2015-09-30 01:41:19 86.02

Amazon QuickSight Connection examples

You can connect Amazon QuickSight to different types of data sources. This includes data residing in Software-as-a-Service (SaaS) applications, flat files stored in Amazon S3 buckets, data from third-party services like Salesforce, and query results from Athena. Use the following examples to learn more about the requirements for connecting to specific data sources.

Topics
• Creating a dataset using Amazon Athena data
• Using Amazon OpenSearch Service with Amazon QuickSight
• Creating a dataset using Amazon S3 files
• Creating a data source using Apache Spark
• Using Databricks in QuickSight
• Creating a dataset using Google BigQuery
• Creating a dataset using a Microsoft Excel file
• Creating a data source using Presto
• Using Snowflake with Amazon QuickSight
• Using Starburst with Amazon QuickSight
• Creating a data source and data set from SaaS sources
• Creating a dataset from Salesforce
• Using Trino with Amazon QuickSight
• Creating a dataset using a local text file
• Using Amazon Timestream data with Amazon QuickSight

Creating a dataset using Amazon Athena data

Use the following procedure to create a new dataset that connects to Amazon Athena data or to Athena Federated Query data.

To connect to Amazon Athena

1. Begin by creating a new dataset.
Choose Datasets from the navigation pane at left, then choose New dataset. 2. a. To use an existing Athena connection profile (common), scroll down to the FROM EXISTING DATA SOURCES section, and choose the card for the existing data source that you want to use. Choose Create dataset. Cards are labeled with the Athena data source icon and the name provided by the person who created the connection. b. To create a new Athena connection profile (less common), use the following steps: 1. In the FROM NEW DATA SOURCES section, choose the Athena data source card. 2. For Data source name, enter a descriptive name. 3. For Athena workgroup, choose your workgroup. 4. Choose Validate connection to test the connection. 5. Choose Create data source. 6. (Optional) Select an IAM role ARN for queries to run as. 3. On the Choose your table screen, do the following: a. For Catalog, choose one of the following: • If you are using Athena Federated Query, choose the catalog you want to use. • Otherwise, choose AwsDataCatalog. b. Choose one of the following: • To write a SQL query, choose Use custom SQL. • To choose a database and table, choose your catalog that contains your databases from the dropdown under Catalog. Then, choose a database from the dropdown under Database and choose a table from the Tables list that appears for your database. If you don't have the right permissions, you receive the following error message: "You don't have sufficient permissions to connect to this dataset or run this query." Contact your QuickSight administrator for assistance. For more information, see Authorizing connections to Amazon Athena. 4. Choose Edit/preview data. Amazon Athena 81 Amazon QuickSight User Guide 5. Create a dataset and analyze the data using the table by |
choosing Visualize. For more information, see Visualizing data in Amazon QuickSight.

Using Amazon OpenSearch Service with Amazon QuickSight

Following, you can find how to connect to your Amazon OpenSearch Service data using Amazon QuickSight.

Creating a new QuickSight data source connection for OpenSearch Service

Before you can proceed, Amazon QuickSight needs to be authorized to connect to Amazon OpenSearch Service. If connections aren't enabled, you get an error when you try to connect. A QuickSight administrator can authorize connections to AWS resources.

To authorize QuickSight to initiate a connection to OpenSearch Service

1. Open the menu by choosing your profile icon at top right, then choose Manage QuickSight. If you don't see the Manage QuickSight option on your profile menu, ask your QuickSight administrator for assistance.
2. Choose Security & permissions, Add or remove.
3. Enable the option for OpenSearch.
4. Choose Update.

After OpenSearch Service is accessible, you create a data source so people can use the specified domains.

To connect to OpenSearch Service

1. Begin by creating a new dataset. Choose Datasets from the navigation pane at left, then choose New Dataset.
2. Choose the Amazon OpenSearch data source card.
3. For Data source name, enter a descriptive name for your OpenSearch Service data source connection, for example OpenSearch Service ML Data. Because you can create many datasets from a connection to OpenSearch Service, it's best to keep the name simple.
4. For Connection type, choose the network you want to use. This can be a virtual private cloud (VPC) based on Amazon VPC or a public network. The list of VPCs contains the names of VPC connections, rather than VPC IDs. These names are defined by the QuickSight administrator.
5. For Domain, choose the OpenSearch Service domain that you want to connect to.
6. Choose Validate connection to check that you can successfully connect to OpenSearch Service.
7. Choose Create data source to proceed.
8. For Tables, choose the one you want to use, then choose Select to continue.
9. Do one of the following:
• To import your data into the QuickSight in-memory engine (called SPICE), choose Import to SPICE for quicker analytics. For information about how to enable importing OpenSearch data, see Authorizing connections to Amazon OpenSearch Service.
• To allow QuickSight to run a query against your data each time you refresh the dataset or use the analysis or dashboard, choose Directly query your data. To enable autorefresh on a published dashboard that uses OpenSearch Service data, the OpenSearch Service dataset needs to use a direct query.
10. Choose Edit/Preview and then Save to save your dataset and close it.
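If you prefer to script the connection instead of using the console, you can create a similar OpenSearch Service data source with the AWS CLI. The following is a sketch only: the account ID, data source ID, and domain name are placeholders, and depending on your CLI version the data source type and parameter key may be the newer AMAZONOPENSEARCH/AmazonOpenSearchParameters or the older AMAZONELASTICSEARCH/AmazonElasticsearchParameters values.

# Create an OpenSearch Service data source (placeholder IDs and domain)
aws quicksight create-data-source \
 --aws-account-id 111122223333 \
 --data-source-id my-opensearch-source \
 --name "OpenSearch Service ML Data" \
 --type AMAZONOPENSEARCH \
 --data-source-parameters '{"AmazonOpenSearchParameters":{"Domain":"my-domain"}}'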
Managing permissions for OpenSearch Service data The following procedure describes how to view, add, and revoke permissions to allow access to the same OpenSearch Service data source. The people that you add need to be active users in QuickSight before you can add them. To edit permissions on a data source 1. Choose Datasets at left, then scroll down to find the data source card for your Amazon OpenSearch Service connection. An example might be US Amazon OpenSearch Service Data. 2. Choose the Amazon OpenSearch dataset. 3. On the dataset details page that opens, choose the Permissionstab. A list of current permissions appears. 4. To add permissions, choose Add users & groups, then follow these steps: Amazon OpenSearch Service 83 Amazon QuickSight User Guide a. Add users or groups to allow them to use the same dataset. b. When you're finished adding everyone that you want to add, choose the Permissions that you want to apply to them. 5. (Optional) To edit permissions, you can choose Viewer or Owner. • Choose Viewer to allow read access. • Choose Owner to allow that user to edit, share, or delete this QuickSight dataset. 6. (Optional) To revoke permissions, choose Revoke access. After you revoke someone's access, they can't create new datasets from this data source. However, their existing datasets still have access to this data source. 7. When you are finished, choose Close. Adding a new QuickSight dataset for OpenSearch Service After you have an existing data |
To add OpenSearch Service data to an analysis

1. Choose Analyses at left.
2. Do one of the following:
• To create a new analysis, choose New analysis at right.
• To add to an existing analysis, open the analysis that you want to edit. Choose the pencil icon near the top left, and then choose Add data set.
3. Choose the OpenSearch Service dataset that you want to add. For information on using OpenSearch Service in visualizations, see Limitations for using OpenSearch Service.
4. For more information, see Working with analyses.

Limitations for using OpenSearch Service

The following limitations apply to using OpenSearch Service datasets:

• OpenSearch Service datasets support a subset of the visual types, sort options, and filter options.
• To enable autorefresh on a published dashboard that uses OpenSearch Service data, the OpenSearch Service dataset needs to use a direct query.
• Multiple subquery operations aren't supported. To avoid errors during visualization, don't add multiple fields to a field well, use one or two fields per visualization, and avoid using the Color field well.
• Custom SQL isn't supported.
• Cross-dataset joins and self-joins aren't supported.
• Calculated fields aren't supported.
• Text fields aren't supported.
• The "other" category isn't supported. If you use an OpenSearch Service dataset with a visualization that supports the "other" category, disable the "other" category by using the menu on the visual.

Creating a dataset using Amazon S3 files

To create a dataset using one or more text files (.csv, .tsv, .clf, or .elf) from Amazon S3, create a manifest for Amazon QuickSight. Amazon QuickSight uses this manifest to identify the files that you want to use and the upload settings needed to import them. When you create a dataset using Amazon S3, the file data is automatically imported into SPICE.
You must grant Amazon QuickSight access to any Amazon S3 buckets that you want to read files from. For information about granting Amazon QuickSight access to AWS resources, see Accessing data sources. Supported formats for Amazon S3 manifest files You use JSON manifest files to specify files in Amazon S3 to import into Amazon QuickSight. These JSON manifest files can use either the Amazon QuickSight format described following or the Amazon Redshift format described in Using a manifest to specify data files in the Amazon Redshift Database Developer Guide. You don't have to use Amazon Redshift to use the Amazon Redshift manifest file format. If you use an Amazon QuickSight manifest file, it must have a .json extension, for example my_manifest.json. If you use an Amazon Redshift manifest file, it can have any extension. If you use an Amazon Redshift manifest file, Amazon QuickSight processes the optional mandatory option as Amazon Redshift does. If the associated file isn't found, Amazon QuickSight ends the import process and returns an error. Files that you select for import must be delimited text (for example, .csv or .tsv), log (.clf), or extended log (.elf) format, or JSON (.json). All files identified in one manifest file must use the same file format. Plus, they must have the same number and type of columns. Amazon QuickSight supports UTF-8 file |
amazon-quicksight-user-025 | amazon-quicksight-user.pdf | 25 | use an Amazon Redshift manifest file, it can have any extension. If you use an Amazon Redshift manifest file, Amazon QuickSight processes the optional mandatory option as Amazon Redshift does. If the associated file isn't found, Amazon QuickSight ends the import process and returns an error. Files that you select for import must be delimited text (for example, .csv or .tsv), log (.clf), or extended log (.elf) format, or JSON (.json). All files identified in one manifest file must use the same file format. Plus, they must have the same number and type of columns. Amazon QuickSight supports UTF-8 file encoding, but not UTF-8 with byte-order mark (BOM). If you are importing JSON files, then for globalUploadSettings specify format, but not delimiter, textqualifier, or containsHeader. Make sure that any files that you specify are in Amazon S3 buckets that you have granted Amazon QuickSight access to. For information about granting Amazon QuickSight access to AWS resources, see Accessing data sources. Manifest file format for Amazon QuickSight Amazon QuickSight manifest files use the following JSON format. Amazon S3 files 86 Amazon QuickSight User Guide { "fileLocations": [ { "URIs": [ "uri1", "uri2", "uri3" ] }, { "URIPrefixes": [ "prefix1", "prefix2", "prefix3" ] } ], "globalUploadSettings": { "format": "JSON", "delimiter": ",", "textqualifier": "'", "containsHeader": "true" } } Use the fields in the fileLocations element to specify the files to import, and the fields in the globalUploadSettings element to specify import settings for those files, such as field delimiters. The manifest file elements are described following: • fileLocations – Use this element to specify the files to import. You can use either or both of the URIs and URIPrefixes arrays to do this. You must specify at least one value in one or the other of them. • URIs – Use this array to list URIs for specific files to import. Amazon QuickSight can access Amazon S3 files that are in any AWS Region. However, you must use a URI format that identifies the AWS Region of the Amazon S3 bucket if it's different from that used by your Amazon QuickSight account. Amazon S3 files 87 Amazon QuickSight User Guide URIs in the following formats are supported. URI format Example Comments https://s3.amazonaws.com/<bucket name>/<file name> https://s3.amazonaws.com/ amzn-s3-demo-bucket/ s3://<bucket name>/<file name> https://<bucket name>.s3.amaz onaws.com/<file name> https://s3-<region name>.amazona ws.com/<bucket name>/<file name> data.csv s3://amzn-s3-demo-bucket/ data.csv https://amzn-s3-d emo-bucket naws.com/data.csv .s3.amazo https://s3-us- east-1.amazo naws.com /amzn-s3-d This URI type identifies the AWS Region emo-bucket /data.csv for the Amazon https://<bucket name>.s3-<region name>.amazonaws.com/<file name> https://amzn-s3-d emo-bucket east-1 .amazonaws .s3-us- .com/data.csv S3 bucket. This URI type identifies the AWS Region for the A mazon S3 bucket. • URIPrefixes – Use this array to list URI prefixes for S3 buckets and folders. All files in a specified bucket or folder are imported. Amazon QuickSight recursively retrieves files from child folders. QuickSight can access Amazon S3 buckets or folders that are in any AWS Region. Make sure to use a URI prefix format that identifies the S3 bucket's AWS Region if it's different from that used by your QuickSight account. URI prefixes in the following formats are supported. 
Amazon S3 files 88 Amazon QuickSight User Guide URI prefix format Example Comments https://s3.amazonaws.com/<bucket name>/ https://s3.amazonaws.com/ amzn-s3-demo-bucket/ https://s3.amazonaws.com/<bucket name>/<folder name1>/(<folder https://s3.amazonaws.com/ amzn-s3-demo-bucket/ name2>/etc.) folder1/ s3://<bucket name> s3://amzn-s3-demo-bucket/ s3://<bucket name>/<folder name1>/ (<folder name2>/etc.) s3://amzn-s3-demo-bucket/ folder1/ https://<bucket name>.s3.amazonaws.com https://s3-<region name>.amazona ws.com/<bucket name>/ https://s3-<region name>.amazona ws.com/<bucket name>/<folder name1>/(<folder name2>/etc.) https://amzn-s3-demo- bucket .s3.amazonaws .com https://s3-your-regi This on-for-example- us-east-2 .amazonaws .com /amzn-s3-demo- bucket / URIPrefix type identifie s the AWS Region for the Amazon S3 bucket. https://s3-us- This east-1.amazo naws.com /amzn-s3-d /folder1/ emo-bucket URIPrefix type identifie s the AWS Region for the Amazon S3 bucket. Amazon S3 files 89 Amazon QuickSight User Guide URI prefix format Example Comments https://<bucket name>.s3-<region name>.amazonaws.com https://amzn-s3-demo- bucket .s3-us-eas t-1.amazonaws .com This URIPrefix type identifie s the AWS Region for the Amazon S3 bucket. • globalUploadSettings – (Optional) Use this element to specify import settings for the Amazon S3 files, such as field delimiters. If this element is not specified, Amazon QuickSight uses the default values for the fields in this section. Important For log (.clf) and extended log (.elf) files, only the format field in this section is applicable, so you can skip the other fields. If you choose to include them, their values are ignored. • format – (Optional) Specify the format of the files to be imported. Valid formats are CSV, TSV, CLF, ELF, and JSON. The default value is CSV. • delimiter – (Optional) Specify the file field delimiter. This must map to the file type specified in the format field. Valid formats are commas (,) for .csv files and tabs (\t) for .tsv files. The default value is comma (,). • textqualifier – (Optional) Specify the file text qualifier. Valid formats are single quote ('), double quotes (\"). The leading backslash is |
a required escape character for a double quote in JSON. The default value is double quotes (\"). If your text doesn't need a text qualifier, don't include this property.
• containsHeader – (Optional) Specify whether the file has a header row. Valid formats are true or false. The default value is true.

Manifest file examples for Amazon QuickSight

The following are some examples of completed Amazon QuickSight manifest files.

The following example shows a manifest file that identifies two specific .csv files for import. These files use double quotes for text qualifiers. The format, delimiter, and containsHeader fields are skipped because the default values are acceptable.

{
    "fileLocations": [
        {
            "URIs": [
                "https://yourBucket.s3.amazonaws.com/data-file.csv",
                "https://yourBucket.s3.amazonaws.com/data-file-2.csv"
            ]
        }
    ],
    "globalUploadSettings": {
        "textqualifier": "\""
    }
}

The following example shows a manifest file that identifies one specific .tsv file for import. This file also includes a bucket in another AWS Region that contains additional .tsv files for import. The textqualifier and containsHeader fields are skipped because the default values are acceptable.

{
    "fileLocations": [
        {
            "URIs": [
                "https://s3.amazonaws.com/amzn-s3-demo-bucket/data.tsv"
            ]
        },
        {
            "URIPrefixes": [
                "https://s3-us-east-1.amazonaws.com/amzn-s3-demo-bucket/"
            ]
        }
    ],
    "globalUploadSettings": {
        "format": "TSV",
        "delimiter": "\t"
    }
}

The following example identifies two buckets that contain .clf files for import. One is in the same AWS Region as the Amazon QuickSight account, and one in a different AWS Region. The delimiter, textqualifier, and containsHeader fields are skipped because they are not applicable to log files.

{
    "fileLocations": [
        {
            "URIPrefixes": [
                "https://amzn-s3-demo-bucket1.your-s3-url.com",
                "s3://amzn-s3-demo-bucket2/"
            ]
        }
    ],
    "globalUploadSettings": {
        "format": "CLF"
    }
}

The following example uses the Amazon Redshift format to identify a .csv file for import.

{
    "entries": [
        {
            "url": "https://amzn-s3-demo-bucket.your-s3-url.com/myalias-test/file-to-import.csv",
            "mandatory": true
        }
    ]
}

The following example uses the Amazon QuickSight format to identify two JSON files for import.

{
    "fileLocations": [
        {
            "URIs": [
                "https://yourBucket.s3.amazonaws.com/data-file.json",
                "https://yourBucket.s3.amazonaws.com/data-file-2.json"
            ]
        }
    ],
    "globalUploadSettings": {
        "format": "JSON"
    }
}

Creating Amazon S3 datasets

To create an Amazon S3 dataset

1. Check Data source quotas to make sure that your target file set doesn't exceed data source quotas.
2. Create a manifest file to identify the text files that you want to import, using one of the formats specified in Supported formats for Amazon S3 manifest files.
3.
Save the manifest file to a local directory, or upload it into Amazon S3. 4. On the Amazon QuickSight start page, choose Datasets. 5. On the Datasets page, choose New dataset. 6. 7. In the FROM NEW DATA SOURCES section of the Create a Data Set page, choose the Amazon S3 icon. For Data source name, enter a description of the data source. This name should be something that helps you distinguish this data source from others. 8. For Upload a manifest file, do one of the following: • To use a local manifest file, choose Upload, and then choose Upload a JSON manifest file. For Open, choose a file, and then choose Open. • To use a manifest file from Amazon S3, choose URL, and enter the URL for the manifest file. To find the URL of a pre-existing manifest file in the Amazon S3 console, navigate to the appropriate file and choose it. A properties panel displays, including the link URL. You can copy the URL and paste it into Amazon QuickSight. 9. Choose Connect. 10. To make sure that the connection is complete, choose Edit/Preview data. Otherwise, choose Visualize to create an analysis using the data as-is. If you choose Edit/Preview data, you can specify a dataset name as part of preparing the data. Otherwise, the dataset name matches the name of the manifest file. To learn more about data preparation, see Preparing data in Amazon QuickSight. Amazon S3 files 93 Amazon QuickSight User Guide Creating datasets based on multiple Amazon S3 files You can use one of several methods to merge or combine files from Amazon S3 buckets inside Amazon QuickSight: • Combine files by using a manifest – In this case, the files must have the same number of fields (columns). The data types must match between fields |
amazon-quicksight-user-027 | amazon-quicksight-user.pdf | 27 | Edit/Preview data, you can specify a dataset name as part of preparing the data. Otherwise, the dataset name matches the name of the manifest file. To learn more about data preparation, see Preparing data in Amazon QuickSight. Amazon S3 files 93 Amazon QuickSight User Guide Creating datasets based on multiple Amazon S3 files You can use one of several methods to merge or combine files from Amazon S3 buckets inside Amazon QuickSight: • Combine files by using a manifest – In this case, the files must have the same number of fields (columns). The data types must match between fields in the same position in the file. For example, the first field must have the same data type in each file. The same goes for the second field, and the third field, and so on. Amazon QuickSight takes field names from the first file. The files must be listed explicitly in the manifest. However, they don't have to be inside the same Amazon S3 bucket. In addition, the files must follow the rules described in Supported formats for Amazon S3 manifest files. For more details about combining files using a manifest, see Creating a dataset using Amazon S3 files. • Merge files without using a manifest – To merge multiple files into one without having to list them individually in the manifest, you can use Athena. With this method, you can simply query your text files, like they are in a table in a database. For more information, see the post Analyzing data in Amazon S3 using Athena in the Big Data blog. • Use a script to append files before importing – You can use a script designed to combine your files before uploading. Datasets using S3 files in another AWS account Use this section to learn how to set up security so you can use Amazon QuickSight to access Amazon S3 files in another AWS account. For you to access files in another account, the owner of the other account must first set Amazon S3 to grant you permissions to read the file. Then, in Amazon QuickSight, you must set up access to the buckets that were shared with you. After both of these steps are finished, you can use a manifest to create a dataset. Amazon S3 files 94 Amazon QuickSight Note User Guide To access files that are shared with the public, you don't need to set up any special security. However, you still need a manifest file. Topics • Setting up Amazon S3 to allow access from a different Amazon QuickSight account • Setting up Amazon QuickSight to access Amazon S3 files in another AWS account Setting up Amazon S3 to allow access from a different Amazon QuickSight account Use this section to learn how to set permissions in Amazon S3 files so they can be accessed by Amazon QuickSight in another AWS account. For information on accessing another account's Amazon S3 files from your Amazon QuickSight account, see Setting up Amazon QuickSight to access Amazon S3 files in another AWS account. For more information about S3 permissions, see Managing access permissions to your Amazon S3 resources and How do I set permissions on an object? You can use the following procedure to set this access from the S3 console. Or you can grant permissions by using the AWS CLI or by writing a script. If you have a lot of files to share, you can instead create an S3 bucket policy on the s3:GetObject action. To use a bucket policy, add it to the bucket permissions, not to the file permissions. For information on bucket policies, see Bucket policy examples in the Amazon S3 Developer Guide. 
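As an illustration only, a cross-account bucket policy along the following lines grants another AWS account read access to the objects in a bucket. The account ID and bucket name are placeholders, and your own policy might need different statements. The second statement adds s3:ListBucket, which corresponds to the list permission granted in the console steps that follow.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "QuickSightCrossAccountRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        },
        {
            "Sid": "QuickSightCrossAccountList",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
        }
    ]
}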
To set access from a different QuickSight account from the S3 console

1. Get the email address of the AWS account that you want to share with. Or you can get and use the canonical user ID. For more information on canonical user IDs, see AWS account identifiers in the AWS General Reference.
2. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
3. Find the Amazon S3 bucket that you want to share with Amazon QuickSight. Choose Permissions.
amazon-quicksight-user-028 | amazon-quicksight-user.pdf | 28 | S3 files 95 Amazon QuickSight User Guide 4. Choose Add Account, and then enter an email address, or paste in a canonical user ID, for the AWS account that you want to share with. This email address should be the primary one for the AWS account. 5. Choose Yes for both Read bucket permissions and List objects. Choose Save to confirm. 6. 7. Find the file that you want to share, and open the file's permission settings. Enter an email address or the canonical user ID for the AWS account that you want to share with. This email address should be the primary one for the AWS account. 8. Enable Read object permissions for each file that Amazon QuickSight needs access to. 9. Notify the Amazon QuickSight user that the files are now available for use. Setting up Amazon QuickSight to access Amazon S3 files in another AWS account Use this section to learn how to set up Amazon QuickSight so you can access Amazon S3 files in another AWS account. For information on allowing someone else to access your Amazon S3 files from their Amazon QuickSight account, see Setting up Amazon S3 to allow access from a different Amazon QuickSight account. Use the following procedure to access another account's Amazon S3 files from Amazon QuickSight. Before you can use this procedure, the users in the other AWS account must share the files in their Amazon S3 bucket with you. To access another account's Amazon S3 files from QuickSight 1. Verify that the user or users in the other AWS account gave your account read and write permission to the S3 bucket in question. 2. Choose your profile icon, and then choose Manage Amazon QuickSight. 3. Choose Security & permissions. 4. Under QuickSight access to AWS services, choose Manage. 5. Choose Select S3 buckets. 6. On the Select Amazon S3 buckets screen, choose the S3 buckets you can access across AWS tab. The default tab is named S3 buckets linked to Amazon QuickSight account. It shows all the buckets your Amazon QuickSight account has access to. Amazon S3 files 96 Amazon QuickSight 7. Do one of the following: User Guide • To add all the buckets that you have permission to use, choose Choose accessible buckets from other AWS accounts. • If you have one or more Amazon S3 buckets that you want to add, enter their names. Each must exactly match the unique name of the Amazon S3 bucket. If you don't have the appropriate permissions, you see the error message "We can't connect to this S3 bucket. Make sure that any S3 buckets you specify are associated with the AWS account used to create this Amazon QuickSight account." This error message appears if you don't have either account permissions or Amazon QuickSight permissions. Note To use Amazon Athena, Amazon QuickSight needs to access the Amazon S3 buckets that Athena uses. You can add them here one by one, or use the Choose accessible buckets from other AWS accounts option. 8. Choose Select buckets to confirm your selection. 9. Create a new dataset based on Amazon S3, and upload your manifest file. For more information Amazon S3 datasets, see Creating a dataset using Amazon S3 files. Creating a data source using Apache Spark You can connect directly to Apache Spark using Amazon QuickSight, or you can connect to Spark through Spark SQL. Using the results of queries, or direct links to tables or views, you create data sources in Amazon QuickSight. You can either directly query your data through Spark, or you can import the results of your query into SPICE. 
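If you prefer to register the connection programmatically, a Spark data source can also be created with the AWS CLI. The following is a minimal sketch only; it assumes the Spark server is already configured and secured as described in the rest of this section, and the account ID, identifiers, hostname, port, and credentials are placeholders.

aws quicksight create-data-source \
    --aws-account-id 111122223333 \
    --data-source-id my-spark-data-source \
    --name "My Spark cluster" \
    --type SPARK \
    --data-source-parameters '{"SparkParameters": {"Host": "spark.example.com", "Port": 10000}}' \
    --credentials '{"CredentialPair": {"Username": "LDAP_USERNAME", "Password": "LDAP_PASSWORD"}}' \
    --ssl-properties '{"DisableSsl": false}'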
Before you use Amazon QuickSight with Spark products, you must configure Spark for Amazon QuickSight. Amazon QuickSight requires your Spark server to be secured and authenticated using LDAP, which is available to Spark version 2.0 or later. If Spark is configured to allow unauthenticated access, Amazon QuickSight refuses the connection to the server. To use Amazon QuickSight as a Spark client, you must configure LDAP authentication to work with Spark. Apache Spark 97 Amazon QuickSight User Guide The Spark documentation contains information on how to set this up. To start, you need to configure it to enable front-end LDAP authentication over HTTPS. For general information on Spark, see the Apache spark website. For information specifically on Spark and security, see Spark security documentation. To make sure that you have configured your server for Amazon QuickSight access, follow the instructions in Network and database configuration requirements. Using Databricks in QuickSight Use this section to learn how to connect from QuickSight to Databricks. To connect to Databricks 1. Begin by creating a new dataset. Choose Datasets from the navigation pane at left, then choose New Dataset. 2. Choose the Databricks data source card. 3. For Data source name, enter a |
amazon-quicksight-user-029 | amazon-quicksight-user.pdf | 29 | LDAP authentication over HTTPS. For general information on Spark, see the Apache spark website. For information specifically on Spark and security, see Spark security documentation. To make sure that you have configured your server for Amazon QuickSight access, follow the instructions in Network and database configuration requirements. Using Databricks in QuickSight Use this section to learn how to connect from QuickSight to Databricks. To connect to Databricks 1. Begin by creating a new dataset. Choose Datasets from the navigation pane at left, then choose New Dataset. 2. Choose the Databricks data source card. 3. For Data source name, enter a descriptive name for your Databricks data source connection, for example Databricks CS. Because you can create many datasets from a connection to Databricks, it's best to keep the name simple. The following screenshot shows the connection screen for Databricks. Databricks 98 Amazon QuickSight User Guide 4. For Connection type, select the type of network you're using. • Public network – if your data is shared publicly. • VPC – if your data is inside a VPC. Databricks 99 Amazon QuickSight Note User Guide If you're using VPC, and you don't see it listed, check with your administrator. 5. 6. 7. 8. 9. For Database server, enter the Hostname of workspace specified in your Databricks connection details. For HTTP Path, enter the Partial URL for the spark instance specified in your Databricks connection details. For Port, enter the port specified in your Databricks connection details. For Username and Password, enter your connection credentials. To verify the connection is working, click Validate connection. 10. To finish and create the data source, click Create data source. Adding a new QuickSight dataset for Databricks After you have an existing data source connection for Databricks data, you can create Databricks datasets to use for analysis. To create a dataset using Databricks 1. Choose Datasets at left, then scroll down to find the data source card for your Databricks connection. If you have many data sources, you can use the search bar at the top of the page to find your data source with a partial match on the name. 2. Choose the Databricks data source card, and then choose Create data set. The following popup displays: Databricks 100 Amazon QuickSight User Guide 3. To specify the table you want to connect to, first select the Catalog and Schema you want to use. Then, for Tables, select the table that you want to use. If you prefer to use your own SQL statement, select Use custom SQL. 4. Choose Edit/Preview. 5. (Optional) To add more data, use the following steps: a. b. c. Choose Add data at top right. To connect to different data, choose Switch data source, and choose a different dataset. Follow the UI prompts to finish adding data. d. After adding new data to the same dataset, choose Configure this join (the two red dots). Set up a join for each additional table. If you want to add calculated fields, choose Add calculated field. To add a model from SageMaker AI, choose Augment with SageMaker. This option is only available in QuickSight Enterprise edition. 101 e. f. Databricks Amazon QuickSight User Guide g. Clear the check box for any fields that you want to omit. h. Update any data types that you want to change. 6. When you are done, choose Save to save and close the dataset. 
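If you want to script this connection instead of using the console, the same kind of Databricks data source can be created with the AWS CLI. The following sketch uses placeholder values for the account ID, workspace hostname, HTTP path, and credentials; adjust them to match your own Databricks connection details.

aws quicksight create-data-source \
    --aws-account-id 111122223333 \
    --data-source-id my-databricks-data-source \
    --name "Databricks CS" \
    --type DATABRICKS \
    --data-source-parameters '{"DatabricksParameters": {"Host": "dbc-1234abcd-5678.cloud.databricks.com", "Port": 443, "SqlEndpointPath": "/sql/1.0/warehouses/abc123"}}' \
    --credentials '{"CredentialPair": {"Username": "DATABRICKS_USERNAME", "Password": "DATABRICKS_PASSWORD"}}'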
Amazon QuickSight Administrator's guide to connecting Databricks

You can use Amazon QuickSight to connect to Databricks on AWS, whether you signed up through AWS Marketplace or through the Databricks website. Before you can connect to Databricks, you create or identify the existing resources that the connection requires. Use this section to help you gather the resources you need to connect from QuickSight to Databricks.

• To learn how to obtain your Databricks connection details, see Databricks ODBC and JDBC connections.
• To learn how to obtain your Databricks credentials—personal access token or user name and password—for authentication, see Authentication requirements in the Databricks documentation. To connect to a Databricks cluster, you need Can Attach To and Can Restart permissions. These permissions are managed in Databricks. For more information, see Permission Requirements in the Databricks documentation.
• If you are setting up a private connection for Databricks, you can learn how to configure a VPC for use with QuickSight in Connecting to a VPC with Amazon QuickSight in the QuickSight documentation. If the VPC connection isn't visible, verify with a system administrator that the network has open inbound endpoints for Amazon Route 53. If the hostname of a Databricks workspace uses a public IP address, there need to be DNS (TCP) and DNS (UDP) inbound and outbound rules that allow traffic on DNS port 53 for the Route 53 security group. An administrator needs to create a security group with two inbound rules: one for DNS (TCP) on port 53 to the VPC CIDR, and one for DNS (UDP) on port 53 to the VPC CIDR. A sketch of the corresponding AWS CLI commands follows this list. For Databricks-related details if you are using PrivateLink instead of a public connection, see Enable AWS PrivateLink in the Databricks documentation.
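The following AWS CLI sketch shows one way an administrator might create the security group described above. The VPC ID, security group ID, and CIDR range are placeholders, not values from your environment.

# Create a security group for DNS resolution traffic (placeholder VPC ID and name).
aws ec2 create-security-group \
    --group-name quicksight-databricks-dns \
    --description "Allow DNS traffic on port 53 for QuickSight-to-Databricks resolution" \
    --vpc-id vpc-0123456789abcdef0

# Allow inbound DNS over TCP from the VPC CIDR (replace the group ID and CIDR with your own).
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 53 \
    --cidr 10.0.0.0/16

# Allow inbound DNS over UDP from the VPC CIDR.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol udp \
    --port 53 \
    --cidr 10.0.0.0/16

# Outbound DNS traffic is covered by the security group's default egress rule unless you have removed it.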
Creating a dataset using Google BigQuery

Note
When QuickSight uses and transfers information that is received from Google APIs, it adheres to the Google API Services User Data Policy.

Google BigQuery is a fully managed serverless data warehouse that customers use to manage and analyze their data. Google BigQuery customers use SQL to query their data without any infrastructure management.

Creating a data source connection with Google BigQuery

Prerequisites

Before you start, make sure that you have the following. These are all required to create a data source connection with Google BigQuery:

• Project ID – The project ID that is associated with your Google account. To find this, navigate to the Google Cloud console and choose the name of the project that you want to connect to QuickSight. Copy the project ID that appears in the new window and record it for later use.
• Dataset Region – The Google region that the Google BigQuery project exists in. To find the dataset region, navigate to the Google BigQuery console and choose Explorer. Locate and expand the project that you want to connect to, then choose the dataset that you want to use. The dataset region appears in the pop-up that opens.
• Google account login credentials – The login credentials for your Google account. If you don't have this information, contact your Google account administrator.
• Google BigQuery Permissions – To connect your Google account with QuickSight, make sure that your Google account has the following permissions:
  • BigQuery Job User at the Project level.
  • BigQuery Data Viewer at the Dataset or Table level.
  • BigQuery Metadata Viewer at the Project level.

For information about how to retrieve the previous prerequisite information, see Unlock the power of unified business intelligence with Google Cloud BigQuery and Amazon QuickSight.

Use the following procedure to connect your QuickSight account with your Google BigQuery data source.

To create a new connection to a Google BigQuery data source from Amazon QuickSight

1. Open the QuickSight console.
2. From the left navigation pane, choose Datasets, and then choose New Dataset.
3. Choose the Google BigQuery tile.
4. Add the data source details that you recorded in the prerequisites section earlier:
• Data source name – A name for the data source.
• Project ID – A Google Platform project ID. This field is case sensitive.
• Dataset Region – The Google cloud platform dataset region of the project that you want to connect to. 5. Choose Sign in. 6. In the new window that opens, enter the login credentials for the Google account that you want to connect to. 7. Choose Continue to grant QuickSight access to Google BigQuery. 8. After you create the new data source connection, continue to Step 4 in the following procedure. Adding a new QuickSight dataset for Google BigQuery After you create a data source connection with Google BigQuery, you can create Google BigQuery datasets for analysis. Datasets that use Google BigQuery can only be stored in SPICE. To create a dataset using Google BigQuery 1. Open the QuickSight console. 2. From the start page, choose Datasets, and then choose New Dataset. Google BigQuery 104 Amazon QuickSight User Guide 3. On the Create a dataset page that opens, choose the Google BigQuery tile, and then choose Create dataset. 4. For Tables, do one of the following: • • Choose the table that you want to use. Choose Use custom SQL to use your own personal SQL statement. For more information about using custom SQL in QuickSight, see Using SQL to customize data. 5. Choose Edit/Preview. 6. (Optional) In the Data prep page that opens, you can add customizations to your data with calculated fields, filters, and joins. 7. When you are finished making changes, choose Save to save and close |
amazon-quicksight-user-031 | amazon-quicksight-user.pdf | 31 | On the Create a dataset page that opens, choose the Google BigQuery tile, and then choose Create dataset. 4. For Tables, do one of the following: • • Choose the table that you want to use. Choose Use custom SQL to use your own personal SQL statement. For more information about using custom SQL in QuickSight, see Using SQL to customize data. 5. Choose Edit/Preview. 6. (Optional) In the Data prep page that opens, you can add customizations to your data with calculated fields, filters, and joins. 7. When you are finished making changes, choose Save to save and close the dataset. Creating a dataset using a Microsoft Excel file To create a dataset using a Microsoft Excel file data source, upload an .xlsx file from a local or networked drive. The data is imported into SPICE. For more information about creating new Amazon S3 datasets using Amazon S3 data sources, see Creating a dataset using an existing Amazon S3 data source or Creating a dataset using Amazon S3 files. To create a dataset based on an excel file 1. Check Data source quotas to make sure that your target file doesn't exceed data source quotas. 2. On the Amazon QuickSight start page, choose Datasets. 3. On the Datasets page, choose New dataset. 4. In the FROM NEW DATA SOURCES section of the Create a Data Set page, choose Upload a file. 5. In the Open dialog box, choose a file, and then choose Open. A file must be 1 GB or less to be uploaded to Amazon QuickSight. 6. If the Excel file contains multiple sheets, choose the sheet to import. You can change this later by preparing the data. Microsoft Excel files 105 Amazon QuickSight 7. Note User Guide On the following screens, you have multiple chances to prepare the data. Each of these takes you to the Prepare Data screen. This screen is the same one where you can access after the data import is complete. It enables you to change the upload settings even after the upload is complete. Choose Select to confirm your settings. Or you can choose Edit/Preview data to prepare the data immediately. A preview of the data appears on the next screen. You can't make changes directly to the data preview. 8. If the data headings and content don't look correct, choose Edit settings and prepare data to correct the file upload settings. Otherwise, choose Next. 9. On the Data Source Details screen, you can choose Edit/Preview data. You can specify a dataset name in the Prepare Data screen. If you don't need to prepare the data, you can choose to create an analysis using the data as- is. Choose Visualize. Doing this names the dataset the same as the source file, and takes you to the Analysis screen. To learn more about data preparation and excel upload settings, see Preparing data in Amazon QuickSight. Note If at anytime you want to make changes to the file, such as adding a new field,you must make the change in Microsoft Excel and create a new dataset using the updated version in QuickSight. For more information about possible implications of changing datasets, see Things to consider when editing datasets . Creating a data source using Presto Presto (or PrestoDB) is an open-source, distributed SQL query engine, designed for fast analytic queries against data of any size. It supports both nonrelational and relational data sources. Presto 106 Amazon QuickSight User Guide Supported nonrelational data sources include the Hadoop Distributed File System (HDFS), Amazon S3, Cassandra, MongoDB, and HBase. 
Supported relational data sources include MySQL, PostgreSQL, Amazon Redshift, Microsoft SQL Server, and Teradata. For more information about Presto, see the following: • Introduction to presto, a description of Presto on the AWS website. • Creating a presto cluster with Amazon elastic MapReduce (EMR) in the Amazon EMR Release Guide. • For general information on Presto, see the Presto documentation. The results of the queries that you run through the Presto query engine can be turned into Amazon QuickSight datasets. Presto processes the analytic queries on the backend databases. Then it returns results to the Amazon QuickSight client. You can directly query your data through Presto, or you can import the results of your query into SPICE. Before you use Amazon QuickSight as a Presto client to run queries, make sure that you configure data source profiles. You need a data source profile in Amazon QuickSight for each Presto data source that you want to access. Use the following procedure to create a connection to Presto. To create a new connection to a presto data source from Amazon QuickSight (console) 1. On the Amazon QuickSight start page, choose Datasets at top right. Then choose New dataset. 2. Choose the Presto tile. Note In most browsers, you can |
amazon-quicksight-user-032 | amazon-quicksight-user.pdf | 32 | you can import the results of your query into SPICE. Before you use Amazon QuickSight as a Presto client to run queries, make sure that you configure data source profiles. You need a data source profile in Amazon QuickSight for each Presto data source that you want to access. Use the following procedure to create a connection to Presto. To create a new connection to a presto data source from Amazon QuickSight (console) 1. On the Amazon QuickSight start page, choose Datasets at top right. Then choose New dataset. 2. Choose the Presto tile. Note In most browsers, you can use Ctrl-F or Cmd-F to open a search box and enter presto to locate it. 3. Add the settings for the new data source: • Data source name – Enter a descriptive name for your data source connection. This name appears in the Existing data sources section at the bottom of the Data sets screen. • Connection type – Choose the connection type that you need to use to connect to Presto. To connect through the public network, choose Public network. Presto 107 Amazon QuickSight User Guide If you use a public network, your Presto server must be secured and authenticated using Lightweight Directory Access Protocol (LDAP). For information on configuring Presto to use LDAP, see LDAP authentication in the Presto documentation. To connect through a virtual private connection, choose the appropriate VPC name from the VPC connections list. If your Presto server allows unauthenticated access, AWS requires that you connect to it securely by using a private VPC connection. For information on configuring a new VPC, see Connecting to a VPC with Amazon QuickSight. • Database server – The name of the database server. • Port – The port that the server using to accept incoming connections from Amazon QuickSight • Catalog – The name of the catalog that you want to use. • Authentication required – (Optional) This option only appears if you choose a VPC connection type. If the Presto data source that you're connecting to doesn't require authentication, choose No. Otherwise, keep the default setting (Yes). • Username – Enter a user name to use to connect to Presto. Amazon QuickSight applies the same user name and password to all connections that use this data source profile. If you want to monitor Amazon QuickSight separately from other accounts, create a Presto account for each Amazon QuickSight data source profile. The Presto account that you use needs be able to access to the database and run SELECT statements on at least one table. • Password – The password to use with the Presto user name. Amazon QuickSight encrypts all credentials that you use in data source profile. For more information, see Data encryption in Amazon QuickSight. • Enable SSL – SSL is enabled by default. 4. Choose Validate connection to test your settings. 5. After you validate your settings, choose Create data source to complete the connection. Using Snowflake with Amazon QuickSight Snowflake is an AI data cloud platform that provides data solutions from data warehousing and collaboration to data science and generative AI. Snowflake is an AWS Partner with multiple AWS Snowflake 108 Amazon QuickSight User Guide accreditations that include AWS ISV Competencies in Generative AI, Machine Learning, Data and Analytics, and Retail. Amazon QuickSight offers two ways to connect to Snowflake: with your Snowflake login credentials or with OAuth client credentials. Use the following sections to learn about both methods of connection. 
Topics
• Creating an Amazon QuickSight data source connection to Snowflake with login credentials
• Creating an Amazon QuickSight data source connection to Snowflake with OAuth client credentials

Creating an Amazon QuickSight data source connection to Snowflake with login credentials

Use this section to learn how to create a connection between Amazon QuickSight and Snowflake with your Snowflake login credentials. All traffic between QuickSight and Snowflake is enabled by SSL.

To create a connection between Amazon QuickSight and Snowflake

1. Open the QuickSight console.
2. From the left navigation pane, choose Datasets, then choose New Dataset.
3. Choose the Snowflake data source card.
4. In the pop-up that appears, enter the following information:
   a. For Data source name, enter a descriptive name for your Snowflake data source connection. Because you can create many datasets from a connection to Snowflake, it's best to keep the name simple.
   b. For Connection type, choose the type of network that you're using. Choose Public network if your data is shared publicly. Choose VPC if your data is located inside a VPC. To configure a VPC connection in Amazon QuickSight, see Configuring the VPC connection in Amazon QuickSight.
   c. For Database server, enter the hostname specified in your Snowflake connection details.
5. For Database name and Warehouse, enter the Snowflake database and warehouse that you want to connect to.
6. For Username and Password, enter your Snowflake credentials.

After you have successfully created a data source connection between your QuickSight and Snowflake accounts, you can begin creating QuickSight datasets that contain Snowflake data.

Creating an Amazon QuickSight data source connection to Snowflake with OAuth client credentials

You can use OAuth client credentials to connect your QuickSight account with Snowflake through the QuickSight APIs. OAuth is a standard authorization protocol that is often used for applications that have advanced security requirements. When you connect to Snowflake with OAuth client credentials, you can create datasets that contain Snowflake data with the QuickSight APIs and in the QuickSight UI. For more information about configuring OAuth in Snowflake, see Snowflake OAuth overview.

QuickSight supports the client credentials OAuth grant type. The client credentials grant is used to obtain an access token for machine-to-machine communication. This method is suitable for scenarios where a client needs to access resources that are hosted on a server without the involvement of a user. In the client credentials flow of OAuth 2.0, there are several client authentication mechanisms that can be used to authenticate the client application with the authorization server. QuickSight supports client credentials based OAuth for Snowflake for the following two mechanisms:

• Token (Client secrets-based OAuth): The secret-based client authentication mechanism is used with the client credentials grant flow to authenticate with the authorization server. This authentication scheme requires the client_id and client_secret of the OAuth client app to be stored in Secrets Manager.
• X509 (Client private key JWT-based OAuth): The X509 certificate key-based solution provides an additional security layer to the OAuth mechanism, with client certificates that are used to authenticate instead of client secrets. This method is primarily used by private clients that authenticate with the authorization server with strong trust between the two services.

QuickSight has validated OAuth connections with the following identity providers:
• OKTA
• PingFederate

Storing OAuth credentials in Secrets Manager

OAuth client credentials are meant for machine-to-machine use cases and are not designed to be interactive. To create a data source connection between QuickSight and Snowflake, create a new secret in Secrets Manager that contains your credentials for the OAuth client app. The secret ARN that is created with the new secret can be used to create datasets that contain Snowflake data in QuickSight.
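As an example, a token-based secret might be stored with the AWS CLI as in the following sketch. The secret name and all of the values shown are placeholders; the required key/value pairs for each OAuth mechanism are listed after this example.

aws secretsmanager create-secret \
    --name snowflake-oauth-client-credentials \
    --description "OAuth client credentials for the QuickSight-to-Snowflake connection" \
    --secret-string '{"username": "SNOWFLAKE_USERNAME", "client_id": "OAUTH_CLIENT_ID", "client_secret": "OAUTH_CLIENT_SECRET"}'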
For more information about using Secrets Manager keys in QuickSight, see Using AWS Secrets Manager secrets instead of database credentials in Amazon QuickSight.

The credentials that you need to store in Secrets Manager are determined by the OAuth mechanism that you use. The following key/value pairs are required for X509-based OAuth secrets:

• username: The Snowflake account username to be used when connecting to Snowflake
• client_id: The OAuth client ID
• client_private_key: The OAuth client private key
• client_public_key: The OAuth client certificate public key and its encryption algorithm (for example, {"alg": "RS256", "kid": "cert_kid"})

The following key/value pairs are required for token-based OAuth secrets:

• username: The Snowflake account username to be used when connecting to Snowflake
• client_id: The OAuth client ID
• client_secret: The OAuth client secret

Creating a Snowflake OAuth connection with the QuickSight APIs

After you create a secret in Secrets Manager that contains your Snowflake OAuth credentials and have connected your QuickSight account to Secrets Manager, you can establish a data source connection between QuickSight and Snowflake with the QuickSight APIs and SDK. The following example creates a Snowflake data source connection using token OAuth client credentials.

{
    "AwsAccountId": "AWSACCOUNTID",
    "DataSourceId": "UNIQUEDATASOURCEID",
    "Name": "NAME",
    "Type": "SNOWFLAKE",
    "DataSourceParameters": {
        "SnowflakeParameters": {
            "Host": "HOSTNAME",
            "Database": "DATABASENAME",
            "Warehouse": "WAREHOUSENAME",
            "AuthenticationType": "TOKEN",
            "DatabaseAccessControlRole": "snowflake-db-access-role-name",
            "OAuthParameters": {
                "TokenProviderUrl": "oauth-access-token-endpoint",
                "OAuthScope": "oauth-scope",
                "IdentityProviderResourceUri": "resource-uri",
                "IdentityProviderVpcConnectionProperties": {
                    "VpcConnectionArn": "IdP-VPC-connection-ARN"
                }
            }
        }
    },
    "VpcConnectionProperties": {
        "VpcConnectionArn": "VPC-connection-ARN-for-Snowflake"
    },
    "Credentials": {
        "SecretArn": "oauth-client-secret-ARN"
    }
}

For more information about the CreateDataSource API operation, see CreateDataSource.

Once the connection between QuickSight and Snowflake is established and a data source is created with the QuickSight APIs or SDK, the new data source is displayed in QuickSight. QuickSight authors can use this data source to create datasets that contain Snowflake data. Tables are displayed based on the role used in the DatabaseAccessControlRole parameter that is passed in a CreateDataSource API call. If this parameter is not defined when the data source connection is created, the default Snowflake role is used.

After you have successfully created a data source connection between your QuickSight and Snowflake accounts, you can begin creating QuickSight datasets that contain Snowflake data.

Using Starburst with Amazon QuickSight

Starburst is a full-featured data lake analytics service built on top of a massively parallel processing (MPP) query engine, Trino. Use this section to learn how to connect from Amazon QuickSight to Starburst. All traffic between QuickSight and Starburst is enabled by SSL.

If you're connecting to Starburst Galaxy, you can get the necessary connection details by logging in to your Starburst Galaxy account, then choosing Partner Connect and then QuickSight. You should be able to see information such as hostname and port. Amazon QuickSight supports basic username and password authentication to Starburst.

Amazon QuickSight offers two ways to connect to Starburst: with your Starburst login credentials or with OAuth client credentials. Use the following sections to learn about both methods of connection.

Topics
• Creating an Amazon QuickSight data source connection to Starburst with login credentials
• Creating an Amazon QuickSight data source connection to Starburst with OAuth client credentials

Creating an Amazon QuickSight data source connection to Starburst with login credentials

1. Begin by creating a new dataset. From the left navigation pane, choose Datasets, then choose New Dataset.
2. Choose the Starburst data source card.
3. Select the Starburst product type. Choose Starburst Enterprise for on-prem Starburst instances. Choose Starburst Galaxy for managed instances. You should see the following data source creation modal.
4. For Data source name, enter a descriptive name for your Starburst data source connection. Because you can create many datasets from a connection to Starburst, it's best to keep the name simple.
5. For Connection type, select the type of network you're using. Choose Public network if your data is shared publicly. Choose VPC if your data is inside a VPC. To configure a VPC connection in Amazon QuickSight, see Configuring the VPC connection in Amazon QuickSight. This connection type is not available for Starburst Galaxy.
6. For Database server, enter the hostname specified in your Starburst connection details.
7. For Catalog, enter the catalog specified in your Starburst connection details.
8. For Port, enter the port specified in your Starburst connection details. Defaults to 443 for Starburst Galaxy.
9. For Username and Password, enter your Starburst connection credentials.
10. To verify the connection is working, choose Validate connection.
11. To finish and create the data source, choose Create data source.

Note
Connectivity between Amazon QuickSight and Starburst was validated using Starburst version 420.

After you have successfully created a data source connection between your QuickSight and Starburst accounts, you can begin creating QuickSight datasets that contain Starburst data.

Creating an Amazon QuickSight data source connection to Starburst with OAuth client credentials

You can use OAuth client credentials to connect your QuickSight account with Starburst through the QuickSight APIs. OAuth is a standard authorization protocol that is often used for applications that have advanced security requirements. When you connect to Starburst with OAuth client credentials, you can create datasets that contain Starburst data with the QuickSight APIs and in the QuickSight UI. For more information about configuring OAuth in Starburst, see OAuth 2.0 authentication.

QuickSight supports the client credentials OAuth grant type. The client credentials grant is used to obtain an access token for machine-to-machine communication. This method is suitable for scenarios where a client needs to access resources that are hosted on a server without the involvement of a user. In the client credentials flow of OAuth 2.0, there are several client authentication mechanisms that can be used to authenticate the client application with the authorization server. QuickSight supports client credentials based OAuth for Starburst for the following two mechanisms:
• Token (Client secrets-based OAuth): The secret-based client authentication mechanism is used with the client credentials grant flow to authenticate with the authorization server. This authentication scheme requires the client_id and client_secret of the OAuth client app to be stored in Secrets Manager.
• X509 (Client private key JWT-based OAuth): The X509 certificate key-based solution provides an additional security layer to the OAuth mechanism, with client certificates that are used to authenticate instead of client secrets. This method is primarily used by private clients that authenticate with the authorization server with strong trust between the two services.

QuickSight has validated OAuth connections with the following identity providers:
• OKTA
• PingFederate

Storing OAuth credentials in Secrets Manager

OAuth client credentials are meant for machine-to-machine use cases and are not designed to be interactive. To create a data source connection between QuickSight and Starburst, create a new secret in Secrets Manager that contains your credentials for the OAuth client app. The secret ARN that is created with the new secret can be used to create datasets that contain Starburst data in QuickSight.

For more information about using Secrets Manager keys in QuickSight, see Using AWS Secrets Manager secrets instead of database credentials in Amazon QuickSight.

The credentials that you need to store in Secrets Manager are determined by the OAuth mechanism that you use. The following key/value pairs are required for X509-based OAuth secrets:

• username: The Starburst account username to be used when connecting to Starburst
• client_id: The OAuth client ID
• client_private_key: The OAuth client private key
• client_public_key: The OAuth client certificate public key and its encryption algorithm (for example, {"alg": "RS256", "kid": "cert_kid"})

The following key/value pairs are required for token-based OAuth secrets:

• username: The Starburst account username to be used when connecting to Starburst
• client_id: The OAuth client ID
• client_secret: The OAuth client secret

Creating a Starburst OAuth connection with the QuickSight APIs

After you create a secret in Secrets Manager that contains your Starburst OAuth credentials and have connected your QuickSight account to Secrets Manager, you can establish a data source connection between QuickSight and Starburst with the QuickSight APIs and SDK. The following example creates a Starburst data source connection using token OAuth client credentials.
{ "AwsAccountId": "AWSACCOUNTID", "DataSourceId": "DATASOURCEID", "Name": "NAME", "Type": "STARBURST", "DataSourceParameters": { "StarburstParameters": { "Host": "STARBURST_HOST_NAME", "Port": "STARBURST_PORT", "Catalog": "STARBURST_CATALOG", "ProductType": "STARBURST_PRODUCT_TYPE", "AuthenticationType": "TOKEN", "DatabaseAccessControlRole": "starburst-db-access-role-name", "OAuthParameters": { "TokenProviderUrl": "oauth-access-token-endpoint", "OAuthScope": "oauth-scope", "IdentityProviderResourceUri" : "resource-uri", "IdentityProviderVpcConnectionProperties" : { "VpcConnectionArn": "IdP-VPC-connection-ARN" } } }, "VpcConnectionProperties": { "VpcConnectionArn": "VPC-connection-ARN-for-Starburst" }, "Credentials": { "SecretArn": "oauth-client-secret-ARN" } } For more information about the CreateDatasource API operation, see CreateDataSource. Starburst 117 Amazon QuickSight User Guide Once the connection between QuickSight and Starburst is established and a data source is created with the QuickSight APIs or SDK, the new data source is displayed in QuickSight. QuickSight authors can use this data source to create datasets that contain Starburst data. Tables are displayed based on the role used in the DatabaseAccessControlRole parameter that is passed in a CreateDataSource API call. If this parameter is not defined when the data source connection is created, the default Starburst role is used. After you have successfully created a data source connection between your QuickSight and Starburst accounts, you can begin creating QuickSight datasets that contain Starburst data. Creating a data source and data set from SaaS sources To analyze and report on data from software as a service (SaaS) applications, you can use SaaS connectors to access your data directly from Amazon QuickSight. The SaaS connectors simplify accessing third-party application sources using OAuth, without any need to export the data to an intermediate data store. You can use either a cloud-based or server-based instance of a SaaS application. To connect to an SaaS application that is running on your corporate network, make sure that Amazon QuickSight can access the application's Domain Name System (DNS) name over the network. If Amazon QuickSight can't access the SaaS application, it generates an unknown host error. Here are examples of some ways that you can use SaaS data: • Engineering teams who use Jira to track issues and bugs can report on developer efficiency and bug burndown. • Marketing organizations |
amazon-quicksight-user-036 | amazon-quicksight-user.pdf | 36 | to export the data to an intermediate data store. You can use either a cloud-based or server-based instance of a SaaS application. To connect to an SaaS application that is running on your corporate network, make sure that Amazon QuickSight can access the application's Domain Name System (DNS) name over the network. If Amazon QuickSight can't access the SaaS application, it generates an unknown host error. Here are examples of some ways that you can use SaaS data: • Engineering teams who use Jira to track issues and bugs can report on developer efficiency and bug burndown. • Marketing organizations can integrate Amazon QuickSight with Adobe Analytics to build consolidated dashboards to visualize their online and web marketing data. Use the following procedure to create a data source and dataset by connecting to sources available through Software as a Service (SaaS). In this procedure, we use a connection to GitHub as an example. Other SaaS data sources follow the same process, although the screens—especially the SaaS screens—might look different. To create a data source and dataset by connecting to sources through SaaS 1. On the Amazon QuickSight start page, choose Datasets. 2. On the Datasets page, choose New dataset. SaaS sources 118 Amazon QuickSight User Guide 3. In the FROM NEW DATA SOURCES section of the Create a Data Set page, choose the icon that represents the SaaS source that you want to use. For example, you might choose Adobe Analytics or GitHub. For sources using OAuth, the connector takes you to the SaaS site to authorize the connection before you can create the data source. 4. Choose a name for the data source, and enter that. If there are more screen prompts, enter the appropriate information. Then choose Create data source. 5. If you are prompted to do so, enter your credentials on the SaaS login page. 6. When prompted, authorize the connection between your SaaS data source and Amazon QuickSight. The following example shows the authorization for Amazon QuickSight to access the GitHub account for the Amazon QuickSight documentation. Note Amazon QuickSight documentation is now available on GitHub. If you want to make changes to this user guide, you can use GitHub to edit it directly. (Optional) If your SaaS account is part of an organizational account, you might be asked to request organization access as part of authorizing Amazon QuickSight. If you want to do this, follow the prompts on your SaaS screen, then choose to authorize Amazon QuickSight. 7. After authorization is complete, choose a table or object to connect to. Then choose Select. 8. On the Finish data set creation screen, choose one of these options: • To save the data source and dataset, choose Edit/Preview data. Then choose Save from the top menu bar. • To create a dataset and an analysis using the data as-is, choose Visualize. This option automatically saves the data source and the dataset. You can also choose Edit/Preview data to prepare the data before creating an analysis. This opens the data preparation screen. For more information about data preparation, see Preparing dataset examples. The following constraints apply: SaaS sources 119 Amazon QuickSight User Guide • The SaaS source must support REST API operations for Amazon QuickSight to connect to it. • If you are connecting to Jira, the URL must be public address. • If you don't have enough SPICE capacity, choose Edit/Preview data. 
In the data preparation screen, you can remove fields from the dataset to decrease its size or apply a filter that reduces the number of rows returned. For more information about data preparation, see Preparing dataset examples. Creating a dataset from Salesforce Use the following procedure to create a dataset by connecting to Salesforce and selecting a report or object to provide data. To create a dataset using Salesforce from a report or object 1. Check Data source quotas to make sure that your target report or object doesn't exceed data source quotas. 2. On the Amazon QuickSight start page, choose Datasets. 3. On the Datasets page, choose New dataset. 4. In the FROM NEW DATA SOURCES section of the Create a Data Set page, choose the Salesforce icon. 5. Enter a name for the data source and then choose Create data source. 6. On the Salesforce login page, enter your Salesforce credentials. 7. For Data elements: contain your data, choose Select and then choose either REPORT or OBJECT. Note Joined reports aren't supported as Amazon QuickSight data sources. 8. Choose one of the following options: • To prepare the data before creating an analysis, choose Edit/Preview data to open data preparation. For more information about data preparation, see Preparing dataset examples. • Otherwise, choose a report or object and then choose Select. 9. Choose one of the following options: Salesforce 120 Amazon QuickSight User |
• To create a dataset and an analysis using the data as-is, choose Visualize.

Note
If you don't have enough SPICE capacity, choose Edit/Preview data. In data preparation, you can remove fields from the dataset to decrease its size or apply a filter that reduces the number of rows returned. For more information about data preparation, see Preparing dataset examples.

• To prepare the data before creating an analysis, choose Edit/Preview data to open data preparation for the selected report or object. For more information about data preparation, see Preparing dataset examples.

Using Trino with Amazon QuickSight

Trino is a massively parallel processing (MPP) query engine built to quickly query data lakes with petabytes of data. Use this section to learn how to connect from Amazon QuickSight to Trino. All traffic between Amazon QuickSight and Trino is enabled by SSL. Amazon QuickSight supports basic username and password authentication to Trino.

Creating a data source connection for Trino

1. Begin by creating a new dataset. From the left navigation pane, choose Datasets, then choose New Dataset.
2. Choose the Trino data source card. You should see the following data source creation modal.
3. For Data source name, enter a descriptive name for your Trino data source connection. Because you can create many datasets from a connection to Trino, it's best to keep the name simple.
4. For Connection type, select the type of network you're using. Choose Public network if your data is shared publicly. Choose VPC if your data is inside a VPC. To configure a VPC connection in Amazon QuickSight, see Configuring the VPC connection in Amazon QuickSight.
5. For Database server, enter the hostname specified in your Trino connection details.
6. For Catalog, enter the catalog specified in your Trino connection details.
7. For Port, enter the port specified in your Trino connection details.
8. For Username and Password, enter your Trino connection credentials.
9. To verify the connection is working, choose Validate connection.
10. To finish and create the data source, choose Create data source.

Adding a new Amazon QuickSight dataset for Trino

After you go through the data source creation process for Trino, you can create Trino datasets to use for analysis. You can create new datasets from a new or an existing Trino data source. When you are creating a new data source, Amazon QuickSight immediately takes you to creating a dataset, which is step 3 below. If you're using an existing data source to create a new dataset, start from step 1 below.

To create a dataset using a Trino data source, see the following steps.

1. From the start page, choose Datasets and then choose New dataset in the top right.
2.
Scroll down to the section that says FROM EXISTING DATA SOURCES and choose the Trino data source you created. 3. Choose Create data set. 4. To specify the table you want to connect to, choose a schema. The screenshot below shows a chosen sample schema. If you don't want to choose a schema, you can also use your own SQL statement. Trino 123 Amazon QuickSight User Guide 5. To specify the table you want to connect to, first select the Schema you want to use. For Tables, choose the table that you want to use. If you prefer to use your own SQL statement, select Use custom SQL. 6. Choose Edit/Preview. 7. (Optional) To add more data, use the following steps: 8. Choose Add data in the top right. 9. To connect to different data, choose Switch data source, and choose a different dataset. 10. Follow the prompts to finish adding data. 11. After adding new data to the same dataset, choose Configure this join (the two red dots). Set up a join for each additional table. 12. If you want to add calculated fields, choose Add calculated field. 13. Clear the check box for any fields that you want to omit. 14. Update any data types that you want to change. Trino 124 Amazon QuickSight User Guide 15. When you are done, choose Save to save and close the dataset. Note Connectivity between QuickSight and Trino was |
amazon-quicksight-user-038 | amazon-quicksight-user.pdf | 38 | source, and choose a different dataset. 10. Follow the prompts to finish adding data. 11. After adding new data to the same dataset, choose Configure this join (the two red dots). Set up a join for each additional table. 12. If you want to add calculated fields, choose Add calculated field. 13. Clear the check box for any fields that you want to omit. 14. Update any data types that you want to change. Trino 124 Amazon QuickSight User Guide 15. When you are done, choose Save to save and close the dataset. Note Connectivity between QuickSight and Trino was validated using Trino version 410. Creating a dataset using a local text file To create a dataset using a local text file data source, identify the location of the file, and then upload it. The file data is automatically imported into SPICE as part of creating a dataset. To create a dataset based on a local text file 1. Check Data source quotas to make sure that your target file doesn't exceed data source quotas. Supported file types include .csv, .tsv, .json, .clf, or .elf files. 2. On the Amazon QuickSight start page, choose Datasets. 3. On the Datasets page, choose New dataset. 4. In the FROM NEW DATA SOURCES section of the Create a Data Set page, choose Upload a file. 5. In the Open dialog box, browse to a file, select it, and then choose Open. A file must be 1 GB or less to be uploaded to Amazon QuickSight. 6. To prepare the data before creating the dataset, choose Edit/Preview data. Otherwise, choose Visualize to create an analysis using the data as-is. If you choose the former, you can specify a dataset name as part of preparing the data. If you choose the latter, a dataset with the same name as the source file is created. To learn more about data preparation, see Preparing data in Amazon QuickSight. Using Amazon Timestream data with Amazon QuickSight Following, you can find how to connect to your Amazon Timestream data using Amazon QuickSight. For a brief overview, see the Getting started with Amazon Timestream and Amazon QuickSight video tutorial on YouTube. Text files 125 Amazon QuickSight User Guide Creating a new Amazon QuickSight data source connection for a Timestream database Following, you can find how to connect to Amazon Timestream from Amazon QuickSight. Before you can proceed, Amazon QuickSight needs to be authorized to connect to Amazon Timestream. If connections aren't enabled, you get an error when you try to connect. A QuickSight administrator can authorize connections to AWS resources. To authorize, open the menu by clicking on your profile icon at top right. Choose Manage QuickSight, Security & permissions, Add or remove. Then enable the check box for Amazon Timestream, then choose Update to confirm. For more information, see Accessing data sources. To connect to Amazon Timestream 1. Begin by creating a new dataset. Choose Datasets from the navigation pane at left, then choose New Dataset. 2. Choose the Timestream data source card. 3. For Data source name, enter a descriptive name for your Timestream data source connection, for example US Timestream Data. Because you can create many datasets from a connection to Timestream, it's best to keep the name simple. 4. Choose Validate connection to check that you can successfully connect to Timestream. 5. Choose Create data source to proceed. 6. For Database, choose Select to view the list of available options. 7. Choose the one you want to use, then choose Select to continue. 8. 
Do one of the following: • To import your data into QuickSight's in-memory engine (called SPICE), choose Import to SPICE for quicker analytics. • To allow QuickSight to run a query against your data each time you refresh the dataset or use the analysis or dashboard, choose Directly query your data. If you want to enable autorefresh on a published dashboard that uses Timestream data, the Timestream dataset needs to use a direct query. 9. Choose Edit/Preview and then Save to save your dataset and close it. 10. Repeat these steps for the number of concurrent direct connections to Timestream that you want to open in a dataset. For example, let's say you want to use four tables in a QuickSight Timestream data 126 Amazon QuickSight User Guide dataset. Currently, QuickSight datasets connect to only one table at a time from a Timestream data source. To use four tables in the same dataset, you need to add four data source connections in QuickSight. Managing permissions for Timestream data The following procedure describes how to view, add, and revoke permissions to allow access to the same Timestream data source. The people that you add need to be active users in QuickSight before you can add them. To edit permissions on a dataset 1. |
amazon-quicksight-user-039 | amazon-quicksight-user.pdf | 39 | Choose Datasets at left, then scroll down to find the dataset for your Timestream connection. An example might be US Timestream Data.
2. Choose the Timestream dataset to open it.
3. On the dataset details page that opens, choose the Permissions tab. A list of current permissions appears.
4. To add permissions, choose Add users & groups, then follow these steps:
a. Add users or groups to allow them to use the same dataset.
b. When you're finished adding everyone that you want to add, choose the Permissions that you want to apply to them.
5. (Optional) To edit permissions, you can choose Viewer or Owner.
• Choose Viewer to allow read access.
• Choose Owner to allow that user to edit, share, or delete this QuickSight data source.
6. (Optional) To revoke permissions, choose Revoke access. After you revoke someone's access, they can't create, edit, share, or delete the dataset.
7. When you are finished, choose Close.
Adding a new QuickSight dataset for Timestream
After you have an existing data source connection for Timestream data, you can create Timestream datasets to use for analysis.
Currently, you can use a Timestream connection only for a single table in a dataset. To add data from multiple Timestream tables in a single dataset, create an additional QuickSight data source connection for each table.
To create a dataset using Amazon Timestream
1. Choose Datasets at left, then scroll down to find the data source card for your Timestream connection. If you have many data sources, you can use the search bar at the top of the page to find your data source with a partial match on the name.
2. Choose the Timestream data source card, and then choose Create data set.
3. For Database, choose Select to view a list of available databases and choose the one that you want to use.
4. For Tables, choose the table that you want to use.
5. Choose Edit/Preview.
6. (Optional) To add more data, use the following steps:
a. Choose Add data at top right.
b. To connect to different data, choose Switch data source, and choose a different dataset.
c. Follow the UI prompts to finish adding data.
d. After adding new data to the same dataset, choose Configure this join (the two red dots). Set up a join for each additional table.
e. If you want to add calculated fields, choose Add calculated field.
f. To add a model from SageMaker AI, choose Augment with SageMaker. This option is only available in QuickSight Enterprise edition.
g. Clear the check box for any fields that you want to omit.
h. Update any data types that you want to change.
7. When you are done, choose Save to save and close the dataset.
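The console steps above can also be expressed as an API call when datasets are provisioned with scripts. The following AWS CLI sketch is illustrative only — the data source ARN, Timestream database (Schema), table name, and column list are placeholders, and it assumes that a RelationalTable physical table matches how your Timestream table is exposed. DIRECT_QUERY keeps the dataset querying Timestream directly, which the earlier section notes is required for dashboard autorefresh.

# Sketch: create a direct-query dataset on an existing Timestream data source.
# "PhysicalTable1" is an arbitrary map key; the columns listed are examples only.
aws quicksight create-data-set \
    --aws-account-id AWSACCOUNTID \
    --region REGION \
    --data-set-id timestream-table1-dataset \
    --name "US Timestream Data - table1" \
    --import-mode DIRECT_QUERY \
    --physical-table-map '{
        "PhysicalTable1": {
            "RelationalTable": {
                "DataSourceArn": "arn:aws:quicksight:REGION:AWSACCOUNTID:datasource/my-timestream-source",
                "Schema": "sampleDB",
                "Name": "table1",
                "InputColumns": [
                    {"Name": "time", "Type": "DATETIME"},
                    {"Name": "hostname", "Type": "STRING"},
                    {"Name": "cpu_utilization", "Type": "DECIMAL"}
                ]
            }
        }
    }'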
Adding Timestream data to an analysis
Following, you can find how to add an Amazon Timestream dataset to a QuickSight analysis. Before you begin, make sure that you have an existing dataset that contains the Timestream data that you want to use.
To add Amazon Timestream data to an analysis
1. Choose Analyses at left.
2. Do one of the following:
• To create a new analysis, choose New analysis at right.
• To add to an existing analysis, open the analysis that you want to edit.
• Choose the pencil icon near the top left.
• Choose Add data set.
3. Choose the Timestream dataset that you want to add.
For more information, see Working with analyses.
Creating datasets
You can create data sets from new or
amazon-quicksight-user-040 | amazon-quicksight-user.pdf | 40 | existing data sources in Amazon QuickSight. You can use a variety of database data sources to provide data to Amazon QuickSight. This includes Amazon RDS instances and Amazon Redshift clusters. It also includes MariaDB, Microsoft SQL Server, MySQL, Oracle, and PostgreSQL instances in your organization, Amazon EC2, or similar environments. Topics • Creating datasets using new data sources • Creating a dataset using an existing data source • Creating a dataset using an existing dataset in Amazon QuickSight Creating datasets using new data sources When you create a dataset based on an AWS service like Amazon RDS, Amazon Redshift, or Amazon EC2, data transfer charges might apply when consuming data from that source. Those charges might also vary depending on whether that AWS resource is in the home AWS Region that you chose for your Amazon QuickSight account. For details on pricing, see the pricing page for the service in question. When creating a new database dataset, you can select one table, join several tables, or create a SQL query to retrieve the data that you want. You can also change whether the dataset uses a direct query or instead stores data in SPICE. Creating datasets 129 Amazon QuickSight To create a new dataset User Guide 1. To create a dataset, choose New data set on the Datasets page. You can then create a dataset based on an existing dataset or data source, or connect to a new data source and base the dataset on that. 2. Provide connection information to the data source: • For local text or Microsoft Excel files, you can simply identify the file location and upload the file. • For Amazon S3, provide a manifest identifying the files or buckets that you want to use, and also the import settings for the target files. • For Amazon Athena, all Athena databases for your AWS account are returned. No additional credentials are required. • For Salesforce, provide credentials to connect with. • For Amazon Redshift, Amazon RDS, Amazon EC2, or other database data sources, provide information about the server and database that host the data. Also provide valid credentials for that database instance. Creating a dataset from a database The following procedures walk you through connecting to database data sources and creating datasets. To create datasets from AWS data sources that your Amazon QuickSight account autodiscovered, use Creating a dataset from an autodiscovered Amazon Redshift cluster or Amazon RDS instance. To create datasets from any other database data sources, use Creating a dataset using a database that's not autodiscovered. Creating a dataset from an autodiscovered Amazon Redshift cluster or Amazon RDS instance Use the following procedure to create a connection to an autodiscovered AWS data source. To create a connection to an autodiscovered AWS data source 1. Check Data source quotas to make sure that your target table or query doesn't exceed data source quotas. 2. Confirm that the database credentials you plan to use have appropriate permissions as described in Required permissions. From new data sources 130 Amazon QuickSight User Guide 3. Make sure that you have configured the cluster or instance for Amazon QuickSight access by following the instructions in Network and database configuration requirements. 4. On the Amazon QuickSight start page, choose Datasets. 5. On the Datasets page, choose New dataset. 6. 
In the FROM NEW DATA SOURCES section of the Create a Data Set page, choose either the RDS or the Redshift Auto-discovered icon, depending on the AWS service that you want to connect to. 7. Enter the connection information for the data source, as follows: • For Data source name, enter a name for the data source. • For Instance ID, choose the name of the instance or cluster that you want to connect to. • Database name shows the default database for the Instance ID cluster or instance. To use a different database on that cluster or instance, enter its name. • For UserName, enter the user name of a user account that has permissions to do the following: • Access the target database. • Read (perform a SELECT statement on) any tables in that database that you want to use. • For Password, enter the password for the account
amazon-quicksight-user-041 | amazon-quicksight-user.pdf | 41 | that you entered. 8. Choose Validate connection to verify your connection information is correct. 9. If the connection validates, choose Create data source. If not, correct the connection information and try validating again. Note Amazon QuickSight automatically secures connections to Amazon RDS instances and Amazon Redshift clusters by using Secure Sockets Layer (SSL). You don't need to do anything to enable this. 10. Choose one of the following: • Custom SQL On the next screen, you can choose to write a query with the Use custom SQL option. Doing this opens a screen named Enter custom SQL query, where you can enter a name for your query, and then enter the SQL. For best results, compose the query in a SQL editor, and then paste it into this window. After you name and enter the query, you can choose Edit/Preview From new data sources 131 Amazon QuickSight User Guide data or Confirm query. Choose Edit/Preview data to immediately go to data preparation. Choose Confirm query to validate the SQL and make sure that there are no errors. • Choose tables To connect to specific tables, for Schema: contain sets of tables, choose Select and then choose a schema. In some cases where there is only a single schema in the database, that schema is automatically chosen, and the schema selection option isn't displayed. To prepare the data before creating an analysis, choose Edit/Preview data to open data preparation. Use this option if you want to join to more tables. Otherwise, after choosing a table, choose Select. 11. Choose one of the following options: • Prepare the data before creating an analysis. To do this, choose Edit/Preview data to open data preparation for the selected table. For more information about data preparation, see Preparing dataset examples. • Create a dataset and analysis using the table data as-is and to import the dataset data into SPICE for improved performance (recommended). To do this, check the table size and the SPICE indicator to see if you have enough capacity. If you have enough SPICE capacity, choose Import to SPICE for quicker analytics, and then create an analysis by choosing Visualize. Note If you want to use SPICE and you don't have enough space, choose Edit/Preview data. In data preparation, you can remove fields from the dataset to decrease its size. You can also apply a filter or write a SQL query that reduces the number of rows or columns returned. For more information about data preparation, see Preparing dataset examples. • To create a dataset and an analysis using the table data as-is, and to have the data queried directly from the database, choose the Directly query your data option. Then create an analysis by choosing Visualize. From new data sources 132 Amazon QuickSight User Guide Creating a dataset using a database that's not autodiscovered Use the following procedure to create a connection to any database other than an autodiscovered Amazon Redshift cluster or Amazon RDS instance. Such databases include Amazon Redshift clusters and Amazon RDS instances that are in a different AWS Region or are associated with a different AWS account. They also include MariaDB, Microsoft SQL Server, MySQL, Oracle, and PostgreSQL instances that are on-premises, in Amazon EC2, or in some other accessible environment. To create a connection to a database that isn't an autodiscovered Amazon Redshift cluster or RDS instance 1. 
Check Data source quotas to make sure that your target table or query doesn't exceed data source quotas. 2. Confirm that the database credentials that you plan to use have appropriate permissions as described in Required permissions. 3. Make sure that you have configured the cluster or instance for Amazon QuickSight access by following the instructions in Network and database configuration requirements. 4. On the Amazon QuickSight start page, choose Manage data. 5. On the Datasets page, choose New data set. 6. In the FROM NEW DATA SOURCES section of the Create a Data Set page, choose the Redshift Manual connect icon if you want to connect to an Amazon Redshift cluster in another AWS Region or associated with a different AWS account. Or choose the appropriate database management system icon to connect to an instance of Amazon Aurora,
amazon-quicksight-user-042 | amazon-quicksight-user.pdf | 42 | MariaDB, Microsoft SQL Server, MySQL, Oracle, or PostgreSQL. 7. Enter the connection information for the data source, as follows: • For Data source name, enter a name for the data source. • For Database server, enter one of the following values: • For an Amazon Redshift cluster or Amazon RDS instance, enter the endpoint of the cluster or instance without the port number. For example, if the endpoint value is clustername.1234abcd.us-west-2.redshift.amazonaws.com:1234, then enter clustername.1234abcd.us-west-2.redshift.amazonaws.com. You can get the endpoint value from the Endpoint field on the cluster or instance detail page in the AWS console. From new data sources 133 Amazon QuickSight User Guide • For an Amazon EC2 instance of MariaDB, Microsoft SQL Server, MySQL, Oracle, or PostgreSQL, enter the public DNS address. You can get the public DNS value from the Public DNS field on the instance detail pane in the Amazon EC2 console. • For a non-Amazon EC2 instance of MariaDB, Microsoft SQL Server, MySQL, Oracle, or PostgreSQL, enter the hostname or public IP address of the database server. If you are using Secure Sockets Layer (SSL) for a secured connection (recommended), you likely need to provide the hostname to match the information required by the SSL certificate. For a list of accepted certificates see QuickSight SSL and CA certificates. • For Port, enter the port that the cluster or instance uses for connections. • For Database name, enter the name of the database that you want to use. • For UserName, enter the user name of a user account that has permissions to do the following: • Access the target database. • Read (perform a SELECT statement on) any tables in that database that you want to use. • For Password, enter the password associated with the account you entered. 8. (Optional) If you are connecting to anything other than an Amazon Redshift cluster and you don't want a secured connection, make sure that Enable SSL is clear. We strongly recommend leaving this checked, because an unsecured connection can be open to tampering. For more information on how the target instance uses SSL to secure connections, see the documentation for the target database management system. Amazon QuickSight doesn't accept self-signed SSL certificates as valid. For a list of accepted certificates, see QuickSight SSL and CA certificates. Amazon QuickSight automatically secures connections to Amazon Redshift clusters by using SSL. You don't need to do anything to enable this. Some databases, such as Presto and Apache Spark, must meet additional requirements before Amazon QuickSight can connect. For more information, see Creating a data source using Presto, or Creating a data source using Apache Spark. 9. (Optional) Choose Validate connection to verify your connection information is correct. 10. If the connection validates, choose Create data source. If not, correct the connection information and try validating again. 11. Choose one of the following: From new data sources 134 Amazon QuickSight • Custom SQL User Guide On the next screen, you can choose to write a query with the Use custom SQL option. Doing this opens a screen named Enter custom SQL query, where you can enter a name for your query, and then enter the SQL. For best results, compose the query in a SQL editor, and then paste it into this window. After you name and enter the query, you can choose Edit/Preview data or Confirm query. 
Choose Edit/Preview data to immediately go to data preparation. Choose Confirm query to validate the SQL and make sure that there are no errors. • Choose tables To connect to specific tables, for Schema: contain sets of tables, choose Select and then choose a schema. In some cases where there is only a single schema in the database, that schema is automatically chosen, and the schema selection option isn't displayed. To prepare the data before creating an analysis, choose Edit/Preview data to open data preparation. Use this option if you want to join to more tables. Otherwise, after choosing a table, choose Select. 12. Choose one of the following options: • Prepare the data before creating an analysis. To do this, choose Edit/Preview data to open data preparation for the selected table. For more information about data preparation, see Preparing dataset examples.
amazon-quicksight-user-043 | amazon-quicksight-user.pdf | 43 | • Create a dataset and an analysis using the table data as-is and import the dataset data into SPICE for improved performance (recommended). To do this, check the table size and the SPICE indicator to see if you have enough space. If you have enough SPICE capacity, choose Import to SPICE for quicker analytics, and then create an analysis by choosing Visualize. Note If you want to use SPICE and you don't have enough space, choose Edit/Preview data. In data preparation, you can remove fields from the dataset to decrease its size. You can also apply a filter or write a SQL query that reduces the number of rows or columns returned. For more information about data preparation, see Preparing dataset examples. From new data sources 135 Amazon QuickSight User Guide • Create a dataset and an analysis using the table data as-is and have the data queried directly from the database. To do this, choose the Directly query your data option. Then create an analysis by choosing Visualize. Creating a dataset using an existing data source After you make an initial connection to a Salesforce, AWS data store, or other database data source, Amazon QuickSight saves the connection information. It adds the data source to the FROM EXISTING DATA SOURCES section of the Create a Data Set page. You can use these existing data sources to create new datasets without respecifying connection information. Creating a dataset using an existing Amazon S3 data source Use the following procedure to create a dataset using an existing Amazon S3 data source. To create a dataset using an existing S3 data source 1. On the Amazon QuickSight start page, choose Datasets. 2. On the Datasets page, choose New dataset. 3. 4. In the FROM EXISTING DATA SOURCES section of the Create a Data Set page, choose the Amazon S3 data source to use. To prepare the data before creating the dataset, choose Edit/Preview data. To create an analysis using the data as-is, choose Visualize. Creating a dataset using an existing Amazon Athena data source To create a dataset using an existing Amazon Athena data source, use the following procedure. To create a dataset from an existing Athena connection profile 1. On the Amazon QuickSight start page, choose Manage data. 2. On the Datasets page, choose New data set. In the FROM EXISTING DATA SOURCES section of the Create a Data Set page, choose the connection profile icon for the existing data source that you want to use. Connection profiles are labeled with the data source icon and the name provided by the person who created the connection. From existing data sources 136 Amazon QuickSight 3. Choose Create data set. User Guide Amazon QuickSight creates a connection profile for this data source based only on the Athena workgroup. The database and table aren't saved. 4. On the Choose your table screen, do one of the following: • To write a SQL query, choose Use custom SQL. • To choose a database and table, first select your database from the Database list. Next, choose a table from the list that appears for your database. Create a dataset using an existing Salesforce data source Use the following procedure to create a dataset using an existing Salesforce data source. To create a dataset using an existing Salesforce data source 1. On the Amazon QuickSight start page, choose Manage data. 2. On the Datasets page, choose New data set. 3. In the FROM EXISTING DATA SOURCES section of the Create a Data Set page, choose the Salesforce data source to use. 4. 
Choose Create Data Set. 5. Choose one of the following: • Custom SQL On the next screen, you can choose to write a query with the Use custom SQL option. Doing this opens a screen named Enter custom SQL query, where you can enter a name for your query, and then enter the SQL. For best results, compose the query in a SQL editor, and then paste it into this window. After you name and enter the query, you can choose Edit/Preview data or Confirm query. Choose Edit/Preview data to immediately go to data preparation. Choose Confirm query to validate the SQL and make sure that
amazon-quicksight-user-044 | amazon-quicksight-user.pdf | 44 | there are no errors. • Choose tables To connect to specific tables, for Data elements: contain your data, choose Select and then choose either REPORT or OBJECT. To prepare the data before creating an analysis, choose Edit/Preview data to open data preparation. Use this option if you want to join to more tables. From existing data sources 137 Amazon QuickSight User Guide Otherwise, after choosing a table, choose Select. 6. On the next screen, choose one of the following options: • To create a dataset and an analysis using the data as-is, choose Visualize. Note If you don't have enough SPICE capacity, choose Edit/Preview data. In data preparation, you can remove fields from the dataset to decrease its size or apply a filter that reduces the number of rows returned. For more information about data preparation, see Preparing dataset examples. • To prepare the data before creating an analysis, choose Edit/Preview data to open data preparation for the selected report or object. For more information about data preparation, see Preparing dataset examples. Creating a dataset using an existing database data source Use the following procedure to create a dataset using an existing database data source. To create a dataset using an existing database data source 1. On the Amazon QuickSight start page, choose Manage data. 2. On the Datasets page, choose New data set. 3. In the FROM EXISTING DATA SOURCES section of the Create a Data Set page, choose the database data source to use, and then choose Create Data Set. 4. Choose one of the following: • Custom SQL On the next screen, you can choose to write a query with the Use custom SQL option. Doing this opens a screen named Enter custom SQL query, where you can enter a name for your query, and then enter the SQL. For best results, compose the query in a SQL editor, and then paste it into this window. After you name and enter the query, you can choose Edit/Preview data or Confirm query. Choose Edit/Preview data to immediately go to data preparation. Choose Confirm query to validate the SQL and make sure that there are no errors. • Choose tables From existing data sources 138 Amazon QuickSight User Guide To connect to specific tables, for Schema: contain sets of tables, choose Select and then choose a schema. In some cases where there is only a single schema in the database, that schema is automatically chosen, and the schema selection option isn't displayed. To prepare the data before creating an analysis, choose Edit/Preview data to open data preparation. Use this option if you want to join to more tables. Otherwise, after choosing a table, choose Select. 5. Choose one of the following options: • Prepare the data before creating an analysis. To do this, choose Edit/Preview data to open data preparation for the selected table. For more information about data preparation, see Preparing dataset examples. • Create a dataset and an analysis using the table data as-is and import the dataset data into SPICE for improved performance (recommended). To do this, check the SPICE indicator to see if you have enough space. If you have enough SPICE capacity, choose Import to SPICE for quicker analytics, and then create an analysis by choosing Visualize. Note If you want to use SPICE and you don't have enough space, choose Edit/Preview data. In data preparation, you can remove fields from the dataset to decrease its size. 
You can also apply a filter or write a SQL query that reduces the number of rows or columns returned. For more information about data preparation, see Preparing dataset examples. • Create a dataset and an analysis using the table data as-is and have the data queried directly from the database. To do this, choose the Directly query your data option. Then create an analysis by choosing Visualize. Creating a dataset using an existing dataset in Amazon QuickSight After you create a dataset in Amazon QuickSight, you can create additional datasets using it as a source. When you do this, any data preparation that the parent dataset contains, such as any joins or calculated fields, is kept. You can add additional preparation to the data in the new child From existing datasets 139 Amazon QuickSight User Guide datasets, such as joining new data and filtering data. You can also set up your own data refresh schedule for the child dataset and track the dashboards and analyses that use it. Child datasets that are created using a dataset with RLS rules active as a source inherit the parent dataset's RLS rules. Users who are creating a child dataset from a larger parent dataset can only see the data that they have access to in the parent dataset. Then, you can add more RLS rules to the new child |
amazon-quicksight-user-045 | amazon-quicksight-user.pdf | 45 | new child From existing datasets 139 Amazon QuickSight User Guide datasets, such as joining new data and filtering data. You can also set up your own data refresh schedule for the child dataset and track the dashboards and analyses that use it. Child datasets that are created using a dataset with RLS rules active as a source inherit the parent dataset's RLS rules. Users who are creating a child dataset from a larger parent dataset can only see the data that they have access to in the parent dataset. Then, you can add more RLS rules to the new child dataset in addition to the inherited RLS rules to further manage who can access the data that is in the new dataset. You can only create child datasets from datasets with RLS rules active in Direct Query. Creating datasets from existing QuickSight datasets has the following advantages: • Central management of datasets – Data engineers can easily scale to the needs of multiple teams within their organization. To do this, they can develop and maintain a few general- purpose datasets that describe the organization's main data models. • Reduction of data source management – Business analysts (BAs) often spend lots of time and effort requesting access to databases, managing database credentials, finding the right tables, and managing QuickSight data refresh schedules. Building new datasets from existing datasets means that BAs don't have to start from scratch with raw data from databases. They can start with curated data. • Predefined key metrics – By creating datasets from existing datasets, data engineers can centrally define and maintain critical data definitions across their company's many organizations. Examples might be sales growth and net marginal return. With this feature, data engineers can also distribute changes to those definitions. This approach means that their business analysts can get started with visualizing the right data more quickly and reliably. • Flexibility to customize data – By creating datasets from existing datasets, business analysts get more flexibility to customize datasets for their own business needs. They can avoid worry about disrupting data for other teams. For example, let's say that you're part of an ecommerce central team of five data engineers. You and your team has access to sales, orders, cancellations, and returns data in a database. You have created a QuickSight dataset by joining 18 other dimension tables through a schema. A key metric that your team has created is the calculated field, order product sales (OPS). Its definition is: OPS = product quantity x price. Your team serves over 100 business analysts across 10 different teams in eight countries. These are the Coupons team, the Outbound Marketing team, the Mobile Platform team, and the From existing datasets 140 Amazon QuickSight User Guide Recommendations team. All of these teams use the OPS metric as a base to analyze their own business line. Rather than manually creating and maintaining hundreds of unconnected datasets, your team reuses datasets to create multiple levels of datasets for teams across the organization. Doing this centralizes data management and allows each team to customize the data for their own needs. At the same time, this syncs updates to the data, such as updates to metric definitions, and maintains row-level and column-level security. For example, individual teams in your organization can use the centralized datasets. 
They can then combine them with the data specific to their team to create new datasets and build analyses on top of them. Along with using the key OPS metric, other teams in your organization can reuse column metadata from the centralized datasets that you created. For example, the Data Engineering team can define metadata, such as name, description, data type, and folders, in a centralized dataset. All subsequent teams can use it. Note Amazon QuickSight supports creating up to two additional levels of datasets from a single dataset. For example, from a parent dataset, you can create a child dataset and then a grandchild dataset for a total of three dataset levels. Creating a dataset from an existing dataset Use the following procedure to create a dataset from an existing dataset. To create a dataset from an existing dataset 1. From the QuickSight start page, choose Datasets
amazon-quicksight-user-046 | amazon-quicksight-user.pdf | 46 | in the pane at left. 2. On the Datasets page, choose the dataset that you want to use to create a new dataset. 3. On the page that opens for that dataset, choose the drop-down menu for Use in analysis, and then choose Use in dataset. From existing datasets 141 Amazon QuickSight User Guide The data preparation page opens and preloads everything from the parent dataset, including calculated fields, joins, and security settings. 4. On the data preparation page that opens, for Query mode at bottom left, choose how you want the dataset to pull in changes and updates from the original, parent dataset. You can choose the following options: • Direct query – This is the default query mode. If you choose this option, the data for this dataset automatically refreshes when you open an associated dataset, analysis, or dashboard. However, the following limitations apply: • If the parent dataset allows direct querying, you can use direct query mode in the child dataset. • If you have multiple parent datasets in a join, you can choose direct query mode for your child dataset only if all the parents are from the same underlying data source. For example, the same Amazon Redshift connection. • Direct query is supported for a single SPICE parent dataset. It is not supported for multiple SPICE parent datasets in a join. • SPICE – If you choose this option, you can set up a schedule for your new dataset to sync with the parent dataset. For more information about creating SPICE refresh schedules for datasets, see Refreshing SPICE data. 5. (Optional) Prepare your data for analysis. For more information about preparing data, see Preparing data in Amazon QuickSight. 6. (Optional) Set up row-level or column-level security (RLS/CLS) to restrict access to the dataset. For more information about setting up RLS, see Using row-level security with user-based rules to restrict access to a dataset. For more information about setting up CLS, see Using column- level security to restrict access to a dataset. From existing datasets 142 Amazon QuickSight Note User Guide You can set up RLS/CLS on child datasets only. RLS/CLS on parent datasets is not supported. 7. When you're finished, choose Save & publish to save your changes and publish the new child dataset. Or choose Publish & visualize to publish the new child dataset and begin visualizing your data. Restricting others from creating new datasets from your dataset When you create a dataset in Amazon QuickSight, you can prevent others from using it as a source for other datasets. You can specify if others can use it to create any datasets at all. Or you can specify the type of datasets others can or can't create from your dataset, such as direct query datasets or SPICE datasets. Use the following procedure to learn how to restrict others from creating new datasets from your dataset. To restrict others from creating new datasets from your dataset 1. From the QuickSight start page, choose Datasets in the pane at left. 2. On the Datasets page, choose the dataset that you want to restrict creating new datasets from. 3. On the page that opens for that dataset, choose Edit dataset. 4. On the data preparation page that opens, choose Manage at upper right, and then choose Properties. 5. In the Dataset properties pane that opens, choose from the following options: • To restrict anyone from creating any type of new datasets from this dataset, turn off Allow new datasets to be created from this one. 
The toggle is blue when creating new datasets is allowed. It's gray when creating new datasets isn't allowed. • To restrict others from creating direct query datasets, clear Allow direct query. • To restrict others from creating SPICE copies of your dataset, clear Allow SPICE copies. For more information about SPICE datasets, see Importing data into SPICE. 6. Close the pane. Editing datasets You can edit an existing dataset to perform data preparation. For more information about Amazon QuickSight data preparation functionality, see Preparing data in Amazon QuickSight. You can open a dataset for editing from the Datasets page, or from the analysis page. Editing
amazon-quicksight-user-047 | amazon-quicksight-user.pdf | 47 | a dataset from either location modifies the dataset for all analyses that use it. Things to consider when editing datasets In two situations, changes to a dataset might cause concern. One is if you deliberately edit the dataset. The other is if your data source has changed so much that it affects the analyses based on it. Important Analyses that are in production usage should be protected so they continue to function correctly. We recommend the following when you're dealing with data changes: • Document your data sources and datasets, and the visuals that rely upon them. Documentation should include screenshots, fields used, placement in field wells, filters, sorts, calculations, colors, formatting, and so on. Record everything that you need to recreate the visual. You can also track which QuickSight resources use a dataset in the dataset management options. For more information, see Tracking dashboards and analyses that use a dataset. • When you edit a dataset, try not to make changes that might break existing visuals. For example, don't remove columns that are being used in a visual. If you must remove a column, create a Editing datasets 144 Amazon QuickSight User Guide calculated column in its place. The replacement column should have the same name and data type as the original. • If your data source or dataset changes in your source database, adapt your visual to accommodate the change, as described previously. Or you can try to adapt the source database. For example, you might create a view of the source table (document). Then if the table changes, you can adjust the view to include or exclude columns (attributes), change data types, fill null values, and so on. Or, in another circumstance, if your dataset is based on a slow SQL query, you might create a table to hold the results of the query. If you can't sufficiently adapt the source of the data, recreate the visuals based on your documentation of the analysis. • If you no longer have access to a data source, your analyses based on that source are empty. The visuals that you created still exist, but they can't display until they have some data to show. This result can happen if permissions are changed by your administrator. • If you remove the dataset a visual is based on, you might need to recreate it from your documentation. You can edit the visual and select a new dataset to use with it. If you need to consistently use a new file to replace an older one, store your data in a location that is consistently available. For example, you might store your .csv file in Amazon S3 and create an S3 dataset to use for your visuals. For more information on access files stored in S3, see Creating a dataset using Amazon S3 files. Or you can import the data into a table, and base your visual on a query. This way, the data structures don't change, even if the data contained in them changes. • To centralize data management, consider creating general, multiple-purpose datasets that others can use to create their own datasets from. For more information, see Creating a dataset using an existing dataset in Amazon QuickSight. Editing a dataset from the Datasets page 1. From the QuickSight start page, choose Datasets at left. 2. On the Datasets page that opens, choose the dataset that you want to edit, and then choose Edit dataset at upper right. Editing a dataset from the Datasets page 145 Amazon QuickSight User Guide The data preparation page opens. 
For more information about the types of edits you can make to datasets, see Preparing data in Amazon QuickSight.
Editing a dataset in an analysis
Use the following procedure to edit a dataset from the analysis page.
To edit a dataset from the analysis page
1. In your analysis, choose the pencil icon at the top of the Fields list pane.
2. In Data sets in this analysis page that opens, choose the three dots at right of the dataset that you want to edit, and then choose Edit.
The dataset opens in the data preparation page. For more information about the types of edits you can make
amazon-quicksight-user-048 | amazon-quicksight-user.pdf | 48 | to datasets, see Preparing data in Amazon QuickSight. Reverting datasets back to previous published versions When you save and publish changes to a dataset in Amazon QuickSight, a new version of the dataset is created. At any time, you can see a list of all the previous published versions of that dataset. You can also preview a specific version in that history, or even revert the dataset back to a previous version, if needed. The following limitations apply to dataset versioning: • Only the most recent 1,000 versions of a dataset are shown in the publishing history, and are available for versioning. • After you exceed 1,000 published versions, the oldest versions are automatically removed from the publishing history, and the dataset can no longer be reverted back to them. Use the following procedure to revert a dataset to a previous published version. Editing a dataset in an analysis 146 Amazon QuickSight User Guide To revert a dataset to a previous published version 1. From the QuickSight start page, choose Datasets. 2. On the Datasets page, choose a dataset, and then choose Edit dataset at upper right. For more information about editing datasets, see Editing datasets. 3. On the dataset preparation page that opens, choose the Manage icon in the blue toolbar at upper right, and then choose Publishing history. A list of previous published versions appears at right. 4. In the Publishing history pane, find the version that you want and choose Revert. To preview the version before reverting, choose Preview. The dataset is reverted and a confirmation message appears. The Publishing history pane also updates to show the active version of the dataset. Troubleshooting reverting versions Sometimes, the dataset can't be reverted to a specific version for one the following reasons: • The dataset uses one or more data sources that were deleted. If this error occurs, you can't revert the dataset to a previous version. • Reverting would make a calculated field not valid. Troubleshooting 147 Amazon QuickSight User Guide If this error occurs, you can edit or remove the calculated field, and then save the dataset. Doing this creates a new version of the dataset. • One or more columns are missing in the data source. If this error occurs, QuickSight shows the latest schema from the data source in the preview to reconcile differences between versions. Any calculated field, field name, field type, and filter changes shown in the schema preview are from the version that you want to revert to. You can save this reconciled schema as a new version of the dataset. Or you can return to the active (latest) version by choosing Preview on the top (latest) version in the publishing history. Duplicating datasets You can duplicate an existing dataset to save a copy of it with a new name. The new dataset is a completely separate copy. The Duplicate dataset option is available if both of the following are true: you own the dataset and you have permission to the data source. To duplicate a dataset 1. From the QuickSight start page, choose Datasets at left. 2. On the Datasets page, choose the dataset that you want to duplicate. 3. On the dataset details page that opens, choose the drop-down for Edit datasource, and then choose Duplicate. 4. On the Duplicate dataset page that opens, give the duplicated dataset a name, and then choose Duplicate. The duplicated dataset details page opens. From this page, you can edit the dataset, set up a refresh schedule, and more. 
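The dataset detail page is also where SPICE refreshes are scheduled. If you prefer to trigger a refresh from a script — for example, right after duplicating a dataset — the ingestion APIs do the same thing. The following AWS CLI sketch is illustrative; the dataset ID and ingestion ID are placeholder values that you choose.

# Start an on-demand SPICE refresh for a dataset.
aws quicksight create-ingestion \
    --aws-account-id AWSACCOUNTID \
    --region REGION \
    --data-set-id my-duplicated-dataset \
    --ingestion-id manual-refresh-001

# Check the status of that refresh (for example RUNNING, COMPLETED, or FAILED).
aws quicksight describe-ingestion \
    --aws-account-id AWSACCOUNTID \
    --region REGION \
    --data-set-id my-duplicated-dataset \
    --ingestion-id manual-refresh-001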
Duplicating datasets 148 Amazon QuickSight Sharing datasets User Guide You can give other Amazon QuickSight users and groups access to a dataset by sharing it with them. Then they can create analyses from it. If you make them co-owners, they can also refresh, edit, delete, or reshare the dataset. Sharing a dataset If you have owner permissions on a dataset, use the following procedure to share it. To share a dataset 1. From the QuickSight start page, choose Datasets at left. 2. On the Datasets page, choose the dataset that you want to share. 3. On the dataset details page that opens, choose the Permissions tab, and then choose Add users & groups. 4. Enter the user or group that you want to share this dataset with, and then choose Add. You can only invite users who belong to the same QuickSight account. Repeat this step until you have entered information for everyone you want to share the dataset with. 5. For the Permissions column, choose a role for each user or group to give them permissions on the dataset. Choose Viewer to allow the user to create analyses and datasets from the dataset. Choose Owner to allow the user to do that and also refresh, edit, delete, and reshare the dataset. Users receive emails |
amazon-quicksight-user-049 | amazon-quicksight-user.pdf | 49 | the user or group that you want to share this dataset with, and then choose Add. You can only invite users who belong to the same QuickSight account. Repeat this step until you have entered information for everyone you want to share the dataset with. 5. For the Permissions column, choose a role for each user or group to give them permissions on the dataset. Choose Viewer to allow the user to create analyses and datasets from the dataset. Choose Owner to allow the user to do that and also refresh, edit, delete, and reshare the dataset. Users receive emails with a link to the dataset. Groups don't receive invitation emails. Sharing datasets 149 Amazon QuickSight User Guide Viewing and editing the permissions of users that a dataset is shared with If you have owner permissions on a dataset, you can use the following procedure to view, edit, or change user access to it. To view, edit, or change user access to a dataset if you have owner permissions for it 1. From the QuickSight start page, choose Datasets at left. 2. On the Datasets page, choose the dataset that you want to share. 3. On the dataset details page that opens, choose the Permissions tab. A list of all users and groups with access to the dataset is displayed. 4. (Optional) To change permission roles for a user or group, choose the drop-down menu in the Permissions column for the user or group. Then choose either Viewer or Owner. Revoking access to a dataset If you have owner permissions on a dataset, you can use the following procedure to revoke user access to a dataset. To revoke user access to a dataset if you have owner permissions for it 1. From the QuickSight start page, choose Datasets at left. 2. On the Datasets page, choose the dataset that you want to share. 3. On the dataset details page that opens, choose the Permissions tab. A list of all users and groups with access to the dataset is displayed. 4. In the Actions column for the user or group, choose Revoke access. Tracking dashboards and analyses that use a dataset When you create a dataset in Amazon QuickSight, you can track which dashboards and analyses use that dataset. This approach is useful when you want to see which resources will be affected when you make changes to a dataset, or want to delete a dataset. Use the following procedure to see which dashboards and analyses use a dataset. Viewing and editing the permissions of users that a dataset is shared with 150 Amazon QuickSight User Guide To track resources that use a dataset 1. From the QuickSight start page, choose Datasets in the pane at left. 2. On the Datasets page, choose the dataset that you want to track resources for. 3. 4. In the page that opens for that dataset, choose Edit dataset. In the data preparation page that opens, choose Manage at upper right, and then choose Usage. 5. The dashboards and analyses that use the dataset are listed in the pane that opens. Using dataset parameters in Amazon QuickSight In Amazon QuickSight, authors can use dataset parameters in direct query to dynamically customize their datasets and apply reusable logic to their datasets. A dataset parameter is a parameter created at the dataset level. It's consumed by an analysis parameter through controls, calculated fields, filters, actions, URLs, titles, and descriptions. For more information on analysis parameters, see Parameters in Amazon QuickSight. 
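Dataset parameters are defined on the dataset itself, either in the dataset editor (as described in the procedures that follow) or through the API when datasets are deployed programmatically. The following AWS CLI fragment is only a sketch of what that definition looks like: it assumes the --dataset-parameters argument that newer CLI versions expose for create-data-set and update-data-set, and the shape shown (a StringDatasetParameter with a single static default) should be checked against the current API reference. The other arguments that update-data-set requires — name, physical table map, and import mode — are omitted here for brevity.

# Sketch only: define a single-valued string parameter named RegionName
# with a default value of "EU" on a direct-query dataset.
# The required --name, --physical-table-map, and --import-mode arguments are not shown.
aws quicksight update-data-set \
    --aws-account-id AWSACCOUNTID \
    --region REGION \
    --data-set-id my-direct-query-dataset \
    --dataset-parameters '[
        {
            "StringDatasetParameter": {
                "Id": "regionparam1",
                "Name": "RegionName",
                "ValueType": "SINGLE_VALUED",
                "DefaultValues": {"StaticValues": ["EU"]}
            }
        }
    ]'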
The following list describes three actions that can be performed with dataset parameters:

• Custom SQL in direct query – Dataset owners can insert dataset parameters into the custom SQL of a direct query dataset. When these parameters are applied to a filter control in a QuickSight analysis, users can filter their custom data faster and more efficiently.

• Repeatable variables – Static values that appear in multiple locations in the dataset page can be modified in one action using custom dataset parameters.

• Move calculated fields to datasets – QuickSight authors can copy calculated fields with parameters in an analysis and migrate them to the dataset level. This protects calculated fields at the analysis level from accidental modification and lets the calculated fields be shared across multiple analyses.

In some situations, dataset parameters improve filter control performance for direct query datasets that require complex custom SQL, and they simplify business logic at the dataset level.

Topics

• Dataset parameter limitations
• Creating dataset parameters in Amazon QuickSight
• Inserting dataset parameters into custom SQL
• Adding dataset parameters to calculated fields
• Adding dataset parameters to filters
• Using dataset parameters in QuickSight analyses
• Advanced use cases of dataset parameters

Dataset parameter limitations

This section covers known limitations that you might encounter when working with dataset parameters in Amazon QuickSight.

• When dashboard readers schedule emailed reports, selected controls don't propagate to the dataset parameters that are included in the report that's attached to the email. Instead, the default values of the parameters are used.
• Dataset parameters can't be inserted into the custom SQL of datasets stored in SPICE.
• Dynamic defaults can only be configured on the analysis page of the analysis that is using the dataset. You can't configure a dynamic default at the dataset level.
• The Select all option is not supported on multivalue controls of analysis parameters that are mapped to dataset parameters.
• Cascading controls are not supported for dataset parameters.
• Dataset parameters can only be used by dataset filters when the dataset is using direct query.
• In a custom SQL query, only 128 dataset parameters can be used.

Creating dataset parameters in Amazon QuickSight

Use the following procedure to get started with dataset parameters.

To create a new dataset parameter

1. From the QuickSight start page, choose Datasets on the left, choose the ellipsis (three dots) next to the dataset that you want to change, and then choose Edit.
2. On the Dataset page that opens, choose Parameters on the left, and then choose the (+) icon to create a new dataset parameter.
3. In the Create new parameter pop-up that appears, enter a parameter name in the Name box.
4. In the Data type dropdown, choose the parameter data type that you want. Supported data types are String, Integer, Number, and Datetime. This option can't be changed after the parameter is created.
5. For Default value, enter the default value that you want the parameter to have.

Note
When you map a dataset parameter to an analysis parameter, a different default value can be chosen. When this happens, the default value configured here is overridden by the new default value.

6. For Values, choose the value type that you want the parameter to have. Single value parameters support single-select dropdown, text field, and list controls. Multiple values parameters support multi-select dropdown controls. This option can't be changed after the parameter is created.
7. When you are finished configuring the new parameter, choose Create to create the parameter.

Inserting dataset parameters into custom SQL

You can insert dataset parameters into the custom SQL of a dataset in direct query mode by referencing them as <<$parameter_name>> in the SQL statement. At runtime, dashboard users can enter filter control values that are associated with a dataset parameter.
Then, they can see the results in the dashboard visuals after the values propagate to the SQL query. You can use parameters to create basic filters based on customer input in where clauses. Alternatively, you Inserting dataset parameters into custom SQL 153 Amazon QuickSight User Guide could add case when or if else clauses to dynamically change the logic of the SQL query based on a parameter's input. For example, say you want to add a WHERE clause to your custom SQL that filters data based on an end user's Region name. In this case, you create a single value parameter called RegionName: SELECT * FROM transactions WHERE region = <<$RegionName>> You can also let users provide multiple values to the parameter: SELECT * FROM transactions WHERE region in (<<$RegionNames>>) In the following more complex example, a dataset author refers to two dataset parameters twice based on a user's first and last names that can be selected in a dashboard filter control: SELECT Region, Country, OrderDate, Sales FROM transactions WHERE region= (Case WHEN <<$UserFIRSTNAME>> In (select firstname from user where region='region1') and <<$UserLASTNAME>> In (select lastname from user where region='region1') THEN 'region1' WHEN <<$UserFIRSTNAME>> In (select firstname from user where region='region2') and <<$UserLASTNAME>> In (select lastname from user where region='region2') THEN 'region2' ELSE 'region3' END) You can also use parameters in SELECT clauses to create new columns in a dataset from user input: SELECT Region, Country, date, (case WHEN <<$RegionName>>='EU' Inserting dataset parameters into custom SQL 154 Amazon QuickSight User Guide THEN sum(sales) * 0.93 --convert US dollar to euro WHEN <<$RegionName>>='CAN' THEN sum(sales) * 0.78 --convert US |
amazon-quicksight-user-051 | amazon-quicksight-user.pdf | 51 | transactions WHERE region= (Case WHEN <<$UserFIRSTNAME>> In (select firstname from user where region='region1') and <<$UserLASTNAME>> In (select lastname from user where region='region1') THEN 'region1' WHEN <<$UserFIRSTNAME>> In (select firstname from user where region='region2') and <<$UserLASTNAME>> In (select lastname from user where region='region2') THEN 'region2' ELSE 'region3' END) You can also use parameters in SELECT clauses to create new columns in a dataset from user input: SELECT Region, Country, date, (case WHEN <<$RegionName>>='EU' Inserting dataset parameters into custom SQL 154 Amazon QuickSight User Guide THEN sum(sales) * 0.93 --convert US dollar to euro WHEN <<$RegionName>>='CAN' THEN sum(sales) * 0.78 --convert US dollar to Canadian Dollar ELSE sum(sales) -- US dollar END ) as "Sales" FROM transactions WHERE region = <<$RegionName>> To create a custom SQL query or to edit an existing query before adding a dataset parameter, see Using SQL to customize data. When you apply custom SQL with a dataset parameter, <<$parameter_name>> is used as a placeholder value. When a user chooses one of the parameter values from a control, QuickSight replaces the placeholder with the values that the user selects on the dashboard. In the following example, the user enters a new custom SQL query that filters data by state: select * from all_flights where origin_state_abr = <<$State>> The default value of the parameter is applied to the SQL query and the results appear in the Preview pane. In the following screenshot, the default value of the State parameter is IL, or Illinois. The SQL query filters the data from the dataset and returns all entries in the dataset where the origin state is IL. Inserting dataset parameters into custom SQL 155 Amazon QuickSight User Guide Adding dataset parameters to calculated fields You can also add dataset parameters to calculated field expressions using the format ${parameter_name}. When you create a calculation, you can choose from the existing parameters from the list of parameters under the Parameters list. You can't create a calculated field that contains a multivalued parameter. For more information on adding calculated fields, see Using calculated fields with parameters in Amazon QuickSight. Adding dataset parameters to filters For datasets in direct query mode, dataset authors can use dataset parameters in filters without custom SQL. Dataset parameters can't be added to filters if the dataset is in SPICE. To add a dataset parameter to a filter 1. Open the dataset page of the dataset that you want to create a filter for. Choose Filters on the left, and then choose Add filter. 2. Enter the name that you want the filter to have and choose the field that you want filtered in the dropdown. Adding dataset parameters to calculated fields 156 Amazon QuickSight User Guide 3. After you create the new filter, navigate to the filter in the Filters pane, choose the ellipsis 4. 5. 6. (three dots) next to the filter, and then choose Edit. For Filter type, choose Custom filter. For Filter condition, choose the condition that you want. Select the Use parameter box and choose the dataset parameter that you want the filter to use. 7. When you are finished making changes, choose Apply. 
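The parameter setup described in the preceding sections can also be scripted when a dataset is created through the API. The following AWS CLI sketch is illustrative only: the account ID, data source ARN, column list, and parameter ID are placeholder assumptions, and the exact request shape should be confirmed against the CreateDataSet entry in the Amazon QuickSight API Reference. It creates a direct query dataset whose custom SQL references a single-value string parameter named RegionName.

# Create a direct query dataset with custom SQL and a dataset parameter.
# All IDs and ARNs below are placeholders for illustration.
aws quicksight create-data-set \
    --aws-account-id 111122223333 \
    --data-set-id transactions-by-region \
    --name "Transactions by region" \
    --import-mode DIRECT_QUERY \
    --physical-table-map '{
        "transactionsSql": {
            "CustomSql": {
                "DataSourceArn": "arn:aws:quicksight:us-east-1:111122223333:datasource/my-athena-source",
                "Name": "transactions-by-region",
                "SqlQuery": "SELECT region, segment, sales FROM transactions WHERE region = <<$RegionName>>",
                "Columns": [
                    { "Name": "region", "Type": "STRING" },
                    { "Name": "segment", "Type": "STRING" },
                    { "Name": "sales", "Type": "DECIMAL" }
                ]
            }
        }
    }' \
    --dataset-parameters '[
        {
            "StringDatasetParameter": {
                "Id": "regionparam01",
                "Name": "RegionName",
                "ValueType": "SINGLE_VALUED",
                "DefaultValues": { "StaticValues": ["US"] }
            }
        }
    ]'

After the dataset exists, the parameter appears on the dataset's Parameters pane just as if it had been created in the console, and it can be mapped to analysis parameters as described in the next section.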
Adding dataset parameters to filters 157 Amazon QuickSight User Guide Adding dataset parameters to filters 158 Amazon QuickSight User Guide Using dataset parameters in QuickSight analyses Once you create a dataset parameter, after you add the dataset to an analysis, map the dataset parameter to a new or existing analysis parameter. After you map a dataset parameter to an analysis parameter, you can use them with filters, controls, and any other analysis parameter feature. You can manage your dataset parameters in the Parameters pane of the analysis that is using the dataset that the parameters belong to. In the Dataset Parameters section of the Parameters pane, you can choose to see only the unmapped dataset parameters (default). Alternatively, you can choose to see all mapped and unmapped dataset parameters by choosing ALL from the Viewing dropdown. Mapping dataset parameters in new QuickSight analyses When you create a new analysis from a dataset that contains parameters, you need to map the dataset parameters to the analysis before you can use them. This is also true when you add a dataset with parameters to an analysis. You can view all unmapped parameters in an analysis in the Parameters pane of the analysis. Alternatively, choose VIEW in the notification message that appears in the top right of the page when you create the analysis or add the dataset. To map a dataset parameter to an analysis parameter 1. Open the QuickSight console. 2. Choose the analysis that you want to change. 3. Choose the Parameters icon to open the Parameters pane. 4. Choose the ellipsis (three dots) next to the dataset parameter that you want to map, choose Map Parameter, and then choose the |
amazon-quicksight-user-052 | amazon-quicksight-user.pdf | 52 | to an analysis. You can view all unmapped parameters in an analysis in the Parameters pane of the analysis. Alternatively, choose VIEW in the notification message that appears in the top right of the page when you create the analysis or add the dataset. To map a dataset parameter to an analysis parameter 1. Open the QuickSight console. 2. Choose the analysis that you want to change. 3. Choose the Parameters icon to open the Parameters pane. 4. Choose the ellipsis (three dots) next to the dataset parameter that you want to map, choose Map Parameter, and then choose the analysis parameter that you want to map your dataset parameter to. If your analysis doesn't have any analysis parameters, you can choose Map parameter and Create new to create an analysis parameter that is automatically mapped to the dataset parameter upon creation. a. (Optional) In the Create new parameter pop-up that appears, for Name, enter a name for the new analysis parameter. Using dataset parameters in QuickSight analyses 159 Amazon QuickSight User Guide b. c. d. (Optional) For Static default value, choose the static default value that you want the parameter to have. (Optional) Choose Set a dynamic default to set a dynamic default for the new parameter. In the Mapped dataset parameters table, you will see the dataset parameter that you are mapping to the new analysis parameter. You can add other dataset parameters to this analysis parameter by choosing the ADD DATASET PARAMETER dropdown and then choosing the parameter that you want to map. You can unmap a dataset parameter by choosing the Remove button next to the dataset parameter that you want to remove. The following screenshot shows the configuration of a new analysis parameter that is mapped to a dataset parameter. For more information on creating analysis parameters, see Setting up parameters in Amazon QuickSight. Using dataset parameters in QuickSight analyses 160 Amazon QuickSight User Guide When you map a dataset parameter to an analysis parameter, the analysis parameter represents the dataset parameter wherever it is used in the analysis. You can also map and unmap dataset parameters to analysis parameters in the Edit parameter window. To open the Edit parameter window, navigate to the Parameters pane, choose the ellipsis (three dots) next to the analysis parameter that you want to change, and then choose Edit parameter. You can add other dataset parameters to this analysis parameter by choosing the ADD DATASET PARAMETER dropdown and then choosing the parameter that you want to map. You can unmap a dataset parameter by choosing the Remove button next to the dataset parameter that you want to remove. You can also remove all mapped dataset parameters by choosing REMOVE ALL. When you are done making changes, choose Update. When you delete an analysis parameter, all dataset parameters are unmapped from the analysis and appear in the UNMAPPED section of the Parameters pane. You can only map a dataset parameter to one analysis parameter at a time. To map a dataset parameter to a different analysis parameter, unmap the dataset parameter and then map it to the new analysis parameter. Using dataset parameters in QuickSight analyses 161 Amazon QuickSight User Guide Adding filter controls to mapped analysis parameters After you map a dataset parameter to an analysis parameter in QuickSight, you can create filter controls for filters, actions, calculated fields, titles, descriptions, and URLs. 
To add a control to a mapped parameter 1. In the Parameters pane of the analysis page, choose the ellipsis (three dots) next to the mapped analysis parameter that you want, and then choose Add control. 2. In the Add control window that appears, enter the Name that you want and choose the Style that you want the control to have. For single value controls, choose between Dropdown, List, and Text field. For multivalue controls, choose Dropdown. 3. Choose Add to create the control. Advanced use cases of dataset parameters This section covers more advanced options and use cases working with dataset parameters and dropdown controls. Use the following walkthroughs to create dynamic dropdown values with dataset parameters. Using multivalue controls with dataset parameters When you use dataset parameters that are inserted into the custom SQL of a dataset, the dataset parameters commonly filter data by values from a specific column. If you create a dropdown control and assign the parameter as the value, the dropdown only shows the value that the parameter filtered. The following procedure shows how you can create a control that is mapped to a dataset parameter and shows all unfiltered values. To populate all assigned values in a dropdown control 1. Create a new single–column dataset in SPICE or direct query that includes all unique values from the original dataset. For example, let's say that your original dataset is using the |
amazon-quicksight-user-053 | amazon-quicksight-user.pdf | 53 | of a dataset, the dataset parameters commonly filter data by values from a specific column. If you create a dropdown control and assign the parameter as the value, the dropdown only shows the value that the parameter filtered. The following procedure shows how you can create a control that is mapped to a dataset parameter and shows all unfiltered values. To populate all assigned values in a dropdown control 1. Create a new single–column dataset in SPICE or direct query that includes all unique values from the original dataset. For example, let's say that your original dataset is using the following custom SQL: select * from all_flights where origin_state_abr = <<$State>> Advanced use 162 Amazon QuickSight User Guide To create a single–column table with all unique origin states, apply the following custom SQL to the new dataset: SELECT distinct origin_state_abr FROM all_flights order by origin_state_abr asc The SQL expression returns all unique states in alphabetic order. The new dataset does not have any dataset parameters. 2. Enter a Name for the new dataset, and then save and publish the dataset. In our example, the new dataset is called State Codes. 3. Open the analysis that contains the original dataset, and add the new dataset to the analysis. For information on adding datasets to an existing analysis, see Adding a dataset to an analysis. 4. Navigate to the Controls pane and find the dropdown control that you want to edit. Choose the ellipsis (three dots) next to the control, and then choose Edit. 5. In the Format control that appears on the left, and choose Link to a dataset field in the Values section. 6. For the Dataset dropdown that appears, choose the new dataset that you created. In our example, the State Codes dataset is chosen. 7. For the Field dropdown that appears, choose the appropriate field. In our example, the origin_state_abr field is chosen. Advanced use 163 Amazon QuickSight User Guide After you finish linking the control to the new dataset, all unique values appear in the control's dropdown. These include the values that are filtered out by the dataset parameter. Advanced use 164 Amazon QuickSight User Guide Using controls with Select all options By default, when one or more dataset parameters are mapped to an analysis parameter and added to a control, the Select all option is not available. The following procedure shows a workaround that uses the same example scenario from the previous section. Note This walkthrough is for datasets that are small enough to load in direct query. If you have a large dataset and want to use the Select All option, it is recommended that you load the dataset into SPICE. However, if you want to use the Select All option with dataset parameters, this walkthrough describes a way to do so. Advanced use 165 Amazon QuickSight User Guide To begin, let's say you have a direct query dataset with custom SQL that contains a multivalue parameter called States: select * from all_flights where origin_state_abr in (<<$States>>) To use the Select all option in a control that uses dataset parameters 1. In the Parameters pane of the analysis, find the dataset parameter that you want to use and choose Edit from the ellipsis (three dots) next to the parameter. 2. In the Edit parameter window that appears, enter a new default value in the Static multiple default values section. In our example, the default value is All States. 
Note that the example uses a leading space character so that the default value appears as the first item in the control. 3. Choose Update to update the parameter. 4. Navigate to the dataset that contains the dataset parameter that you're using in the analysis- by-analysis. Edit the custom SQL of the dataset to include a default use case for your new static multiple default values. Using the All States example, the SQL expression appears as follows: select * from public.all_flights where ' All States' in (<<$States>>) or Advanced use 166 Amazon QuickSight User Guide origin_state_abr in (<<$States>>) If the user chooses All States in the control, the new SQL expression returns all unique records. If the user chooses a different value from the control, the query returns values that were filtered by the dataset parameter. Using controls with Select all and multivalue options You can combine the previous Select all procedure with the multivalue control method discussed earlier to create dropdown controls that contain a Select all value in addition to multiple values that the user can select. This walkthrough assumes that you have followed the previous procedures, that you know how to map dataset parameters to analysis parameters, and that you can create controls in an analysis. For more information on mapping analysis parameters, see Mapping dataset parameters in new QuickSight analyses. For more information |
amazon-quicksight-user-054 | amazon-quicksight-user.pdf | 54 | returns values that were filtered by the dataset parameter. Using controls with Select all and multivalue options You can combine the previous Select all procedure with the multivalue control method discussed earlier to create dropdown controls that contain a Select all value in addition to multiple values that the user can select. This walkthrough assumes that you have followed the previous procedures, that you know how to map dataset parameters to analysis parameters, and that you can create controls in an analysis. For more information on mapping analysis parameters, see Mapping dataset parameters in new QuickSight analyses. For more information on creating controls in an analysis that is using dataset parameters, see Adding filter controls to mapped analysis parameters. To add multiple values to a control with a Select all option and a mapped dataset parameter 1. Open the analysis that has the original dataset with a Select all custom SQL expression and a second dataset that includes all possible values of the filtered column that exists in the original dataset. 2. Navigate to the secondary dataset that was created earlier to return all values of a filtered column. Add a custom SQL expression that adds your previously configured Select all option to the query. The following example adds the All States record to the top of the list of returned values of the dataset: (Select ' All States' as origin_state_abr) Union All (SELECT distinct origin_state_abr FROM all_flights order by origin_state_abr asc) 3. Go back to the analysis that the datasets belong to and map the dataset parameter that you are using to the analysis parameter that you created in step 3 of the previous procedure. The analysis parameter and dataset parameter can have the same name. In our example, the analysis parameter is called States. 4. Create a new filter control or edit an existing filter control and choose Hide Select All to hide the disabled Select All option that appears in multivalue controls. Advanced use 167 Amazon QuickSight User Guide Once you create the control, users can use the same control to select all or multiple values of a filtered column in a dataset. Using row-level security in Amazon QuickSight Applies to: Enterprise Edition In the Enterprise edition of Amazon QuickSight, you can restrict access to a dataset by configuring row-level security (RLS) on it. You can do this before or after you have shared the dataset. When you share a dataset with RLS with dataset owners, they can still see all the data. When you share it with readers, however, they can only see the data restricted by the permission dataset rules. Also, when you embed Amazon QuickSight dashboards in your application for unregistered users of QuickSight, you can use row-level security (RLS) with tags. In this case, you use tags to specify which data your users can see in the dashboard depending on who they are. You can restrict access to a dataset using username or group-based rules, tag-based rules, or both. Choose user-based rules if you want to secure data for users or groups provisioned (registered) in QuickSight. To do so, select a permissions dataset that contains rules set by columns for each user or group accessing the data. Only users or groups identified in the rules have access to data. Choose tag-based rules only if you are using embedded dashboards and want to secure data for users not provisioned (unregistered users) in QuickSight. 
To do so, define tags on columns to secure data. Values to tags must be passed when embedding dashboards. Topics • Using row-level security with user-based rules to restrict access to a dataset • Using row-level security with tag-based rules to restrict access to a dataset when embedding dashboards for anonymous users Using row-level security with user-based rules to restrict access to a dataset Applies to: Enterprise Edition Using row-level security 168 Amazon QuickSight User Guide In the Enterprise edition of Amazon QuickSight, you can restrict access to a dataset by configuring row-level security (RLS) on it. You can do this before or after you have shared the dataset. When you share a dataset with RLS with dataset owners, they can still see all the data. When you share it with readers, however, they can only see the data restricted by the permission dataset rules. By adding row-level security, you can further control their access. Note When applying SPICE datasets to row-level security, each field in the dataset can contain up to 2,047 Unicode characters. Fields that contain more than this quota are truncated during ingestion. To learn more about SPICE data quotas, see SPICE quotas for imported data. To do this, you create a query or file that has one column named UserName, GroupName, or both. Or you can create a query or file that has one column named UserARN, |
amazon-quicksight-user-055 | amazon-quicksight-user.pdf | 55 | readers, however, they can only see the data restricted by the permission dataset rules. By adding row-level security, you can further control their access. Note When applying SPICE datasets to row-level security, each field in the dataset can contain up to 2,047 Unicode characters. Fields that contain more than this quota are truncated during ingestion. To learn more about SPICE data quotas, see SPICE quotas for imported data. To do this, you create a query or file that has one column named UserName, GroupName, or both. Or you can create a query or file that has one column named UserARN, GroupARN, or both. You can think of this as adding a rule for that user or group. Then you can add one column to the query or file for each field that you want to grant or restrict access to. For each user or group name that you add, you add the values for each field. You can use NULL (no value) to mean all values. To see examples of dataset rules, see Creating dataset rules for row-level security. To apply the dataset rules, you add the rules as a permissions dataset to your dataset. Keep in mind the following points: • The permissions dataset can't contain duplicate values. Duplicates are ignored when evaluating how to apply the rules. • Each user or group specified can see only the rows that match the field values in the dataset rules. • If you add a rule for a user or group and leave all other columns with no value (NULL), you grant them access to all the data. • If you don't add a rule for a user or group, that user or group can't see any of the data. • The full set of rule records that are applied per user must not exceed 999. This limitation applies to the total number of rules that are directly assigned to a username, plus any rules that are assigned to the user through group names. • If a field includes a comma (,) Amazon QuickSight treats each word separated from another by a comma as an individual value in the filter. For example, in ('AWS', 'INC'), AWS,INC is Using user-based rules 169 Amazon QuickSight User Guide considered as two strings: AWS and INC. To filter with AWS,INC, wrap the string with double quotation marks in the permissions dataset. If the restricted dataset is a SPICE dataset, the number of filter values applied per user can't exceed 192,000 for each restricted field. This applies to the total number of filter values that are directly assigned to a username, plus any filter values that are assigned to the user through group names. If the restricted dataset is a direct query dataset, the number of filter values applied per user varies from data sources. Exceeding the filter value limit may cause the visual rendering to fail. We recommend adding an additional column to your restricted dataset to divide the rows into groups based on the original restricted column so that the filter list can be shortened. Amazon QuickSight treats spaces as literal values. If you have a space in a field that you are restricting, the dataset rule applies to those rows. Amazon QuickSight treats both NULLs and blanks (empty strings "") as "no value". A NULL is an empty field value. Depending on what data source your dataset is coming from, you can configure a direct query to access a table of permissions. Terms with spaces inside them don't need to be delimited with quotes. If you use a direct query, you can easily change the query in the original data source. Or you can upload dataset rules from a text file or spreadsheet. 
If you are using a comma- separated value (CSV) file, don't include any spaces on the given line. Terms with spaces inside them need to be delimited with quotation marks. If you use dataset rules that are file-based, apply any changes by overwriting the existing rules in the dataset's permissions settings. Datasets that are restricted are marked with the word RESTRICTED in the Datasets screen. Child datasets that are created from a parent dataset that has RLS rules active retain the same RLS rules that the parent dataset has. You can add more RLS rules to the child dataset, but you can't remove the RLS rules that the dataset inherits from the parent dataset. Child datasets that are created from a parent dataset that has RLS rules active can only be created with Direct Query. Child datasets that inherit the parent dataset's RLS rules aren't supported in SPICE. Using user-based rules 170 Amazon QuickSight User Guide Row-level security works only for fields containing textual data (string, char, varchar, and so on). It doesn't currently work for dates or numeric fields. |
amazon-quicksight-user-056 | amazon-quicksight-user.pdf | 56 | the same RLS rules that the parent dataset has. You can add more RLS rules to the child dataset, but you can't remove the RLS rules that the dataset inherits from the parent dataset. Child datasets that are created from a parent dataset that has RLS rules active can only be created with Direct Query. Child datasets that inherit the parent dataset's RLS rules aren't supported in SPICE. Using user-based rules 170 Amazon QuickSight User Guide Row-level security works only for fields containing textual data (string, char, varchar, and so on). It doesn't currently work for dates or numeric fields. Anomaly detection is not supported for datasets that use row-level security (RLS). Creating dataset rules for row-level security Use the following procedure to create a permissions file or query to use as dataset rules. To create a permissions files or query to use as dataset rules 1. Create a file or a query that contains the dataset rules (permissions) for row-level security. It doesn't matter what order the fields are in. However, all the fields are case-sensitive. Make sure that they exactly match the field names and values. The structure should look similar to one of the following. Make sure that you have at least one field that identifies either users or groups. You can include both, but only one is required, and only one is used at a time. The field that you use for users or groups can have any name you choose. Note If you are specifying groups, use only Amazon QuickSight groups or Microsoft AD groups. The following example shows a table with groups. GroupName Region Segment EMEA-Sales EMEA US-Sales US-Sales US-Sales US US US Enterprise, SMB, Startup Enterprise SMB, Startup Startup APAC-Sales APAC Enterprise, SMB Using user-based rules 171 Amazon QuickSight User Guide GroupName Region Segment Corporate-Reporting APAC-Sales APAC Enterprise, Startup The following example shows a table with usernames. UserName Region Segment AlejandroRosalez EMEA MarthaRivera NikhilJayashankar PauloSantos US US US Enterprise, SMB, Startup Enterprise SMB, Startup Startup SaanviSarkar APAC Enterprise, SMB sales-tps@example. com ZhangWei APAC Enterprise, Startup The following example shows a table with user and group Amazon Resource Names (ARNs). UserARN GroupARN Region arn:aws:quicksight arn:aws:quicksight APAC :us-east-1:1234567 :us-east-1:1234567 89012:user/default 89012:group/defaul /Bob t/group-1 Using user-based rules 172 Amazon QuickSight User Guide UserARN GroupARN Region arn:aws:quicksight arn:aws:quicksight US :us-east-1:1234567 :us-east-1:1234567 89012:user/default 89012:group/defaul /Sam t/group-2 Or if you use a .csv file, the structure should look similar to one of the following. UserName,Region,Segment AlejandroRosalez,EMEA,"Enterprise,SMB,Startup" MarthaRivera,US,Enterprise NikhilJayashankars,US,SMB PauloSantos,US,Startup SaanviSarkar,APAC,"SMB,Startup" sales-tps@example.com,"","" ZhangWei,APAC-Sales,"Enterprise,Startup" GroupName,Region,Segment EMEA-Sales,EMEA,"Enterprise,SMB,Startup" US-Sales,US,Enterprise US-Sales,US,SMB US-Sales,US,Startup APAC-Sales,APAC,"SMB,Startup" Corporate-Reporting,"","" APAC-Sales,APAC,"Enterprise,Startup" UserARN,GroupARN,Region arn:aws:quicksight:us-east-1:123456789012:user/Bob,arn:aws:quicksight:us- east-1:123456789012:group/group-1,APAC arn:aws:quicksight:us-east-1:123456789012:user/Sam,arn:aws:quicksight:us- east-1:123456789012:group/group-2,US Following is a SQL example. 
/* for users*/ select User as UserName, Region, Segment from tps-permissions; Using user-based rules 173 Amazon QuickSight /* for groups*/ select Group as GroupName, Region, Segment from tps-permissions; User Guide 2. Create a dataset for the dataset rules. To make sure that you can easily find it, give it a meaningful name, for example Permissions-Sales-Pipeline. Rules Dataset flagging for row-level security Use the following procedure to appropriately flag a dataset as a rules dataset. Rules Dataset is a flag that distinguishes permission datasets used for row-level security from regular datasets. If a permissions dataset was applied to a regular dataset before March 31, 2025, it will have a Rules Dataset flag in the Dataset landing page. If a permissions dataset was not applied to a regular dataset by March 31, 2025, it will be categorized as a regular dataset. To use it as a rules dataset, duplicate the permissions dataset and flag it as a rules dataset on the console when creating the dataset. Select EDIT DATASET and under the options, choose DUPLICATE AS RULES DATASET, as shown below. To successfully duplicate it as a rules dataset, ensure the original dataset has: 1. Required user metadata or group metadata column(s) and 2. Only string type columns. Using user-based rules 174 Amazon QuickSight User Guide To create a new rules dataset on the console, select NEW RULES DATASET under the NEW DATASET dropdown. When creating a rules dataset programmatically, add the following parameter: UseAs: RLS_RULES. This is an optional parameter that is only used to create a rules dataset. Once a dataset has been created, either through the console or programmatically, and flagged as either a rules dataset or a regular dataset, it cannot be changed. Once datasets are flagged as rules datasets, Amazon QuickSight will apply strict SPICE ingestion rules on them. To ensure data integrity, SPICE ingestions for rules datasets will fail if there are invalid rows or cells exceeding length limits. You must fix the ingestion issues in order to re-initiate a successful ingestion. Strict ingestion rules are only applicable to rules datasets. Regular datasets will not have dataset ingestion failures when there are skipped rows or string truncations. Applying row-level security Use the following procedure to |
amazon-quicksight-user-057 | amazon-quicksight-user.pdf | 57 | programmatically, and flagged as either a rules dataset or a regular dataset, it cannot be changed. Once datasets are flagged as rules datasets, Amazon QuickSight will apply strict SPICE ingestion rules on them. To ensure data integrity, SPICE ingestions for rules datasets will fail if there are invalid rows or cells exceeding length limits. You must fix the ingestion issues in order to re-initiate a successful ingestion. Strict ingestion rules are only applicable to rules datasets. Regular datasets will not have dataset ingestion failures when there are skipped rows or string truncations. Applying row-level security Use the following procedure to apply row-level security (RLS) by using a file or query as a dataset that contains the rules for permissions. To apply row-level security by using a file or query 1. Confirm that you have added your rules as a new dataset. If you added them, but don't see them under the list of datasets, refresh the screen. 2. On the Datasets page, choose the dataset 3. On the dataset details page that opens, for Row-level security, choose Set up. 4. On the Set up row-level security page that opens, choose User-based rules. Using user-based rules 175 Amazon QuickSight User Guide 5. From the list of datasets that appears, choose your permissions dataset. If your permissions dataset doesn't appear on this screen, return to your datasets, and refresh the page. 6. For Permissions policy choose Grant access to dataset. Each dataset has only one active permissions dataset. If you try to add a second permissions dataset, it overwrites the existing one. Important Some restrictions apply to NULL and empty string values when working with row-level security: • If your dataset has NULL values or empty strings ("") in the restricted fields, these rows are ignored when the restrictions are applied. • Inside the permissions dataset, NULL values and empty strings are treated the same. For more information, see the following table. • To prevent accidentally exposing sensitive information, Amazon QuickSight skips empty RLS rules that grant access to everyone. An empty RLS rule occurs when all columns of a row have no value. QuickSight RLS treats NULL, empty strings (""), or empty comma separated strings (for example ",,,") as no value. • After skipping empty rules, other nonempty RLS rules still apply. • If a permission dataset has only empty rules and all of them were skipped, no one will have access to any data restricted by this permission dataset. Rules for UserName, GroupName, Region, Segment Granted access AlejandroRosalez,EMEA-Sales ,EMEA,"Enterprise,SMB,Startup" Sees all EMEA Enterpris e, SMB, and Startup sales-tps@example.com,Corporate- Reporting,"","" Sees all rows User or group has no entry Sees no rows Using user-based rules 176 Amazon QuickSight User Guide Rules for UserName, GroupName, Region, Segment Granted access “”,“”,“”,“” NULL,“”,“”,NULL Skipped; sees no rows if all other rules are empty. Skipped; sees no rows if all other rules are empty. Anyone whom you shared your dashboard with can see all the data in it, unless the dataset is restricted by dataset rules. 7. Choose Apply dataset to save your changes. Then, on the Save data set rules? page, choose Apply and activate. Changes in permissions apply immediately to existing users. 8. (Optional) To remove permissions, first remove the dataset rules from the dataset. Make certain that the dataset rules are removed. 
Then, choose the permissions dataset and choose Remove data set. To overwrite permissions, choose a new permissions dataset and apply it. You can reuse the same dataset name. However, make sure to apply the new permissions in the Permissions screen to make these permissions active. SQL queries dynamically update, so these can be managed outside of Amazon QuickSight. For queries, the permissions are updated when the direct query cache is automatically refreshed. If you delete a file-based permissions dataset before you remove it from the target dataset, restricted users can't access the dataset. While the dataset is in this state, it remains marked as RESTRICTED. However, when you view Permissions for that dataset, you can see that it has no selected dataset rules. To fix this, specify new dataset rules. Creating a dataset with the same name is not enough to fix this. You must choose the new permissions dataset on the Permissions screen. This restriction doesn't apply to direct SQL queries. Using user-based rules 177 Amazon QuickSight User Guide Using row-level security with tag-based rules to restrict access to a dataset when embedding dashboards for anonymous users Applies to: Enterprise Edition Intended audience: Amazon QuickSight Administrators and Amazon QuickSight developers When you embed Amazon QuickSight dashboards in your application for users who are not provisioned (registered) in QuickSight, you can use row-level security (RLS) with tags. In this case, you use tags to specify which data your users can see in the dashboard depending on who they are. For |
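Applying the permissions dataset can also be automated when you create or update the restricted dataset through the API. The sketch below is a non-authoritative example: it assumes the dataset's existing definition (the name, physical table map, and import mode that UpdateDataSet requires you to resend) is supplied in a local file named dataset-definition.json, and the permissions dataset ARN is a placeholder. FormatVersion VERSION_1 is assumed here to correspond to rules that use UserName and GroupName columns; confirm the shape against the RowLevelPermissionDataSet entry in the Amazon QuickSight API Reference.

# Attach the rules (permissions) dataset to the restricted dataset.
# dataset-definition.json is assumed to hold the existing Name, PhysicalTableMap,
# and ImportMode values; flags given on the command line override the file.
aws quicksight update-data-set \
    --cli-input-json file://dataset-definition.json \
    --aws-account-id 111122223333 \
    --data-set-id sales-pipeline \
    --row-level-permission-data-set '{
        "Arn": "arn:aws:quicksight:us-east-1:111122223333:dataset/permissions-sales-pipeline",
        "PermissionPolicy": "GRANT_ACCESS",
        "FormatVersion": "VERSION_1",
        "Status": "ENABLED"
    }'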
amazon-quicksight-user-058 | amazon-quicksight-user.pdf | 58 | on the Permissions screen. This restriction doesn't apply to direct SQL queries. Using user-based rules 177 Amazon QuickSight User Guide Using row-level security with tag-based rules to restrict access to a dataset when embedding dashboards for anonymous users Applies to: Enterprise Edition Intended audience: Amazon QuickSight Administrators and Amazon QuickSight developers When you embed Amazon QuickSight dashboards in your application for users who are not provisioned (registered) in QuickSight, you can use row-level security (RLS) with tags. In this case, you use tags to specify which data your users can see in the dashboard depending on who they are. For example, let's say you're a logistics company that has a customer-facing application for various retailers. Thousands of users from these retailers access your application to see metrics related to how their orders are getting shipped from your warehouse. You don't want to manage thousands of users in QuickSight, so you use anonymous embedding to embed the selected dashboards in your application that your authenticated and authorized users can see. However, you want to make sure retailers see only data that is for their business and not for others. You can use RLS with tags to make sure your customers only see data that's relevant to them. To do so, complete the following steps: 1. Add RLS tags to a dataset. 2. Assign values to those tags at runtime using the GenerateEmbedUrlForAnonymousUser API operation. For more information about embedding dashboards for anonymous users using the GenerateEmbedUrlForAnonymousUser API operation, see Embedding QuickSight dashboards for anonymous (unregistered) users. Before you can use RLS with tags, keep in mind the following points: • Using RLS with tags is currently only supported for anonymous embedding, specifically for embedded dashboards that use the GenerateEmbedUrlForAnonymousUser API operation. Using tag-based rules 178 Amazon QuickSight User Guide • Using RLS with tags isn't supported for embedded dashboards that use the GenerateEmbedURLForRegisteredUser API operation or the old GetDashboardEmbedUrl API operation. • RLS tags aren't supported with AWS Identity and Access Management (IAM) or the QuickSight identity type. • When applying SPICE datasets to row-level security, each field in the dataset can contain up to 2,047 Unicode characters. Fields that contain more than this quota are truncated during ingestion. To learn more about SPICE data quotas, see SPICE quotas for imported data. Step 1: Add RLS tags to a dataset You can add tag-based rules to a dataset in Amazon QuickSight. Alternatively, you can call the CreateDataSet or UpdateDataSet API operation and add tag-based rules that way. For more information, see Add RLS tags to a dataset using the API. Use the following procedure to add RLS tags to a dataset in QuickSight. To add RLS tags to a dataset 1. From the QuickSight start page, choose Datasets at left. 2. On the Datasets page, choose the dataset that you want to add RLS to. 3. On the dataset details page that opens, for Row-level security, choose Set up. Using tag-based rules 179 Amazon QuickSight User Guide 4. On the Set up row-level security page that opens, choose Tag-based rules. 5. For Column, choose a column that you want to add tag rules to. For example, in the case for the logistics company, the retailer_id column is used. Only columns with a string data type are listed. 6. For Tag, enter a tag key. 
You can enter any tag name that you want. For example, in the case for the logistics company, the tag key tag_retailer_id is used. Doing this sets row-level security based on the retailer that's accessing the application. 7. (Optional) For Delimiter, choose a delimiter from the list, or enter your own. You can use delimiters to separate text strings when assigning more than one value to a tag. The value for a delimiter can be 10 characters long, at most. 8. (Optional) For Match all, choose the *, or enter your own character or characters. This option can be any character that you want to use when you want to filter by all the values in that column in the dataset. Instead of listing the values one by one, you can use the character. If this value is specified, it can be at least one character, or at most 256 characters long 9. Choose Add. The tag rule is added to the dataset and is listed at the bottom, but it isn't applied yet. To add another tag rule to the dataset, repeat steps 5–9. To edit a tag rule, choose the pencil icon that follows the rule. To delete a tag rule, choose the delete icon that follows the rule. You can add up to 50 tags to a dataset. 10. When you're ready to apply the tag rules to the dataset, choose Apply rules. Using tag-based rules 180 Amazon QuickSight User Guide 11. On the |
amazon-quicksight-user-059 | amazon-quicksight-user.pdf | 59 | at most 256 characters long 9. Choose Add. The tag rule is added to the dataset and is listed at the bottom, but it isn't applied yet. To add another tag rule to the dataset, repeat steps 5–9. To edit a tag rule, choose the pencil icon that follows the rule. To delete a tag rule, choose the delete icon that follows the rule. You can add up to 50 tags to a dataset. 10. When you're ready to apply the tag rules to the dataset, choose Apply rules. Using tag-based rules 180 Amazon QuickSight User Guide 11. On the Turn on tag-based security? page that opens, choose Apply and activate. The tag-based rules are now active. On the Set up row-level securitypage, a toggle appears for you to turn tag rules on and off for the dataset. To turn off all tag-based rules for the dataset, switch the Tag-Based rules toggle off, and then enter "confirm" in the text box that appears. On the Datasets page, a lock icon appears in the dataset row to indicate that tag rules are enabled. You can now use tag rules to set tag values at runtime, described in Step 2: Assign values to RLS tags at runtime. The rules only affect QuickSight readers when active. Important After tags are assigned and enabled on the dataset, make sure to give QuickSight authors permissions to see any of the data in the dataset when authoring a dashboard. To give QuickSight authors permission to see data in the dataset, create a permissions file or query to use as dataset rules. For more information, see Creating dataset rules for row-level security. After you create a tag-based rule, a new Manage rules table appears that shows how your tag- based rules relate to each other. To make changes to the rules listed in the Manage rules table, Using tag-based rules 181 Amazon QuickSight User Guide choose the pencil icon that follows the rule. Then add or remove tags, and choose Update. To apply your updated rule to the dataset, choose Apply. (Optional) Add the OR condition to RLS tags You can also add the OR condition to your tag-based rules to further customize the way data is presented to your QuickSight account users. When you use the OR condition with your tag-based rules, visuals in QuickSight appear if at least one tag defined in the rule is valid. To add the OR condition to your tag-based rules 1. In the Manage rules table, choose Add OR condition. Using tag-based rules 182 Amazon QuickSight User Guide 2. In the Select tag dropdown list that appears, choose the tag that you want to create an OR condition for. You can add up to 50 OR conditions to the Manage rules table. You can add multiple tags to a single column in a dataset, but at least one column tag needs to be included in a rule. 3. Choose Update to add the condition to your rule, then choose Apply to apply the updated rule to your dataset. Using tag-based rules 183 Amazon QuickSight User Guide Add RLS tags to a dataset using the API Alternatively, you can configure and enable tag-based row-level security on your dataset by calling the CreateDataSet or UpdateDataSet API operation. Use the following examples to learn how. CreateDataSet The following is an example for creating a dataset that uses RLS with tags. It assumes the scenario of the logistics company described previously. The tags are defined in the row- level-permission-tag-configuration element. The tags are defined on the columns that you want to secure the data for. 
For more information about this optional element, see RowLevelPermissionTagConfiguration in the Amazon QuickSight API Reference. create-data-set --aws-account-id <value> --data-set-id <value> --name <value> --physical-table-map <value> [--logical-table-map <value>] --import-mode <value> [--column-groups <value>] [--field-folders <value>] [--permissions <value>] [--row-level-permission-data-set <value>] [--column-level-permission-rules <value>] [--tags <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>] [--row-level-permission-tag-configuration '{ "Status": "ENABLED", "TagRules": [ { "TagKey": "tag_retailer_id", "ColumnName": "retailer_id", "TagMultiValueDelimiter": ",", "MatchAllValue": "*" }, { "TagKey": "tag_role", "ColumnName": "role" } Using tag-based rules 184 Amazon QuickSight ], "TagRuleConfigurations": [ tag_retailer_id ], [ tag_role ] }' ] User Guide The tags in this example are defined in the TagRules part of the element. In this example, two tags are defined based on two columns: • The tag_retailer_id tag key is defined for the retailer_id column. In this case for the logistics company, this sets row-level security based on the retailer that's accessing the application. • The tag_role tag key is defined for the role column. In this case for the logistics company, this sets an additional layer of row-level security based on the role of the user accessing your application from a specific retailer. An example is store_supervisor or manager. For each tag, you can define TagMultiValueDelimiter and MatchAllValue. These are optional. • TagMultiValueDelimiter |
amazon-quicksight-user-060 | amazon-quicksight-user.pdf | 60 | this example, two tags are defined based on two columns: • The tag_retailer_id tag key is defined for the retailer_id column. In this case for the logistics company, this sets row-level security based on the retailer that's accessing the application. • The tag_role tag key is defined for the role column. In this case for the logistics company, this sets an additional layer of row-level security based on the role of the user accessing your application from a specific retailer. An example is store_supervisor or manager. For each tag, you can define TagMultiValueDelimiter and MatchAllValue. These are optional. • TagMultiValueDelimiter – This option can be any string that you want to use to delimit the values when you pass them at runtime. The value can be 10 characters long, at most. In this case, a comma is used as the delimiter value. • MatchAllValue – This option can be any character that you want to use when you want to filter by all the values in that column in the dataset. Instead of listing the values one by one, you can use the character. If specified, this value can be at least one character, or at most 256 characters long. In this case, an asterisk is used as the match all value. While configuring the tags for dataset columns, turn them on or off using the mandatory property Status. For enabling the tag rules use the value ENABLED for this property. By turning on tag rules, you can use them to set tag values at runtime, described in Step 2: Assign values to RLS tags at runtime. The following is an example of the response definition. Using tag-based rules 185 Amazon QuickSight User Guide { "Status": 201, "Arn": "arn:aws:quicksight:us-west-2:11112222333:dataset/RLS-Dataset", "DataSetId": "RLS-Dataset", "RequestId": "aa4f3c00-b937-4175-859a-543f250f8bb2" } UpdateDataSet UpdateDataSet You can use the UpdateDataSet API operation to add or update RLS tags for an existing dataset. The following is an example of updating a dataset with RLS tags. It assumes the scenario of the logistics company described previously. update-data-set --aws-account-id <value> --data-set-id <value> --name <value> --physical-table-map <value> [--logical-table-map <value>] --import-mode <value> [--column-groups <value> [--field-folders <value>] [--row-level-permission-data-set <value>] [--column-level-permission-rules <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>] [--row-level-permission-tag-configuration '{ "Status": "ENABLED", "TagRules": [ { "TagKey": "tag_retailer_id", "ColumnName": "retailer_id", "TagMultiValueDelimiter": ",", "MatchAllValue": "*" }, { "TagKey": "tag_role", Using tag-based rules 186 Amazon QuickSight User Guide "ColumnName": "role" } ], "TagRuleConfigurations": [ tag_retailer_id ], [ tag_role ] }' ] The following is an example of the response definition. { "Status": 201, "Arn": "arn:aws:quicksight:us-west-2:11112222333:dataset/RLS-Dataset", "DataSetId": "RLS-Dataset", "RequestId": "aa4f3c00-b937-4175-859a-543f250f8bb2" } Important After tags are assigned and enabled on the dataset, make sure to give QuickSight authors permissions to see any of the data in the dataset when authoring a dashboard. To give QuickSight authors permission to see data in the dataset, create a permissions file or query to use as dataset rules. For more information, see Creating dataset rules for row- level security. 
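After calling CreateDataSet or UpdateDataSet, you can confirm that the tag configuration was stored by describing the dataset. The following sketch reuses placeholder IDs from the examples above and assumes the response exposes the configuration under DataSet.RowLevelPermissionTagConfiguration; verify the field name against the DescribeDataSet entry in the Amazon QuickSight API Reference.

# Inspect the stored row-level security tag configuration for the dataset.
aws quicksight describe-data-set \
    --aws-account-id 111122223333 \
    --data-set-id RLS-Dataset \
    --query 'DataSet.RowLevelPermissionTagConfiguration'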
For more information about the RowLevelPermissionTagConfiguration element, see RowLevelPermissionTagConfiguration in the Amazon QuickSight API Reference. Step 2: Assign values to RLS tags at runtime You can use tags for RLS only for anonymous embedding. You can set values for tags using the GenerateEmbedUrlForAnonymousUser API operation. Using tag-based rules 187 Amazon QuickSight User Guide The following example shows how to assign values to RLS tags that were defined in the dataset in the previous step. POST /accounts/AwsAccountId/embed-url/anonymous-user HTTP/1.1 Content-type: application/json { “AwsAccountId”: “string”, “SessionLifetimeInMinutes”: integer, “Namespace”: “string”, // The namespace to which the anonymous end user virtually belongs “SessionTags”: // Optional: Can be used for row-level security [ { “Key”: “tag_retailer_id”, “Value”: “West,Central,South” } { “Key”: “tag_role”, “Value”: “shift_manager” } ], “AuthorizedResourceArns”: [ “string” ], “ExperienceConfiguration”: { “Dashboard”: { “InitialDashboardId”: “string” // This is the initial dashboard ID the customer wants the user to land on. This ID goes in the output URL. } } } The following is an example of the response definition. HTTP/1.1 Status Content-type: application/json { Using tag-based rules 188 Amazon QuickSight "EmbedUrl": "string", "RequestId": "string" } User Guide RLS support without registering users in QuickSight is supported only in the GenerateEmbedUrlForAnonymousUser API operation. In this operation, under SessionTags, you can define the values for the tags associated with the dataset columns. In this case, the following assignments are defined: • Values West, Central, and South are assigned to the tag_retailer_id tag at runtime. A comma is used for the delimiter, which was defined in TagMultipleValueDelimiter in the dataset. To use call values in the column, you can set the value to *, which was defined as the MatchAllValue when creating the tag. • The value shift_manager is assigned to the tag_role tag. The user using the generated URL can view only the rows having the shift_manager value in the role column. That user can view only the value West, Central, or South in the |
amazon-quicksight-user-061 | amazon-quicksight-user.pdf | 61 | the following assignments are defined: • Values West, Central, and South are assigned to the tag_retailer_id tag at runtime. A comma is used for the delimiter, which was defined in TagMultipleValueDelimiter in the dataset. To use call values in the column, you can set the value to *, which was defined as the MatchAllValue when creating the tag. • The value shift_manager is assigned to the tag_role tag. The user using the generated URL can view only the rows having the shift_manager value in the role column. That user can view only the value West, Central, or South in the retailer_id column. For more information about embedding dashboards for anonymous users using the GenerateEmbedUrlForAnonymousUser API operation, see Embedding QuickSight dashboards for anonymous (unregistered) users, or GenerateEmbedUrlForAnonymousUser in the Amazon QuickSight API Reference Using column-level security to restrict access to a dataset In the Enterprise edition of Amazon QuickSight, you can restrict access to a dataset by configuring column-level security (CLS) on it. A dataset or analysis with CLS enabled has the restricted symbol next to it. By default, all users and groups have access to the data. By using CLS, you can manage access to specific columns in your dataset. If you use an analysis or dashboard that contains datasets with CLS restrictions that you don't have access to, you can't create, view, or edit visuals that use the restricted fields. For most visual types, if a visual has restricted columns that you don't have access to, you can't see the visual in your analysis or dashboard. Using column-level security 189 Amazon QuickSight User Guide Tables and pivot tables behave differently. If a table or pivot table uses restricted columns in the Rows or Columns field wells, and you don't have access to these restricted columns, you can't see the visual in an analysis or dashboard. If a table or pivot table has restricted columns in the Values field well, you can see the table in an analysis or dashboard with only the values that you have access to. The values for restricted columns show as Not Authorized. To enable column-level security on an analysis or dashboard, you need administrator access. To create a new analysis with CLS 1. On the Amazon QuickSight start page, choose the Analyses tab. 2. At upper right, choose New analysis. 3. Choose a dataset, and choose Column-level security. 4. Select the columns that you want to restrict, and then choose Next. By default, all groups and users have access to all columns. 5. Choose who can access each column, and then choose Apply to save your changes. To use an existing analysis for CLS 1. On the Amazon QuickSight start page, choose the Datasets tab. 2. On the Datasets page, open your dataset 3. On the dataset details page that opens, for Column-level security, choose Set up. 4. Select the columns that you want to restrict, and then choose Next. By default, all groups and users have access to all columns. 5. Choose who can access each column, and then choose Apply to save your changes. Using column-level security 190 Amazon QuickSight To create a dashboard with CLS User Guide 1. On the Amazon QuickSight navigation pane, choose the Analyses tab. 2. Choose the analysis that you want to create a dashboard of. 3. At upper right, choose Publish. 4. Choose one of the following: • To create a new dashboard, choose Publish new dashboard as and enter a name for the new dashboard. 
• To replace an existing dashboard, choose Replace an existing dashboard and choose the dashboard from the list. Additionally, you can choose Advanced publish options. For more information, see Publishing dashboards. 5. Choose Publish dashboard. 6. (Optional) Do one of the following: • To publish a dashboard without sharing, choose x at the upper right of the Share dashboard with users screen when it appears. You can share the dashboard later by choosing Share from the application bar. • To share the dashboard, follow the procedure in Sharing Amazon QuickSight dashboards. Running queries as an IAM role in Amazon QuickSight You can enhance data security by using fine-grained access policies rather than broader permissions for data sources connected to Amazon Athena, Amazon Redshift or Amazon S3. You start by creating an AWS Identity and Access Management (IAM) role with permissions to be activated when a person or an API starts a query. Then, an Amazon QuickSight administrator or a developer assigns the IAM Role to an Athena or Amazon S3 data source. With the role in place, any person or API that runs the query has the exact permissions necessary to run the query. Here are some things to consider before you commit to implementing run-as roles to enhance data security: • Articulate how the additional security works to your |
amazon-quicksight-user-062 | amazon-quicksight-user.pdf | 62 | Athena, Amazon Redshift or Amazon S3. You start by creating an AWS Identity and Access Management (IAM) role with permissions to be activated when a person or an API starts a query. Then, an Amazon QuickSight administrator or a developer assigns the IAM Role to an Athena or Amazon S3 data source. With the role in place, any person or API that runs the query has the exact permissions necessary to run the query. Here are some things to consider before you commit to implementing run-as roles to enhance data security: • Articulate how the additional security works to your advantage. Running queries as an IAM role 191 Amazon QuickSight User Guide • Work with your QuickSight administrator to learn if adding roles to data sources helps you to better meet your security goals or requirements. • Ask if this type of security, for the number of data sources and people and applications involved, can be feasibly documented and maintained by your team? If not, then who will undertake that part of the work? • In a structured organization, locate stakeholders in parallel teams in Operations, Development, and IT Support. Ask for their experience, advice, and willingness to support your plan. • Before you launch your project, consider doing a proof of concept that involves the people who need access to the data. The following rules apply to using run-as roles with Athena, Amazon Redshift, and Amazon S3: • Each data source can have only one associated RoleArn. Consumers of the data source, who typically access datasets and visuals, can generate many different types of queries. The role places boundaries on which queries work and which don't work. • The ARN must correspond to an IAM role in the same AWS account as the QuickSight instance that uses it. • The IAM role must have a trust relationship allowing QuickSight to assume the role. • The identity that calls QuickSight's APIs must have permission to pass the role before they can update the RoleArn property. You only need to pass the role when creating or updating the role ARN. The permissions aren't re-evaluated later on. Similarly, the permission isn't required when the role ARN is omitted. • When the role ARN is omitted, the Athena or Amazon S3 data source uses the account-wide role and scope-down policies. • When the role ARN is present, the account-wide role and any scope-down policies are both ignored. For Athena data sources, Lake Formation permissions are not ignored. • For Amazon S3 data sources, both the manifest file and the data specified by the manifest file must be accessible using the IAM role. • The ARN string needs to match an existing IAM role in the AWS account and AWS Region where the data is located and queried. When QuickSight connects to another service in AWS, it uses an IAM role. By default, this less granular version of the role is created by QuickSight for each service it uses, and the role is managed by AWS account administrators. When you add an IAM role ARN with a custom Running queries as an IAM role 192 Amazon QuickSight User Guide permissions policy, you override the broader role for your data sources that need extra protection. For more information about policies, see Create a customer managed policy in the IAM User Guide. Run queries with Athena data sources Use the API to attach the ARN to the Athena data source. To do so, add the role ARN in the RoleArn property of AthenaParameters. 
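For example, if you already have an Athena data source and a custom IAM role, you can attach the role with the update-data-source API operation. The following AWS CLI sketch assumes a data source ID of my-athena-data-source, the primary Athena workgroup, and a role named TestAthenaRoleForQuickSight in account 111122223333; replace these placeholder values with your own.

aws quicksight update-data-source \
    --aws-account-id 111122223333 \
    --data-source-id my-athena-data-source \
    --name "Athena with a custom role" \
    --data-source-parameters '{
        "AthenaParameters": {
            "WorkGroup": "primary",
            "RoleArn": "arn:aws:iam::111122223333:role/TestAthenaRoleForQuickSight"
        }
    }'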
For verification, you can see the role ARN on the Edit Athena data source dialog box. However, Role ARN is a read-only field. To get started, you need a custom IAM role, which we demonstrate in the following example. Keep in mind that the following code example is for learning purposes only. Use this example in a temporary development and testing environment only, and not in a production environment. The policy in this example doesn't secure any specific resource, which must be in a deployable policy. Also, even for development, you need to add your own AWS account information. The following commands create a simple new role and attach a few policies that grant permissions to QuickSight. aws iam create-role \ --role-name TestAthenaRoleForQuickSight \ --description "Test Athena Role For QuickSight" \ --assume-role-policy-document '{ "Version": "2012-10-17", "Statement": [ Athena data sources 193 Amazon QuickSight { "Effect": "Allow", "Principal": { "Service": "quicksight.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }' User Guide After you've identified or created an IAM role to use with each data source, attach the policies by using the attach-role-policy. aws iam attach-role-policy \ --role-name TestAthenaRoleForQuickSight \ --policy-arn arn:aws:iam::222222222222:policy/service-role/ AWSQuickSightS3Policy1 aws iam attach-role-policy \ --role-name TestAthenaRoleForQuickSight \ --policy-arn arn:aws:iam::aws:policy/service-role/AWSQuicksightAthenaAccess1 aws iam attach-role-policy \ --role-name TestAthenaRoleForQuickSight \ --policy-arn arn:aws:iam::aws:policy/AmazonS3Access1 After you verify your permissions, you |
can use the role in QuickSight data sources by creating a new role or updating an existing role. When using these commands, update the AWS account ID and AWS Region to match your own. Remember, these example code snippets are not for production environments. AWS strongly recommends that you identify and use a set of least privilege policies for your production cases.

aws quicksight create-data-source --aws-account-id 222222222222 \
    --region us-east-1 \
    --data-source-id "athena-with-custom-role" \
    --cli-input-json '{
        "Name": "Athena with a custom Role",
        "Type": "ATHENA",
        "DataSourceParameters": {
            "AthenaParameters": {
                "RoleArn": "arn:aws:iam::222222222222:role/TestAthenaRoleForQuickSight"
            }
        }
    }'

Run queries with Amazon Redshift data sources

Connect your Amazon Redshift data with the run-as role to enhance your data security with fine-grained access policies. You can create a run-as role for Amazon Redshift data sources that use a public network or a VPC connection. You specify the connection type that you want to use in the Edit Amazon Redshift data source dialog box. The run-as role is not supported for Amazon Redshift Serverless data sources. The image below shows an Amazon Redshift data source that uses the Public network connection type.

To get started, you need a custom IAM role, which we demonstrate in the following example. The following commands create a sample new role and attach policies that grant permissions to QuickSight.

aws iam create-role \
    --role-name TestRedshiftRoleForQuickSight \
    --description "Test Redshift Role For QuickSight" \
    --assume-role-policy-document '{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": "quicksight.amazonaws.com"
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }'

After you identify or create an IAM role to use with each data source, attach the policies with an attach-role-policy command. If the redshift:GetClusterCredentialsWithIAM permission is attached to the role that you want to use, the values for DatabaseUser and DatabaseGroups are optional.
aws iam attach-role-policy \
    --role-name TestRedshiftRoleForQuickSight \
    --policy-arn arn:aws:iam::111122223333:policy/service-role/AWSQuickSightRedshiftPolicy

aws iam create-policy --policy-name RedshiftGetClusterCredentialsPolicy1 \
    --policy-document file://redshift-get-cluster-credentials-policy.json

aws iam attach-role-policy \
    --role-name TestRedshiftRoleForQuickSight \
    --policy-arn arn:aws:iam::111122223333:policy/RedshiftGetClusterCredentialsPolicy1

// redshift-get-cluster-credentials-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RedshiftGetClusterCredentialsPolicy",
            "Effect": "Allow",
            "Action": [
                "redshift:GetClusterCredentials"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

The example above configures the role for a data source that uses the RoleArn, DatabaseUser, and DatabaseGroups IAM parameters. If you want to establish the connection only through the IAM RoleArn parameter, attach the redshift:GetClusterCredentialsWithIAM permission to your role, as shown in the example below.

aws iam attach-role-policy \
    --role-name TestRedshiftRoleForQuickSight \
    --policy-arn arn:aws:iam::111122223333:policy/RedshiftGetClusterCredentialsPolicy1

// redshift-get-cluster-credentials-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RedshiftGetClusterCredentialsPolicy",
            "Effect": "Allow",
            "Action": [
                "redshift:GetClusterCredentialsWithIAM"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

After you verify your permissions, you can use the role in QuickSight data sources by creating a new role or updating an existing role. When using these commands, update the AWS account ID and AWS Region to match your own.

aws quicksight create-data-source \
    --region us-west-2 \
    --endpoint https://quicksight.us-west-2.quicksight.aws.com/ \
    --cli-input-json file://redshift-data-source-iam.json

The contents of redshift-data-source-iam.json are shown below.

{
    "AwsAccountId": "AWSACCOUNTID",
    "DataSourceId": "DATSOURCEID",
    "Name": "Test redshift demo iam",
    "Type": "REDSHIFT",
    "DataSourceParameters": {
        "RedshiftParameters": {
            "Database": "integ",
            "Host": "redshiftdemocluster.us-west-2.redshift.amazonaws.com",
            "Port": 8192,
            "ClusterId": "redshiftdemocluster",
            "IAMParameters": {
                "RoleArn": "arn:aws:iam::222222222222:role/TestRedshiftRoleForQuickSight",
                "DatabaseUser": "user",
                "DatabaseGroups": ["admin_group", "guest_group", "guest_group_1"]
            }
        }
    },
    "Permissions": [
        {
            "Principal": "arn:aws:quicksight:us-east-1:AWSACCOUNTID:user/default/demoname",
            "Actions": [
                "quicksight:DescribeDataSource",
                "quicksight:DescribeDataSourcePermissions",
                "quicksight:PassDataSource",
                "quicksight:UpdateDataSource",
                "quicksight:DeleteDataSource",
                "quicksight:UpdateDataSourcePermissions"
            ]
        }
    ]
}

If your data source uses the VPC connection type, use the following VPC configuration.
{ "AwsAccountId": "AWSACCOUNTID", "DataSourceId": "DATSOURCEID", "Name": "Test redshift demo iam vpc", "Type": "REDSHIFT", "DataSourceParameters": { "RedshiftParameters": { "Database": "mydb", "Host": "vpcdemo.us-west-2.redshift.amazonaws.com", "Port": 8192, "ClusterId": "vpcdemo", "IAMParameters": { "RoleArn": "arn:aws:iam::222222222222:role/TestRedshiftRoleForQuickSight", Amazon Redshift data sources 198 Amazon QuickSight User Guide "DatabaseUser": "user", "AutoCreateDatabaseUser": true } } }, "VpcConnectionProperties": { "VpcConnectionArn": "arn:aws:quicksight:us-west-2:222222222222:vpcConnection/VPC Name" }, "Permissions": [ { "Principal": "arn:aws:quicksight:us-east-1:222222222222:user/default/demoname", "Actions": [ "quicksight:DescribeDataSource", "quicksight:DescribeDataSourcePermissions", "quicksight:PassDataSource", "quicksight:UpdateDataSource", "quicksight:DeleteDataSource", "quicksight:UpdateDataSourcePermissions" ] } ] } If your data source uses the redshift:GetClusterCredentialsWithIAM permission and doesn't use the DatabaseUser or DatabaseGroups parameters, grant the role access to some or all tables in the schema. To see if a role has been granted SELECT permissions to a specific table, input the following command into the Amazon Redshift Query Editor. SELECT u.usename, |
amazon-quicksight-user-064 | amazon-quicksight-user.pdf | 64 | 8192, "ClusterId": "vpcdemo", "IAMParameters": { "RoleArn": "arn:aws:iam::222222222222:role/TestRedshiftRoleForQuickSight", Amazon Redshift data sources 198 Amazon QuickSight User Guide "DatabaseUser": "user", "AutoCreateDatabaseUser": true } } }, "VpcConnectionProperties": { "VpcConnectionArn": "arn:aws:quicksight:us-west-2:222222222222:vpcConnection/VPC Name" }, "Permissions": [ { "Principal": "arn:aws:quicksight:us-east-1:222222222222:user/default/demoname", "Actions": [ "quicksight:DescribeDataSource", "quicksight:DescribeDataSourcePermissions", "quicksight:PassDataSource", "quicksight:UpdateDataSource", "quicksight:DeleteDataSource", "quicksight:UpdateDataSourcePermissions" ] } ] } If your data source uses the redshift:GetClusterCredentialsWithIAM permission and doesn't use the DatabaseUser or DatabaseGroups parameters, grant the role access to some or all tables in the schema. To see if a role has been granted SELECT permissions to a specific table, input the following command into the Amazon Redshift Query Editor. SELECT u.usename, t.schemaname||'.'||t.tablename, has_table_privilege(u.usename,t.tablename,'select') AS user_has_select_permission FROM pg_user u CROSS JOIN pg_tables t WHERE u.usename = 'IAMR:RoleName' AND t.tablename = tableName For more information about the SELECT action in the Amazon Redshift Query Editor, see SELECT. Amazon Redshift data sources 199 Amazon QuickSight User Guide To grant SELECT permisions to the role, input the following command in the Amazon Redshift Query Editor. GRANT SELECT ON { [ TABLE ] table_name [, ...] | ALL TABLES IN SCHEMA schema_name [, ...] } TO "IAMR:Rolename"; For more information about the GRANT action in the Amazon Redshift Query Editor, see GRANT. Run queries with Amazon S3 data sources Amazon S3 data sources contain a manifest file that QuickSight uses to find and parse your data. You can upload a JSON manifest file through the QuickSight console, or you can provide a URL that points to a JSON file in an S3 bucket. If you choose to provide a URL, QuickSight must be granted permission to access the file in Amazon S3. Use the QuickSight administration console to control access to the manifest file and the data that it references. With the RoleArn property, you can grant access to the manifest file and the data that it references through a custom IAM role that overrides the account-wide role. Use the API to attach the ARN to the manifest file of the Amazon S3 data source. To do so, include the role ARN in the RoleArn property of S3Parameters. For verification, you can see the role ARN in the Edit S3 data source dialog box. However, Role ARN is a read-only field, as shown in the following screenshot. Amazon S3 data sources 200 Amazon QuickSight User Guide To get started, create an Amazon S3 manifest file. Then, you can either upload it to Amazon QuickSight when you create a new Amazon S3 dataset or place the file into the Amazon S3 bucket that contains your data files. View the following example to see what a manifest file might look like: { "fileLocations": [ { "URIPrefixes": [ "s3://quicksightUser-run-as-role/data/" ] } ], "globalUploadSettings": { "format": "CSV", "delimiter": ",", "textqualifier": "'", "containsHeader": "true" } } For instructions on how to create a manifest file, see Supported formats for Amazon S3 manifest files. 
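If you store the manifest in Amazon S3 rather than uploading it in the console, you can copy it to your bucket with the AWS CLI. The following is a minimal sketch; manifest.json and s3-bucket-name are placeholders for your own file and bucket.

aws s3 cp manifest.json s3://s3-bucket-name/manifest.json

Whichever approach you use, the custom IAM role that you attach to the data source must be able to read both the manifest file and the data files that it references.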
After you have created a manifest file and added it to your Amazon S3 bucket or uploaded it to QuickSight, create or update an existing role in IAM that grants s3:GetObject access. The following example illustrates how to update an existing IAM role with the AWS API: aws iam put-role-policy \ --role-name QuickSightAccessToS3RunAsRoleBucket \ --policy-name GrantS3RunAsRoleAccess \ --policy-document '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::s3-bucket-name" }, { "Effect": "Allow", "Action": "s3:GetObject", Amazon S3 data sources 201 Amazon QuickSight User Guide "Resource": "arn:aws:s3:::s3-bucket-name/manifest.json" }, { "Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::s3-bucket-name/*" } ] }' After your policy grants s3:GetObject access, you can begin creating data sources that apply the updated put-role-policy to the Amazon S3 data source's manifest file. aws quicksight create-data-source --aws-account-id 111222333444 --region us-west-2 -- endpoint https://quicksight.us-west-2.quicksight.aws.com/ \ --data-source-id "s3-run-as-role-demo-source" \ --cli-input-json '{ "Name": "S3 with a custom Role", "Type": "S3", "DataSourceParameters": { "S3Parameters": { "RoleArn": "arn:aws:iam::111222333444:role/ QuickSightAccessRunAsRoleBucket", "ManifestFileLocation": { "Bucket": "s3-bucket-name", "Key": "manifest.json" } } } }' After you verify your permissions, you can use the role in QuickSight data sources, either by creating a new role or updating an existing role. When using these commands, be sure to update the AWS account ID and AWS Region to match your own. Amazon S3 data sources 202 Amazon QuickSight Deleting datasets Important User Guide Currently, deleting a dataset is irreversible and can cause irreversible loss of work. Deletes don't cascade to delete dependent objects. Instead, dependent objects stop working, even if you replace the deleted dataset with an identical dataset. Before you delete a dataset, we strongly recommend that you first point each dependent analysis or dashboard to a new dataset. Currently, when you delete a dataset while dependent visuals still exist, the analyses and dashboards that contain those visuals have no way to assimilate new metadata. They remain visible, but they can't function. They can't be repaired by adding an identical dataset. This is because datasets include metadata that is integral |
amazon-quicksight-user-065 | amazon-quicksight-user.pdf | 65 | cause irreversible loss of work. Deletes don't cascade to delete dependent objects. Instead, dependent objects stop working, even if you replace the deleted dataset with an identical dataset. Before you delete a dataset, we strongly recommend that you first point each dependent analysis or dashboard to a new dataset. Currently, when you delete a dataset while dependent visuals still exist, the analyses and dashboards that contain those visuals have no way to assimilate new metadata. They remain visible, but they can't function. They can't be repaired by adding an identical dataset. This is because datasets include metadata that is integral to the analyses and dashboards that depend on that dataset. This metadata is uniquely generated for each dataset. Although the Amazon QuickSight engine can read the metadata, it isn't readable by humans (for example, it doesn't contain field names). So, an exact replica of the dataset has different metadata. Each dataset's metadata is unique, even for multiple datasets that share the same name and the same fields. To delete a dataset 1. Make sure that the dataset isn't being used by any analysis or dashboard that someone wants to keep using. On the Datasets page, choose the dataset that you no longer need. Then choose Delete Dataset at upper-right. 2. If you receive a warning if this dataset is in use, track down all dependent analyses and dashboards and point them at a different dataset. If this isn't feasible, try one or more of these best practices instead of deleting it: • Rename the dataset, so that the dataset is clearly deprecated. • Filter the data, so that the dataset has no rows. • Remove everyone else's access to the dataset. Deleting datasets 203 Amazon QuickSight User Guide We recommend that you use whatever means you can to inform owners of dependent objects that this dataset is being deprecated. Also, make sure that you provide sufficient time for them to take action. 3. After you make sure that there are no dependent objects that will stop functioning after the dataset is deleted, choose the dataset and choose Delete Data Set. Confirm your choice, or choose Cancel. Important Currently, deleting a dataset is irreversible and can cause irreversible loss of work. Deletes don't cascade to delete dependent objects. Instead, dependent objects stop working, even if you replace the deleted dataset with an identical dataset. Adding a dataset to an analysis After you have created an analysis, you can add more datasets to the analysis. Then, you can use them to create more visuals. From within the analysis, you can open any dataset for editing, for example to add or remove fields, or perform other data preparation. You can also remove or replace data sets. Adding a dataset to an analysis 204 Amazon QuickSight User Guide The currently selected dataset displays at the top of the Data pane. This is the dataset that is used by the currently selected visual. Each visual can use only one dataset. Choosing a different visual changes the selected dataset to the one used by that visual. To change the selected dataset manually, choose the dataset list at the top of the Data pane and then choose a different dataset. This deselects the currently selected visual if it doesn't use this dataset. Then, choose a visual that uses the selected dataset. Or choose Add in the Visuals pane to create a new visual using the selected dataset. 
If you choose Suggested on the tool bar to see suggested visuals, you'll see visuals based on the currently selected dataset. Only filters for the currently selected dataset are shown in the Filter pane, and you can only create filters on the currently selected dataset. Topics • Replacing datasets • Remove a dataset from an analysis Use the following procedure to add a dataset to an analysis or edit a dataset used by an analysis. To add a dataset to an analysis 1. On the analysis page, navigate to the Data pane and expand the Dataset dropdown. 2. Choose Add a new dataset to add a dataset. Or, choose Manage datasets to edit a dataset. For more information about editing a dataset, see Editing datasets. Adding a dataset to an analysis 205 Amazon QuickSight User Guide 3. A list of your datasets appears. Choose a dataset and then choose Select. To cancel, choose Cancel. Replacing datasets In an analysis, you can add, edit, replace, or remove datasets. Use this section to learn how to replace your dataset. When you replace a dataset, the new dataset should have similar columns, if you expect the visual to work the way you designed it. Replacing the dataset also clears the undo and redo history for the analysis. This means you can't use the undo and redo buttons on the application bar to navigate |
amazon-quicksight-user-066 | amazon-quicksight-user.pdf | 66 | analysis 205 Amazon QuickSight User Guide 3. A list of your datasets appears. Choose a dataset and then choose Select. To cancel, choose Cancel. Replacing datasets In an analysis, you can add, edit, replace, or remove datasets. Use this section to learn how to replace your dataset. When you replace a dataset, the new dataset should have similar columns, if you expect the visual to work the way you designed it. Replacing the dataset also clears the undo and redo history for the analysis. This means you can't use the undo and redo buttons on the application bar to navigate your changes. So, when you decide to change the dataset, your analysis design should be somewhat stable—not in the middle of an editing phase. To replace a dataset 1. On the analysis page, navigate to the Data pane and expand the Dataset dropdown. 2. Choose Manage datasets. 3. Choose the ellipsis (three dots) next to the dataset that you want to replace, and then choose Replace. 4. In the Select replacement dataset page, choose a dataset from the list, and then choose Select. Replacing datasets 206 Amazon QuickSight Note User Guide Replacing a dataset clears the undo and redo history for this analysis. The dataset is replaced with the new one. The field list and visuals are updated with the new dataset. At this point, you can choose to add a new dataset, edit the new dataset, or replace it with a different one. Choose Close to exit. If your new dataset doesn't match In some cases, the selected replacement dataset doesn't contain all of the fields and hierarchies used by the visuals, filters, parameters, and calculated fields in your analysis. If so, you receive a warning from Amazon QuickSight that shows a list of mismatched or missing columns. If this happens, you can update the field mapping between the two datasets. To update the field mapping 1. 2. In the Mismatch in replacement dataset page, choose Update field mapping. In the Update field mapping page, choose the drop-down menu for the field(s) you want to map and choose a field from the list to map it to. If the field is missing from the new dataset, choose Ignore this field. 3. Choose Confirm to confirm your updates. 4. Choose Close to close the page and return to your analysis. The dataset is replaced with the new one. The fields list and visuals are updated with the new dataset. Any visuals that were using a field that's now missing from the new dataset update to blank. You can readd fields to the visual or remove the visual from your analysis. If you change your mind after replacing the dataset, you can still recover. Let's say you replace the dataset and then find that it's too difficult to change your analysis to match the new dataset. You Replacing datasets 207 Amazon QuickSight User Guide can undo any changes you made to your analysis. You can then replace the new dataset with the original one, or with a dataset that more closely matches the requirements of the analysis. Remove a dataset from an analysis Use the following procedure to delete a dataset from an analysis. To delete a dataset from an analysis 1. On the analysis page, navigate to the Data pane and expand the Dataset dropdown. 2. Choose Manage datasets. 3. Choose the ellipsis (three dots) next to the dataset that you want to replace, and then choose Remove. You can't delete a dataset if it's the only one in the analysis. Remove a dataset from an analysis 208 Amazon QuickSight User Guide 4. Choose Close to close the dialog box. 
Working with data sources in Amazon QuickSight Use a data source to access an external data store. Amazon S3 data sources save the manifest file information. In contrast, Salesforce and database data sources save connection information like credentials. In such cases, you can easily create multiple datasets from the data store without having to re-enter information. Connection information isn't saved for text or Microsoft Excel files. Topics • Creating a data source • Editing a data source • Deleting a data source Creating a data source Intended audience: Amazon QuickSight authors As an analysis author in Amazon QuickSight, you don't need to understand anything about the infrastructure that you use to connect to your data. You set up a new data source only once. After a data source is set up, you can access it from its tile in the Amazon QuickSight console. You can use it to create one or more datasets. After a dataset is set up, you can also access the dataset from its tile. By abstracting away the technical details, Amazon QuickSight simplifies data connections. Note You don't need to store connection settings for files that |
amazon-quicksight-user-067 | amazon-quicksight-user.pdf | 67 | authors As an analysis author in Amazon QuickSight, you don't need to understand anything about the infrastructure that you use to connect to your data. You set up a new data source only once. After a data source is set up, you can access it from its tile in the Amazon QuickSight console. You can use it to create one or more datasets. After a dataset is set up, you can also access the dataset from its tile. By abstracting away the technical details, Amazon QuickSight simplifies data connections. Note You don't need to store connection settings for files that you plan to upload manually. For more information about file uploads, see Creating datasets. Before you begin adding a new data-source connection profile to Amazon QuickSight, first collect the information that you need to connect to the data source. In some cases, you might plan to copy and paste settings from a file. If so, make sure that the file doesn't contain formatting Working with data sources 209 Amazon QuickSight User Guide characters (list bullets or numbers) or blank space characters (spaces, tabs). Also make sure that the file doesn't contain nontext "gremlin" characters such as non-ASCII, null (ASCII 0), and control characters. The following list includes the information to collect the most commonly used settings: • The data source to connect to. Make sure that you know which source that you need to connect to for reporting. This source might be different than the source that stores, processes, or provides access to the data. For example, let's say that you're a new analyst in a large company. You want to analyze data from your ordering system, which you know uses Oracle. However, you can't directly query the online transaction processing (OLTP) data. A subset of data is extracted and stored in a bucket on Amazon S3, but you don't have access to that either. Your new co-workers explain that they use AWS Glue crawlers to read the files and AWS Lake Formation to access them. With more research, you learn that you need to use an Amazon Athena query as your data source in Amazon QuickSight. The point here is that it isn't always obvious which type of data source to choose. • A descriptive name for the new data source tile. Each new data source connection needs a unique and descriptive name. This name displays on the Amazon QuickSight list of existing data sources, which is at the bottom of the Create a Data Set screen. Use a name that makes it easy to distinguish your data sources from other similar data sources. Your new Amazon QuickSight data source profile displays both the database software logo and the custom name that you assign. • The name of the server or instance to connect to. A unique name or other identifier identifies the server connector of the data source on your network. The descriptors vary depending on which one you're connecting to, but it's usually one or more of the following: • Hostname • IP address • Cluster ID • Instance ID • Connector • Site-based URL • The name of the collection of data that you want to use. Creating a data source 210 Amazon QuickSight User Guide The descriptor varies depending on the data source, but it's usually one of the following: • Database • Warehouse • S3 bucket • Catalog • Schema In some cases, you might need to include a manifest file or a query. • The user name that you want Amazon QuickSight to use. 
Every time Amazon QuickSight connects using this data source profile (tile), it uses the user name from the connection settings. In some cases, this might be your personal login. But if you're going to share this with other people, ask the system administrator about creating credentials to use for Amazon QuickSight connections. • What type of connection to use. You can choose a public network or a VPC connection. If you have more than one VPC connection available, identify which one to use to reach your source of data. • Additional settings, such as Secure Sockets Layer (SSL) or API tokens, are required by some data sources. After you save the connection settings as a data source profile, you can create a dataset by selecting its tile. The connections are stored as data source connection profiles in Amazon QuickSight. To view your existing connection profiles, open the Amazon QuickSight start page, choose Datasets, choose New Dataset, and then scroll to the heading FROM EXISTING DATA SOURCES. For a list of supported data source connections and examples, see Amazon QuickSight Connection examples. After you create a data source in QuickSight, you can create a dataset in QuickSight that contains data from the connected data source. You can also update data |
amazon-quicksight-user-068 | amazon-quicksight-user.pdf | 68 | save the connection settings as a data source profile, you can create a dataset by selecting its tile. The connections are stored as data source connection profiles in Amazon QuickSight. To view your existing connection profiles, open the Amazon QuickSight start page, choose Datasets, choose New Dataset, and then scroll to the heading FROM EXISTING DATA SOURCES. For a list of supported data source connections and examples, see Amazon QuickSight Connection examples. After you create a data source in QuickSight, you can create a dataset in QuickSight that contains data from the connected data source. You can also update data source connection information at any time. Creating a data source 211 Amazon QuickSight Editing a data source User Guide You can edit an existing database data source to update the connection information, such as the server name or the user credentials. You can also edit an existing Amazon Athena data source to update the data source name. You can't edit Amazon S3 or Salesforce data sources. Editing a database data source Use the following procedure to edit a database data source. 1. 2. From the QuickSight start page, choose Datasets at left, and then choose New dataset. Scroll down to the FROM EXISTING DATA SOURCES section and choose a database data source. 3. Choose Edit Data Source. 4. Modify the data source information: • If you are editing an autodiscovered database data source, you can modify any of the following settings: • For Data source name, enter a name for the data source. • For Instance ID, choose the name of the instance or cluster that you want to connect to from the list provided. • Database name shows the default database for the Instance ID cluster or instance. If you want to use a different database on that cluster or instance, enter its name. • For UserName, enter the user name of a user account that has permissions to do the following: • Access the target database. • Read (perform a SELECT statement on) any tables in that database that you want to use. • For Password, enter the password for the account that you entered. • If you are editing an external database data source, you can modify any of the following settings: • For Data source name, enter a name for the data source. • For Database server, enter one of the following values: • For an Amazon Redshift cluster, enter the endpoint of the cluster without the port number. For example, if the endpoint value is Editing a data source 212 Amazon QuickSight User Guide clustername.1234abcd.us-west-2.redshift.amazonaws.com:1234, then enter clustername.1234abcd.us-west-2.redshift.amazonaws.com. You can get the endpoint value from the Endpoint field on the cluster detail page on the Amazon Redshift console. • For an Amazon EC2 instance of PostgreSQL, MySQL, or SQL Server, enter the public DNS address. You can get the public DNS value from the Public DNS field on the instance detail pane in the EC2 console. • For a non–Amazon EC2 instance of PostgreSQL, MySQL, or SQL Server, enter the hostname or public IP address of the database server. • For Port, enter the port that the cluster or instance uses for connections. • For Database name, enter the name of the database that you want to use. • For UserName, enter the user name of a user account that has permissions to do the following: • Access the target database. • Read (perform a SELECT statement on) any tables in that database that you want to use. 
• For Password, enter the password for the account that you entered. 5. Choose Validate connection. 6. 7. If the connection validates, choose Update data source. If not, correct the connection information and try validating again. If you want to create a new dataset using the updated data source, proceed with the instructions at Creating a dataset from a database. Otherwise, close the Choose your table dialog box. Editing an Athena data source Use the following procedure to edit an Athena data source. 1. 2. From the QuickSight start page, choose Datasets at left, and then choose New dataset. Scroll down to the FROM EXISTING DATA SOURCES section, and then choose an Athena data source. 3. Choose Edit Data Source. 4. For Data source name, enter a new name. Editing a data source 213 Amazon QuickSight User Guide 5. 6. The Manage data source sharing screen appears. On the Users tab, locate the user that you want to remove. If you want to create a new dataset using the updated data source, proceed with the instructions at Creating a dataset using Amazon Athena data. Otherwise, close the Choose your table dialog box. Deleting a data source You can delete a data source if you no longer need it. Deleting a query-based database data source makes any associated |
amazon-quicksight-user-069 | amazon-quicksight-user.pdf | 69 | Edit Data Source. 4. For Data source name, enter a new name. Editing a data source 213 Amazon QuickSight User Guide 5. 6. The Manage data source sharing screen appears. On the Users tab, locate the user that you want to remove. If you want to create a new dataset using the updated data source, proceed with the instructions at Creating a dataset using Amazon Athena data. Otherwise, close the Choose your table dialog box. Deleting a data source You can delete a data source if you no longer need it. Deleting a query-based database data source makes any associated datasets unusable. Deleting an Amazon S3, Salesforce, or SPICE-based database data source doesn't affect your ability to use any associated datasets. This is because the data is stored in SPICE. However, you can no longer refresh those datasets. To delete a data source 1. In the FROM EXISTING DATA SOURCES section of the Create a Data Set page, choose the data source that you want to delete. 2. Choose Delete. Deleting a data source 214 Amazon QuickSight User Guide Refreshing data in Amazon QuickSight When refreshing data, Amazon QuickSight handles datasets differently depending on the connection properties and the storage location of the data. If QuickSight connects to the data store by using a direct query, the data automatically refreshes when you open an associated dataset, analysis, or dashboard. Filter controls are refreshed automatically every 24 hours. To refresh SPICE datasets, QuickSight must independently authenticate using stored credentials to connect to the data. QuickSight can't refresh manually uploaded data—even from S3 buckets, even though it's stored in SPICE—because QuickSight doesn't store its connection and location metadata. If you want to automatically refresh data that's stored in an S3 bucket, create a dataset by using the S3 data source card. For files that you manually uploaded to SPICE, you refresh these manually by importing the file again. If you want to reuse the name of the original dataset for the new file, first rename or delete the original dataset. Then give the preferred name to the new dataset. Also, check that the field names are the same name and data type. Open your analysis, and replace the original dataset with the new dataset. For more information, see Replacing datasets. You can refresh your SPICE datasets at any time. Refreshing imports the data into SPICE again, so the data includes any changes since the last import. For Amazon QuickSight Standard Edition, you can do a full refresh of your SPICE data at any time. For Amazon QuickSight Enterprise Edition, you can do a full refresh or an incremental refresh (SQL- based data sources only) at any time. Note If your dataset uses CustomSQL, refreshing incrementally might not benefit you. If the SQL query is complex, your database may not be able to optimize the filter with the look-back window. This can cause the query that pulls in the data to take longer than a full refresh. We recommend that you try reducing query execution time by refactoring the custom SQL. Note that results might vary depending on the type of optimization you make. You can refresh SPICE data by using any of the following approaches: 215 Amazon QuickSight User Guide • You can use the options on Datasets page. • You can refresh a dataset while editing a dataset. • You can schedule refreshes in the dataset settings. • You can use the CreateIngestion API operation to refresh the data. 
When you create or edit a SPICE dataset, you can enable email notifications about data loading status. This option notifies the owners of the dataset if the data fails to load or refresh. To turn on notifications, select the Email owners when a refresh fails option that appears on the Finish data set creation screen. This option isn't available for datasets that you create by using Upload a File on the datasets page. In the following topics, you can find an explanation of different approaches to refreshing and working with SPICE data. Topics • Importing data into SPICE • Refreshing SPICE data • Using SPICE data in an analysis • View SPICE ingestion history • Troubleshooting skipped row errors • SPICE ingestion error codes • Updating files in a dataset Importing data into SPICE When you import data into a dataset rather than using a direct SQL query, it becomes SPICE data because of how it's stored. SPICE (Super-fast, Parallel, In-memory Calculation Engine) is the robust in-memory engine that Amazon QuickSight uses. It's engineered to rapidly perform advanced calculations and serve data. In Enterprise edition, data stored in SPICE is encrypted at rest. When you create or edit a dataset, you choose to use either SPICE or a direct query, unless the dataset contains uploaded files. Importing (also called ingesting) your data |
amazon-quicksight-user-070 | amazon-quicksight-user.pdf | 70 | error codes • Updating files in a dataset Importing data into SPICE When you import data into a dataset rather than using a direct SQL query, it becomes SPICE data because of how it's stored. SPICE (Super-fast, Parallel, In-memory Calculation Engine) is the robust in-memory engine that Amazon QuickSight uses. It's engineered to rapidly perform advanced calculations and serve data. In Enterprise edition, data stored in SPICE is encrypted at rest. When you create or edit a dataset, you choose to use either SPICE or a direct query, unless the dataset contains uploaded files. Importing (also called ingesting) your data into SPICE can save time and money: • Your analytical queries process faster. Importing data into SPICE 216 Amazon QuickSight User Guide • You don't need to wait for a direct query to process. • Data stored in SPICE can be reused multiple times without incurring additional costs. If you use a data source that charges per query, you're charged for querying the data when you first create the dataset and later when you refresh the dataset. SPICE capacity is allocated separately for each AWS Region. Default SPICE capacity is automatically allocated to your home AWS Region. For each AWS account, SPICE capacity is shared by all the people using QuickSight in a single AWS Region. The other AWS Regions have no SPICE capacity unless you choose to purchase some. QuickSight administrators can view how much SPICE capacity you have in each AWS Region and how much of it is currently in use. A QuickSight administrator can purchase more SPICE capacity or release unused SPICE capacity as needed. For more information, see Managing SPICE memory capacity. Topics • Estimating the size of SPICE datasets Estimating the size of SPICE datasets The size of a dataset in SPICE relative to your account's SPICE capacity is called logical size. A dataset's logical size isn't the same as the size of the dataset's source file or table. The computation of a dataset's logical size occurs after all the data type transformations and calculated columns are defined during data preparation. These fields are materialized in SPICE in a way that enhances query performance. Any changes you make in an analysis have no effect on the logical size of the data in SPICE. Only changes that are saved in the dataset apply to SPICE capacity. The logical size of a SPICE dataset depends on the data types of the dataset fields and the number of rows in the dataset. The three types of SPICE data are decimals, dates, and strings. You can transform a field's data type during the data preparation phase to fit your data visualization needs. For example, the file you want to import might contain all strings (text). But for these to be used in a meaningful way in an analysis, you prepare the data by changing the data types to their proper form. Fields containing prices can be changed from strings to decimals, and fields containing dates can be changed from strings to dates. You can also create calculated fields and exclude fields that you don't need from the source table. When you are finished preparing your dataset and all transformations are complete, you can estimate the logical size of the final schema. Estimating the size of SPICE datasets 217 Amazon QuickSight Note User Guide Geospatial data types use metadata to interpret the physical data type. Latitude and longitude are numeric. All other geospatial categories are strings. 
In the formula below, decimals and dates are calculated as 8 bytes per cell with 4 extra bytes for auxiliary data. Strings are calculated based on the text's length in UTF-8 encoding plus 24 bytes for auxiliary data. String data types require more space because of the extra indexing required by SPICE to provide high query performance.

Logical dataset size in bytes =
    (Number of Numeric cells * 12 bytes per cell)
    + (Number of Date cells * 12 bytes per cell)
    + SUM ((24 bytes + UTF-8 encoded length) per Text cell)

For example, a dataset with 1 million rows, two decimal columns, one date column, and one string column that averages 10 characters per value takes roughly 1,000,000 * (12 + 12 + 12 + 34) bytes, or about 70 MB, of SPICE capacity.

The formula above should only be used to estimate the size of a single dataset in SPICE. The SPICE capacity usage is the total size of all datasets in an account in a specific region. We don't recommend that you use this formula to estimate the total SPICE capacity that your account is using.

Refreshing SPICE data

Refreshing a dataset

Use the following procedure to refresh a SPICE dataset based on an Amazon S3 or database data source on the Datasets page.

To refresh SPICE data from the datasets page

1. On the Datasets page, choose the dataset to open it.
2. On the dataset details page that opens, choose the Refresh tab and then choose Refresh now.
3. Keep the refresh type as Full
refresh.
4. If you are refreshing an Amazon S3 dataset, choose one of the following options for S3 Manifest:
• To use the same manifest file you last provided to Amazon QuickSight, choose Existing Manifest. If you have changed the manifest file at the file location or URL that you last provided, the data returned reflects those changes.
• To specify a new manifest file by uploading it from your local network, choose Upload Manifest, and then choose Upload manifest file. For Open, choose a file, and then choose Open.
• To specify a new manifest file by providing a URL, enter the URL of the manifest in Input manifest URL. You can find the manifest file URL in the Amazon S3 console by opening the context (right-click) menu for the manifest file, choosing Properties, and looking at the Link box.
5. Choose Refresh.
6. If you are refreshing an Amazon S3 dataset, choose OK, then OK again. If you are refreshing a database dataset, choose OK.

Incrementally refreshing a dataset

Applies to: Enterprise Edition

For SQL-based data sources, such as Amazon Redshift, Amazon Athena, PostgreSQL, or Snowflake, you can refresh your data incrementally within a look-back window of time.

An incremental refresh queries only data defined by the dataset within a specified look-back window. It transfers all insertions, deletions, and modifications to the dataset, within that window's timeframe, from its source to the dataset. The data currently in SPICE that's within that window is deleted and replaced with the updates. With incremental refreshes, less data is queried and transferred for each refresh.

For example, let's say you have a dataset with 180,000 records that contains data from January 1 to June 30. On July 1, you run an incremental refresh on the data with a look-back window of seven days. QuickSight queries the database asking for all data since June 24 (7 days ago), which is 7,000 records. QuickSight then deletes the data currently in SPICE from June 24 and after, and appends the newly queried data. The next day (July 2), QuickSight does the same thing, but queries from June 25 (7,000 records again), and then deletes from the existing dataset from the same date. Rather than having to ingest 180,000 records every day, it only has to ingest 7,000 records.

Use the following procedure to incrementally refresh a SPICE dataset based on a SQL data source on the Datasets page.

To incrementally refresh a SQL-based SPICE dataset

1. On the Datasets page, choose the dataset to open it.
2. On the dataset details page that opens, choose the Refresh tab and then choose Refresh now.
3. For Refresh type, choose Incremental refresh.
4. If this is your first incremental refresh on the dataset, choose Configure.
5. On the Configure incremental refresh page, do the following:
For Date column, choose a date column that you want to base the look-back window on. For Window size, enter a number for size, and then choose an amount of time that you want to look back for changes. Incrementally refreshing a dataset 220 Amazon QuickSight User Guide You can choose to refresh changes to the data that occurred within a specified number of hours, days, or weeks from now. For example, you can choose to refresh changes to the data that occurred within two weeks of the current date. 6. Choose Submit. Refreshing a dataset during data preparation Use the following procedure to refresh a SPICE dataset based on an Amazon S3 or database data source during data preparation. To refresh SPICE data during data preparation 1. On the Datasets page, choose the dataset, and then choose Edit Data Set. 2. On the dataset screen, choose Refresh now. 3. Keep the refresh type set to Full refresh. 4. (Optional) If you are refreshing an Amazon S3 dataset, choose one of the following options for S3 Manifest: • To use the same manifest file that you last provided to Amazon QuickSight, choose Existing Manifest. If you have changed the manifest file at the file location or URL that you last provided, the data returned reflects those changes. • To specify a |
amazon-quicksight-user-072 | amazon-quicksight-user.pdf | 72 | refresh SPICE data during data preparation 1. On the Datasets page, choose the dataset, and then choose Edit Data Set. 2. On the dataset screen, choose Refresh now. 3. Keep the refresh type set to Full refresh. 4. (Optional) If you are refreshing an Amazon S3 dataset, choose one of the following options for S3 Manifest: • To use the same manifest file that you last provided to Amazon QuickSight, choose Existing Manifest. If you have changed the manifest file at the file location or URL that you last provided, the data returned reflects those changes. • To specify a new manifest file by uploading it from your local network, choose Upload Manifest, and then choose Upload manifest file. For Open, choose a file, and then choose Open. • To specify a new manifest file by providing a URL, enter the URL of the manifest in Input manifest URL. You can find the manifest file URL in the Amazon S3 console by opening the context (right-click) menu for the manifest file, choosing Properties, and looking at the Link box. 5. Choose Refresh. 6. If you are refreshing an Amazon S3 dataset, choose OK, then OK again. If you are refreshing a database dataset, choose OK. Refreshing a dataset during data preparation 221 Amazon QuickSight User Guide Refreshing a dataset on a schedule Use the following procedure to schedule refreshing the data. If your dataset is based on a direct query and not stored in SPICE, you can refresh your data by opening the dataset. You can also refresh your data by refreshing the page in an analysis or dashboard. To refresh SPICE data on a schedule 1. On the Datasets page, choose the dataset to open it. 2. On the dataset details page that opens, choose the Refresh tab and then choose Add new schedule. 3. On the Create a refresh schedule screen, choose settings for your schedule: a. b. For Time zone, choose the time zone that applies to the data refresh. For Starting time, choose a date and time for the refresh to start. Use HH:MM and 24- hour format, for example 13:30. c. For Frequency, choose one of the following: • For Standard or Enterprise editions, you can choose Daily, Weekly, or Monthly. • Daily: Repeats every day. • Weekly: Repeats on the same day of each week. • Monthly: Repeats on the same day number of each month. To refresh data on the 29th, 30th or 31st day of the month, choose Last day of month from the list. • For Enterprise edition only, you can choose Hourly. This setting refreshes your dataset every hour, beginning at the time that you choose. So, if you select 1:05 as the starting time, the data refreshes at five minutes after the hour, every hour. Refreshing a dataset on a schedule 222 Amazon QuickSight User Guide If you decide to use an hourly refresh, you can't also use additional refresh schedules. To create an hourly schedule, remove any other existing schedules for that dataset. Also, remove any existing hourly schedule before you create a daily, weekly, or monthly schedule. 4. Choose Save. Scheduled dataset ingestions take place within 10 minutes of the scheduled date and time. Using the Amazon QuickSight console, you can create five schedules for each dataset. When you have created five, the Create button is turned off. Incrementally refreshing a dataset on a schedule Applies to: Enterprise Edition For SQL-based data sources, such as Amazon Redshift, Athena, PostgreSQL, or Snowflake, you can schedule incremental refreshes. 
Use the following procedure to incrementally refresh a SPICE dataset based on a SQL data source on the Datasets page. To set an incremental refresh schedule for a SQL-based SPICE dataset 1. On the Datasets page, choose the dataset to open it. 2. On the dataset details page that opens, choose the Refresh tab and then choose Add new schedule. 3. On the Create a schedule page, for Refresh type, choose Incremental refresh. Incrementally refreshing a dataset on a schedule 223 Amazon QuickSight User Guide 4. If this is your first incremental refresh for this dataset, choose Configure, and then do the following: a. b. For Date column, choose a date column that you want to base the look-back window on. For Window size, enter a number for size, and then choose an amount of time that you want to look back for changes. You can choose to refresh changes to the data that occurred within a specified number of hours, days, or weeks from now. For example, you can choose to refresh changes to the data that occurred within two weeks of the current date. 5. 6. 7. 8. c. Choose Submit. For Time zone, choose the time zone that applies to the data refresh. For Repeats, choose one of |
amazon-quicksight-user-073 | amazon-quicksight-user.pdf | 73 | column that you want to base the look-back window on. For Window size, enter a number for size, and then choose an amount of time that you want to look back for changes. You can choose to refresh changes to the data that occurred within a specified number of hours, days, or weeks from now. For example, you can choose to refresh changes to the data that occurred within two weeks of the current date. 5. 6. 7. 8. c. Choose Submit. For Time zone, choose the time zone that applies to the data refresh. For Repeats, choose one of the following: • You can choose Every 15 minutes, Every 30 minutes, Hourly, Daily, Weekly, or Monthly. • Every 15 minutes: Repeats every 15 minutes, beginning at the time you choose. So, if you select 1:05 as the starting time, the data refreshes at 1:20, then again at 1:35, and so on. • Every 30 minutes: Repeats every 30 minutes, beginning at the time you choose. So, if you select 1:05 as the starting time, the data refreshes at 1:35, then again at 2:05, and so on. • Hourly: Repeats every hour, beginning at the time you choose. So, if you select 1:05 as the starting time, the data refreshes at five minutes after the hour, every hour. • Daily: Repeats every day. • Weekly: Repeats on the same day of each week. • Monthly: Repeats on the same day number of each month. To refresh data on the 29th, 30th or 31st day of the month, choose Last day of month from the list. • If you decide to use refresh every 15 or 30 minutes, or hourly, you can't also use additional refresh schedules. To create a refresh schedule every 15 minutes, 30 minutes, or hourly, remove any other existing schedules for that dataset. Also, remove any existing minute or hourly schedule before you create a daily, weekly, or monthly schedule. For Starting, choose a date for the refresh to start. For At, specify the time that the refresh should start. Use HH:MM and 24-hour format, for example 13:30. Scheduled dataset ingestions take place within 10 minutes of the scheduled date and time. Incrementally refreshing a dataset on a schedule 224 Amazon QuickSight User Guide In some cases, something might go wrong with the incremental refresh dataset that makes you want to roll back your dataset. Or you might no longer want to refresh the dataset incrementally. If so, you can delete the scheduled refresh. To do so, choose the dataset on the Datasets page, choose Schedule a refresh, and then choose the x icon to the right of the scheduled refresh. Deleting an incremental refresh configuration starts a full refresh. As part of this full refresh, all the configurations prepared for incremental refreshes are removed. Using SPICE data in an analysis When you use stored data to create an analysis, a data import indicator appears next to the dataset list at the top of the Fields list pane. When you first open the analysis and the dataset is importing, this icon appears as a spinner. After the SPICE import is complete, the indicator displays the percentage of rows that were successfully imported. A message also appears at the top of the visualization pane to provide counts of the rows imported and skipped. Using SPICE data in an analysis 225 Amazon QuickSight User Guide If any rows were skipped, you can choose View summary in this message bar to see details about why those rows failed to import. To edit the dataset and resolve the issues that led to skipped rows, choose Edit data set. 
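The rows-imported and rows-skipped counts shown in this message bar are also returned by the SPICE ingestion APIs, which is useful if you monitor imports outside the console. The following AWS CLI sketch uses placeholder account and dataset IDs; the output fields come from the ListIngestions response.

# List SPICE ingestions for a dataset and show status and row counts for the first entry returned
aws quicksight list-ingestions \
    --aws-account-id 111122223333 \
    --data-set-id my-sales-dataset \
    --query 'Ingestions[0].{Status:IngestionStatus,RowsIngested:RowInfo.RowsIngested,RowsDropped:RowInfo.RowsDropped}'

RowsDropped corresponds to the skipped rows described in the troubleshooting section that follows.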
For more information about common causes for skipped rows, see Troubleshooting skipped row errors. If an import fails altogether, the data import indicator appears as an exclamation point icon, and an Import failed message is displayed. View SPICE ingestion history You can view the ingestion history for SPICE datasets to find out, for example, when the latest ingestion started and what its status is. The SPICE ingestion history page includes the following information: • Date and time that the ingestion started (UTC) • Status of the ingestion • Amount of time that the ingestion took • The number of aggregated rows in the dataset. • The number of rows ingested during a refresh. • Rows skipped and rows ingested (imported) successfully • The job type for the refresh: scheduled, full refresh, and so on Use the following procedure to view a dataset's SPICE ingestion history. View SPICE ingestion history 226 Amazon QuickSight User Guide To view a dataset's SPICE ingestion history 1. From the start screen, choose Datasets at left. 2. On the Datasets page, choose the dataset that you want to examine. 3. On the dataset details page that opens, choose the |
amazon-quicksight-user-074 | amazon-quicksight-user.pdf | 74 | ingestion took • The number of aggregated rows in the dataset. • The number of rows ingested during a refresh. • Rows skipped and rows ingested (imported) successfully • The job type for the refresh: scheduled, full refresh, and so on Use the following procedure to view a dataset's SPICE ingestion history. View SPICE ingestion history 226 Amazon QuickSight User Guide To view a dataset's SPICE ingestion history 1. From the start screen, choose Datasets at left. 2. On the Datasets page, choose the dataset that you want to examine. 3. On the dataset details page that opens, choose the Refresh tab. SPICE ingestion history is shown at bottom. 4. (Optional) Choose a time frame to filter the entries from the last hour to the last 90 days. View SPICE ingestion history 227 Amazon QuickSight User Guide 5. (Optional) Choose a specific job status to filter the entries, for example Running or Completed. Otherwise, you can view all entries by choosing All. Troubleshooting skipped row errors When you import data, Amazon QuickSight previews a portion of your data. If it can't interpret a row for any reason, QuickSight skips the row. In some cases, the import will fail. When this happens, QuickSight returns an error message that explains the failure. Fortunately, there's a limited number of things that can go wrong. Some issues can be avoided by being aware of examples like the following: • Make sure that there is no inconsistency between the field data type and the field data, for example occasional string data in a field with a numeric data type. Here are a few examples that can be difficult to detect when scanning the contents of a table: • '' – Using an empty string to indicate a missing value • 'NULL' – Using the word "null" to indicate a missing value • $1000 – Including a dollar sign in a currency value turns it into a string • 'O'Brien' – Using punctuation to mark a string that itself contains the same punctuation. However, this type of error isn't always this easy to find, especially if you have a lot of data, or if your data is typed in by hand. For example, some customer service or sales applications involve entering information verbally provided by customers. The person who originally typed in the data might have put it in the wrong field. They might add, or forget to add, a character or digit. For example, they might enter a date of "0/10/12020" or enter someone's gender in a field meant for age. Troubleshooting skipped row errors 228 Amazon QuickSight User Guide • Make sure that your imported file is correctly processed with or without a header. If there is a header row, make sure that you choose the Contains header upload option. • Make sure that the data doesn't exceed one or more of the Data source quotas. • Make sure that the data is compatible with the Supported data types and values. • Make sure that your calculated fields contain data that works with the calculation, rather than being incompatible with or excluded by the function in the calculated field. For example, if you have a calculated field in your dataset that uses parseDate, QuickSight skips rows where that field doesn't contain a valid date. QuickSight provides a detailed list of the errors that occur when the SPICE engine attempts to ingest data. When a saved dataset reports skipped rows, you can view the errors so you can take action to fix the issues. To view errors for rows that were skipped during SPICE ingestion (data import) 1. 
On the Datasets page, choose the problematic dataset to open it. 2. On the dataset details page that opens, choose the Refresh tab. SPICE ingestion history is shown at bottom. 3. For the ingestion with the error, choose View error summary. This link is located under the Status column. 4. Examine the File import log that opens. It displays the following sections: • Summary – Provides a percentage score of how many rows were skipped out of the total number of rows in the import. For example, if there are 864 rows skipped out of a total of 1,728, the score is 50.00%. • Skipped Rows – Provides the row count, field name, and error message for each set of similar skipped rows. • Troubleshooting – Provides a link to download a file that contains error information. 5. Under Troubleshooting, choose Download error rows file. The error file has a row for each error. The file is named error-report_123_fe8.csv, where 123_fe8 is replaced with a unique identifying string. The file contains the following columns: Troubleshooting skipped row errors 229 Amazon QuickSight User Guide • ERROR_TYPE – The type or error code for the error that occurred when |
amazon-quicksight-user-075 | amazon-quicksight-user.pdf | 75 | of 1,728, the score is 50.00%. • Skipped Rows – Provides the row count, field name, and error message for each set of similar skipped rows. • Troubleshooting – Provides a link to download a file that contains error information. 5. Under Troubleshooting, choose Download error rows file. The error file has a row for each error. The file is named error-report_123_fe8.csv, where 123_fe8 is replaced with a unique identifying string. The file contains the following columns: Troubleshooting skipped row errors 229 Amazon QuickSight User Guide • ERROR_TYPE – The type or error code for the error that occurred when importing this row. You can look up this error in the SPICE ingestion error codes section that follows this procedure. • COLUMN_NAME – The name of the column in your data that caused the error. • All the columns from your imported row – The remaining columns duplicate the entire row of data. If a row has more than one error, it can appear multiple times in this file. 6. Choose Edit data set to make changes to your dataset. You can filter the data, omit fields, change data types, adjust existing calculated fields, and add calculated fields that validate the data. 7. After you've made changes indicated by the error codes, import the data again. If more SPICE ingestion errors appear in the log, step through this procedure again to fix all remaining errors. Tip If you can't solve the data issues in a reasonable amount of time by using the dataset editor, consult the administrators or developers who own the data. In the long run, it's more cost-effective to cleanse the data closer to its source, rather than adding exception processing while you're preparing the data for analysis. By fixing it at the source, you avoid a situation where multiple people fix the errors in different ways, resulting in different reporting results later on. To practice troubleshooting skipped rows 1. Download CSV files for troubleshooting skipped rows.zip. 2. Extract the files into a folder that you can use to upload the sample .csv file into QuickSight. The zip file contains the following two text files: • sample dataset - data ingestion error.csv – A sample .csv file that contains issues that cause skipped rows. You can try to import the file yourself to see how the error process works. • sample data ingestion error file – A sample error file generated during SPICE ingestion while importing the sample .csv file into QuickSight. 3. Import the data by following these steps: Troubleshooting skipped row errors 230 Amazon QuickSight User Guide a. Choose Datasets, New dataset. b. Choose Upload a file. c. Find and choose the file named sample dataset - data ingestion error.csv. d. Choose Upload a file, Edit settings and prepare data. e. Choose Save to exit. 4. Choose your dataset to view its information, then choose View error summary. Examine the errors and the data to help you resolve the issues. SPICE ingestion error codes The following list of errors codes and descriptions can help you understand and troubleshoot issues with data ingestion into SPICE. Error codes for skipped rows The following list of errors codes and descriptions can help you understand and troubleshoot issues with skipped rows. ARITHMETIC_EXCEPTION – An arithmetic exception occurred while processing a value. ENCODING_EXCEPTION – An unknown exception occurred while converting and encoding data to SPICE. 
OPENSEARCH_CURSOR_NOT_ENABLED – The OpenSearch domain doesn't have SQL cursors enabled ("opendistro.sql.cursor.enabled" : "true"). For more information, see Authorizing connections to Amazon OpenSearch Service. INCORRECT_FIELD_COUNT – One or more rows have too many fields. Make sure that the number of fields in each row matches the number of fields defined in the schema. INCORRECT_SAGEMAKER_OUTPUT_FIELD_COUNT – The SageMaker AI output has an unexpected number of fields. INDEX_OUT_OF_BOUNDS – The system requested an index that isn't valid for the array or list being processed. MALFORMED_DATE – A value in a field can't be transformed to a valid date. For example, if you try to convert a field that contains a value like "sale date" or "month-1", the action generates a SPICE ingestion error codes 231 Amazon QuickSight User Guide malformed date error. To fix this error, remove nondate values from your data source. Check that you aren't importing a file with a column header mixed into the data. If your string contains a date or time that doesn't convert, see Using unsupported or custom dates. MISSING_SAGEMAKER_OUTPUT_FIELD – A field in the SageMaker AI output is unexpectedly empty. NUMBER_BITWIDTH_TOO_LARGE – A numeric value exceeds the length supported in SPICE. For example, your numeric value has more than 19 digits, which is the length of a bigint data type. For a long numeric sequence that isn't a mathematical value, use a string data type. NUMBER_PARSE_FAILURE – A value in a numeric |
amazon-quicksight-user-076 | amazon-quicksight-user.pdf | 76 | from your data source. Check that you aren't importing a file with a column header mixed into the data. If your string contains a date or time that doesn't convert, see Using unsupported or custom dates. MISSING_SAGEMAKER_OUTPUT_FIELD – A field in the SageMaker AI output is unexpectedly empty. NUMBER_BITWIDTH_TOO_LARGE – A numeric value exceeds the length supported in SPICE. For example, your numeric value has more than 19 digits, which is the length of a bigint data type. For a long numeric sequence that isn't a mathematical value, use a string data type. NUMBER_PARSE_FAILURE – A value in a numeric field is not a number. For example, a field with a data type of int contains a string or a float. SAGEMAKER_OUTPUT_COLUMN_TYPE_MISMATCH – The data type defined in the SageMaker AI schema doesn't match the data type received from SageMaker AI. STRING_TRUNCATION – A string is being truncated by SPICE. Strings are truncated where the length of the string exceeds the SPICE quota. For more information about SPICE, see Importing data into SPICE. For more information about quotas, see Service Quotas. UNDEFINED – An unknown error occurred while ingesting data. UNSUPPORTED_DATE_VALUE – A date field contains a date that is in a supported format but is not in the supported range of dates, for example "12/31/1399" or "01/01/10000". For more information, see Using unsupported or custom dates. Error codes during data import For imports and data refresh jobs that fail, QuickSight provides an error code indicating what caused the failure. The following list of errors codes and descriptions can help you understand and troubleshoot issues with data ingestion into SPICE. ACCOUNT_CAPACITY_LIMIT_EXCEEDED – This data exceeds your current SPICE capacity. Purchase more SPICE capacity or clean up existing SPICE data and then retry this ingestion. CONNECTION_FAILURE – Amazon QuickSight can't connect to your data source. Check the data source connection settings and try again. CUSTOMER_ERROR – There was a problem parsing the data. If this persists, contact Amazon QuickSight technical support. Data import errors 232 Amazon QuickSight User Guide DATA_SET_DELETED – The data source or dataset was deleted or became unavailable during ingestion. DATA_SET_SIZE_LIMIT_EXCEEDED – This dataset exceeds the maximum allowable SPICE dataset size. Use filters to reduce the dataset size and try again. For information on SPICE quotas, see Data source quotas. DATA_SOURCE_AUTH_FAILED – Data source authentication failed. Check your credentials and use the Edit data source option to replace expired credentials. DATA_SOURCE_CONNECTION_FAILED – Data source connection failed. Check the URL and try again. If this error persists, contact your data source administrator for assistance. DATA_SOURCE_NOT_FOUND – No data source found. Check your Amazon QuickSight data sources. DATA_TOLERANCE_EXCEPTION – There are too many invalid rows. Amazon QuickSight has reached the quota of rows that it can skip and still continue ingesting. Check your data and try again. FAILURE_TO_ASSUME_ROLE – Amazon QuickSight couldn't assume the correct AWS Identity and Access Management (IAM) role. Verify the policies for Amazon QuickSight-service-role in the IAM console. FAILURE_TO_PROCESS_JSON_FILE – Amazon QuickSight couldn't parse a manifest file as valid JSON. IAM_ROLE_NOT_AVAILABLE – Amazon QuickSight doesn't have permission to access the data source. 
To manage Amazon QuickSight permissions on AWS resources, go to the Security and Permissions page under the Manage Amazon QuickSight option as an administrator. INGESTION_CANCELED – The ingestion was canceled by the user. INGESTION_SUPERSEDED – This ingestion has been superseded by another workflow. This happens when a new ingestion is created while another one is still in progress. Avoid manually editing the dataset multiple times in a short period, because each manual edit creates a new ingestion which will supersede and end the previous ingestion. INTERNAL_SERVICE_ERROR – An internal service error occurred. INVALID_DATA_SOURCE_CONFIG – Invalid values appeared in connection settings. Check your connection details and try again. Data import errors 233 Amazon QuickSight User Guide INVALID_DATAPREP_SYNTAX – Your calculated field expression contains invalid syntax. Correct the syntax and try again. INVALID_DATE_FORMAT – An invalid date format appeared. IOT_DATA_SET_FILE_EMPTY – No AWS IoT Analytics data was found. Check your account and try again. IOT_FILE_NOT_FOUND – An indicated AWS IoT Analytics file wasn't found. Check your account and try again. OAUTH_TOKEN_FAILURE – Credentials to the data source have expired. Renew your credentials and retry this ingestion. PASSWORD_AUTHENTICATION_FAILURE – Incorrect credentials appeared for a data source. Update your data source credentials and retry this ingestion. PERMISSION_DENIED – Access to the requested resources was denied by the data source. Request permissions from your database administrator or ensure proper permission has been granted to Amazon QuickSight before retrying. QUERY_TIMEOUT – A query to the data source timed out waiting for a response. Check your data source logs and try again. ROW_SIZE_LIMIT_EXCEEDED – The row size quota exceeded the maximum. S3_FILE_INACCESSIBLE – Couldn't connect to an S3 bucket. Make sure that you grant Amazon QuickSight and users necessary permissions before |
amazon-quicksight-user-077 | amazon-quicksight-user.pdf | 77 | PASSWORD_AUTHENTICATION_FAILURE – Incorrect credentials appeared for a data source. Update your data source credentials and retry this ingestion. PERMISSION_DENIED – Access to the requested resources was denied by the data source. Request permissions from your database administrator or ensure proper permission has been granted to Amazon QuickSight before retrying. QUERY_TIMEOUT – A query to the data source timed out waiting for a response. Check your data source logs and try again. ROW_SIZE_LIMIT_EXCEEDED – The row size quota exceeded the maximum. S3_FILE_INACCESSIBLE – Couldn't connect to an S3 bucket. Make sure that you grant Amazon QuickSight and users necessary permissions before you connect to the S3 bucket. S3_MANIFEST_ERROR – Couldn't connect to S3 data. Make sure that your S3 manifest file is valid. Also verify access to the S3 data. Both Amazon QuickSight and the Amazon QuickSight user need permissions to connect to the S3 data. S3_UPLOADED_FILE_DELETED – The file or files for the ingestion were deleted (between ingestions). Check your S3 bucket and try again. SOURCE_API_LIMIT_EXCEEDED_FAILURE – This ingestion exceeds the API quota for this data source. Contact your data source administrator for assistance. SOURCE_RESOURCE_LIMIT_EXCEEDED – A SQL query exceeds the resource quota of the data source. Examples of resources involved can include the concurrent query quota, the connection quota, and physical server resources. Contact your data source administrator for assistance. Data import errors 234 Amazon QuickSight User Guide SPICE_TABLE_NOT_FOUND – An Amazon QuickSight data source or dataset was deleted or became unavailable during ingestion. Check your dataset in Amazon QuickSight and try again. For more information, see Troubleshooting skipped row errors. SQL_EXCEPTION – A general SQL error occurred. This error can be caused by query timeouts, resource constraints, unexpected data definition language (DDL) changes before or during a query, and other database errors. Check your database settings and your query, and try again. SQL_INVALID_PARAMETER_VALUE – An invalid SQL parameter appeared. Check your SQL and try again. SQL_NUMERIC_OVERFLOW – Amazon QuickSight encountered an out-of-range numeric exception. Check related values and calculated columns for overflows, and try again. SQL_SCHEMA_MISMATCH_ERROR – The data source schema doesn't match the Amazon QuickSight dataset. Update your Amazon QuickSight dataset definition. SQL_TABLE_NOT_FOUND – Amazon QuickSight can't find the table in the data source. Verify the table specified in the dataset or custom SQL and try again. SSL_CERTIFICATE_VALIDATION_FAILURE – Amazon QuickSight can't validate the Secure Sockets Layer (SSL) certificate on your database server. Check the SSL status on that server with your database administrator and try again. UNRESOLVABLE_HOST – Amazon QuickSight can't resolve the host name of the data source. Verify the host name of the data source and try again. UNROUTABLE_HOST – Amazon QuickSight can't reach your data source because it's inside a private network. Ensure that your private VPC connection is configured correctly in Enterprise Edition, or allow Amazon QuickSight IP address ranges to allow connectivity for Standard Edition. Updating files in a dataset To get the latest version of files, you can update files in your dataset. 
You can update these types of files:
• Comma-delimited (CSV) and tab-delimited (TSV) text files
• Extended and common log format files (ELF and CLF)
• Flat or semistructured data files (JSON)
• Microsoft Excel (XLSX) files
Before updating a file, make sure that the new file has the same fields in the same order as the original file currently in the dataset. If there are field (column) discrepancies between the two files, an error occurs and you need to fix the discrepancies before attempting to update again. You can do this by editing the new file to match the original. Note that if you want to add new fields, you can append them after the original fields in the file. For example, in a Microsoft Excel spreadsheet, you can append new fields to the right of the original fields.
To update a file in a dataset
1. In QuickSight, choose Datasets at left.
2. On the Datasets page, choose the dataset that you want to update, and then choose Edit dataset.
3. On the data preparation page that opens, choose the drop-down list for the file that you want to update, and then choose Update file.
amazon-quicksight-user-078 | amazon-quicksight-user.pdf | 78 | 4. On the Update file page that opens, choose Upload file, and then navigate to a file. QuickSight scans the file. 5. If the file is a Microsoft Excel file, choose the sheet that you want on the Choose your sheet page that opens, and then choose Select. 6. Choose Confirm file update on the following page. A preview of some of the sheet columns is shown for your reference. A message that the file updated successfully appears at top right and the table preview updates to show the new file data. Updating files in a dataset 236 Amazon QuickSight User Guide Preparing data in Amazon QuickSight Datasets store any data preparation you have done on that data, so that you can reuse that prepared data in multiple analyses. Data preparation provides options such as adding calculated fields, applying filters, and changing field names or data types. If you are basing the data source on a SQL database, you can also use data preparation to join tables. Or you can enter a SQL query if you want to work with data from more than a single table. If you want to transform the data from a data source before using it in Amazon QuickSight, you can prepare it to suit your needs. You then save this preparation as part of the dataset. You can prepare a dataset when you create it, or by editing it later. For more information about creating a new dataset and preparing it, see Creating datasets. For more information about opening an existing dataset for data preparation, see Editing datasets. Use the following topics to learn more about data preparation. Topics • Describing data • Choosing file upload settings • Preparing data fields for analysis in Amazon QuickSight • Adding calculations • Previewing tables in a dataset • Joining data • Filtering data in Amazon QuickSight • Using SQL to customize data • Adding geospatial data • Using unsupported or custom dates • Adding a unique key to an Amazon QuickSight dataset • Integrating Amazon SageMaker AI models with Amazon QuickSight • Preparing dataset examples 237 Amazon QuickSight Describing data User Guide Using Amazon QuickSight, you can add information, or metadata, about the columns (fields) in your datasets. By adding metadata, you make the dataset self-explanatory and easier to reuse. Doing this can help data curators and their customers know where the data came from and what it means. It's a way of communicating to the people who use your dataset or combine it with other datasets to build dashboards. Metadata is especially important for information that is shared between organizations. After you add metadata to a dataset, field descriptions become available to anyone who is using the dataset. A column description appears when someone who is actively browsing the Fields list pauses on a field name. Column descriptions are visible to people who are editing a dataset or an analysis, but not to someone who is viewing a dashboard. Descriptions aren't formatted. You are able to enter line feeds and formatting marks and these are preserved by the editor. However, the description tooltip that displays is only able to show words, numbers, and symbols—but not formatting. To edit a description to a column or field 1. On the QuickSight start page, choose Datasets at left. 2. On the Datasets page, choose the dataset that you want to work on. 3. On the dataset details page that opens, choose Edit dataset at upper right. 4. 
On the dataset page that opens, choose a column in the table preview at bottom or in the field list at left. 5. To add or change the description, do one of the following: • At the bottom of the screen, open the settings for the field from the pencil icon next to the field's name. • In the field list, open the settings for the field from the menu next to the field's name. Then choose Edit name & description from the context menu. 6. Add or change the description for the field. To delete an existing description, delete all the text in the Description box. 7. (Optional) For Name, if you want to change the name of the field, you can enter a new one here. Describing data 238 Amazon QuickSight User Guide 8. Choose Apply to save your changes. Choose cancel to exit. Choosing file upload settings If you are using a file data source, confirm the upload settings, and correct them if necessary. Important If it's necessary to change upload settings, make these changes before you make any other changes to the dataset. Changing upload settings causes Amazon QuickSight to reimport the file. This process overwrites any changes you have made so far. Changing text file upload settings Text file upload settings include the file header indicator, |
amazon-quicksight-user-079 | amazon-quicksight-user.pdf | 79 | you can enter a new one here. Describing data 238 Amazon QuickSight User Guide 8. Choose Apply to save your changes. Choose cancel to exit. Choosing file upload settings If you are using a file data source, confirm the upload settings, and correct them if necessary. Important If it's necessary to change upload settings, make these changes before you make any other changes to the dataset. Changing upload settings causes Amazon QuickSight to reimport the file. This process overwrites any changes you have made so far. Changing text file upload settings Text file upload settings include the file header indicator, file format, text delimiter, text qualifier, and start row. If you are working with an Amazon S3 data source, the upload settings you select are applied to all files you choose to use in this dataset. Use the following procedure to change text file upload settings. 1. On the data preparation page, open the Upload Settings pane by choosing the expand icon. 2. 3. 4. 5. In File format, choose the file format type. If you chose the custom separated (CUSTOM) format, specify the separating character in Delimiter. If the file doesn't contain a header row, deselect the Files include headers check box. If you want to start from a row other than the first row, specify the row number in Start from row. If the Files include headers check box is selected, the new starting row is treated as the header row. If the Files include headers check box is not selected, the new starting row is treated as the first data row. 6. In Text qualifier, choose the text qualifier, either single quotes (') or double quotes ("). Changing Microsoft Excel file upload settings Microsoft Excel file upload settings include the range header indicator and whole worksheet selector. Choosing file upload settings 239 Amazon QuickSight User Guide Use the following procedure to change Microsoft Excel file upload settings. 1. On the data preparation page, open the Upload Settings pane by choosing the expand icon. 2. 3. Leave Upload whole sheet selected. If the file doesn't contain a header row, deselect the Range contains headers check box. Preparing data fields for analysis in Amazon QuickSight Before you start analyzing and visualizing your data, you can prepare the fields (columns) in your dataset for analysis. You can edit field names and descriptions, change the data type for fields, set up drill-down hierarchies for fields, and more. Use the following topics to prepare fields in your dataset. Topics • Editing field names and descriptions • Setting fields as a dimensions or measures • Changing a field data type • Adding drill-downs to visual data in Amazon QuickSight • Selecting fields • Organizing fields into folders in Amazon QuickSight • Mapping and joining fields Editing field names and descriptions You can change any field name and description from what is provided by the data source. If you change the name of a field used in a calculated field, make sure also to change it in the calculated field function. Otherwise, the function fails. To change a field name or description 1. In the Fields pane of the data prep page, choose the three-dot icon on the field that you want to change. Then choose Edit name & description. Preparing data fields 240 Amazon QuickSight User Guide 2. Enter the new name or description that you want to change, and choose Apply. You can also change the name and description of a field on the data prep page. 
To do this, select the column header of the field that you want to change in the Dataset table in that page's lower half. Then make any changes there.
Setting fields as dimensions or measures
In the Field list pane, dimension fields have blue icons and measure fields have green icons. Dimensions are text or date fields that can be items, like products, or attributes that are related to measures. You can use dimensions to partition these items or attributes, like sales date for sales figures. Measures are numeric values that you use for measurement, comparison, and aggregation.
In some cases, Amazon QuickSight interprets as a measure a field that you want to use as a dimension (or the other way around). If so, you can change the setting for that field. Changing a field's measure or dimension setting changes it for all visuals in the analysis that use that dataset. However, it doesn't change it in the dataset itself.
Changing a field's dimension or measure setting
Use the following procedure to change a field's dimension or measure setting.
To change a field's dimension or measure setting
1. In the Field list pane, hover over the field that you want to change. 2.
amazon-quicksight-user-080 | amazon-quicksight-user.pdf | 80 | a field as a measure that you want to use it as a dimension (or the other way around). If so, you can change the setting for that field. Changing a field's measure or dimension setting changes it for all visuals in the analysis that use that dataset. However, it doesn't change it in the dataset itself. Changing a field's dimension or measure setting Use the following procedure to change a field's dimension or measure setting To change a field's dimension or measure setting 1. In the Field list pane, hover over the field that you want to change. 2. Choose the selector icon to the right of the field name, and then choose Convert to dimension or Convert to measure as appropriate. Setting fields as a dimensions or measures 242 Amazon QuickSight User Guide Changing a field data type When Amazon QuickSight retrieves data, it assigns each field a data type based on the data in the field. The possible data types are as follows: • Date – The date data type is used for date data in a supported format. For information about the date formats Amazon QuickSight supports, see Data source quotas. • Decimal – The decimal data type is used for numeric data that requires one or more decimal places of precision, for example 18.23. The decimal data type supports values with up to four decimal places to the right of the decimal point. Values that have a higher scale than this are truncated to the fourth decimal place in two cases. One is when these values are displayed in data preparation or analyses, and one is when these values are imported into QuickSight. For example, 13.00049 is truncated to 13.0004. • Geospatial – The geospatial data type is used for geospatial data, for example longitude and latitude, or cities and countries. • Integer – The int data type is used for numeric data that only contains integers, for example 39. • String – The string data type is used for nondate alphanumeric data. QuickSight reads a small sample of rows in the column to determine the data type. The data type that occurs most in the small sample size is the suggested type. In some cases, there might be blank values (treated as strings by QuickSight) in a column that contains mostly numbers. In these cases, it might be that the String data type is the most frequent type in the sample set of rows. You can manually modify the data type of the column to make it integer. Use the following procedures to learn how. Changing a field data type during data prep During data preparation, you can change the data type of any field from the data source. On the Change data type menu, you can change calculated fields that don't include aggregations to geospatial types. You can make other changes to the data type of a calculated field by modifying its expression directly. Amazon QuickSight converts the field data according to the data type that you choose. Rows that contain data that is incompatible with that data type are skipped. For example, suppose that you convert the following field from String to Integer. 10020 36803 14267a Changing a field data type 243 Amazon QuickSight 98457 78216b User Guide All records containing alphabetic characters in that field are skipped, as shown following. 10020 36803 98457 If you have a database dataset with fields whose data types aren't supported by Amazon QuickSight, use a SQL query during data preparation. Then use CAST or CONVERT commands (depending on what is supported by the source database) to change the field data types. 
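For example, a custom SQL query along the following lines casts columns to types that QuickSight supports before the data is imported. The table and column names are illustrative, and the exact CAST or CONVERT syntax depends on your database engine.

SELECT
    CAST(zip_code AS VARCHAR(10))       AS zip_code,     -- keep identifier codes as strings, not numbers
    CAST(order_total AS DECIMAL(18, 4)) AS order_total,  -- QuickSight decimals keep up to four decimal places
    order_date
FROM sales.orders

Because the cast happens in the source query, values are converted before import instead of being skipped during ingestion.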
For more information about adding a SQL query during data preparation, see Using SQL to customize data. For more information about how different source data types are interpreted by Amazon QuickSight, see Supported data types from external data sources. You might have numeric fields that act as dimensions rather than metrics, for example ZIP codes and most ID numbers. In these cases, it's helpful to give them a string data type during data preparation. Doing this lets Amazon QuickSight understand that they are not useful for performing mathematical calculations and can only be aggregated with the Count function. For more information about how Amazon QuickSight uses dimensions and measures, see Setting fields as a dimensions or measures. In SPICE, numbers converted from numeric into an integer are truncated by default. If you want to round your numbers instead, you can create a calculated field using the round function. To see whether numbers are rounded or truncated before they are ingested into SPICE, check your database engine. To change a field data type during data prep 1. From the QuickSight start page, choose Datasets, choose the dataset that you want, and then choose Edit dataset. |
amazon-quicksight-user-081 | amazon-quicksight-user.pdf | 81 | with the Count function. For more information about how Amazon QuickSight uses dimensions and measures, see Setting fields as a dimensions or measures. In SPICE, numbers converted from numeric into an integer are truncated by default. If you want to round your numbers instead, you can create a calculated field using the round function. To see whether numbers are rounded or truncated before they are ingested into SPICE, check your database engine. To change a field data type during data prep 1. From the QuickSight start page, choose Datasets, choose the dataset that you want, and then choose Edit dataset. 2. In the data preview pane, choose the data type icon under the field you want to change. 3. Choose the target data type. Only data types other than the one currently in use are listed. Changing a field data type in an analysis You can use the Field list pane, visual field wells, or on-visual editors to change numeric field data types within the context of an analysis. Numeric fields default to displaying as numbers, but you Changing a field data type 244 Amazon QuickSight User Guide can choose to have them display as currency or as a percentage instead. You can't change the data types for string or date fields. Changing a field's data type in an analysis changes it for all visuals in the analysis that use that dataset. However, it doesn't change it in the dataset itself. Note If you are working in a pivot table visual, applying a table calculation changes the data type of the cell values in some cases. This type of change occurs if the data type doesn't make sense with the applied calculation. For example, suppose that you apply the Rank function to a numeric field that you modified to use a currency data type. In this case, the cell values display as numbers rather than currency. Similarly, if you apply the Percent difference function instead, the cell values display as percentages rather than currency. To change a field's data type 1. Choose one of the following options: • In the Field list pane, hover over the numeric field that you want to change. Then choose the selector icon to the right of the field name. • On any visual that contains an on-visual editor associated with the numeric field that you want to change, choose that on-visual editor. • Expand the Field wells pane, and then choose the field well associated with the numeric field that you want to change. 2. Choose Show as, and then choose Number, Currency, or Percent. Adding drill-downs to visual data in Amazon QuickSight All visual types except pivot tables offer the ability to create a hierarchy of fields for a visual element. The hierarchy lets you drill down to see data at different levels of the hierarchy. For example, you can associate the country, state, and city fields with the x-axis on a bar chart. Then, you can drill down or up to see data at each of those levels. As you drill down each level, the data displayed is refined by the value in the field you drill down on. For example, if you drill down on the state of California, you see data on all of the cities in California. Adding drill-downs 245 Amazon QuickSight User Guide The field wells you can use to create drill-downs varies by visual type. Refer to the topic on each visual type to learn more about its drill-down support. Drill-down functionality is added automatically for dates when you associate a date field with the drill-down field well of a visual. 
In this case, you can always drill up and down through the levels of date granularity. Drill-down functionality is also added automatically for geospatial groupings, after you define these in the dataset.
Use the following table to identify the field wells/on-visual editors that support drill-down for each visual type.
Visual type – Field well or on-visual editor
• Bar charts (all horizontal) – Y axis and Group/Color
• Bar charts (all vertical) – X axis and Group/Color
• Combo charts (all) – X axis and Group/Color
• Geospatial charts – Geospatial and Color
• Heat map – Rows and Columns
• KPIs – Trend Group
• Line charts (all) – X axis and Color
• Pie chart – Group/Color
• Scatter plot – Group/Color
• Tree map – Group by
Important
Drill-downs are not supported for tables or pivot tables.
amazon-quicksight-user-082 | amazon-quicksight-user.pdf | 82 | Color Rows and Columns Trend Group X axis and Color Group/Color Group/Color Group by Drill-downs are not suppoted for tables or pivot tables. Adding drill-downs 246 Amazon QuickSight Adding a drill-down User Guide Use the following procedure to add drill-down levels to a visual. To add drill-down levels to a visual 1. On the analysis page, choose the visual that you want to add drill-downs to. 2. Drag a field item into a Field well. 3. If your dataset has a defined hierarchy, you can drag the entire hierarchy into the field well as one. An example is geospatial or coordinate data. In this case, you don't need to follow the remaining steps. If you don't have a predefined hierarchy, you can create one in your analysis, as described in the remaining steps. 4. Drag a field that you want to use in the drill-down hierarchy to an appropriate field well, depending on the visual type. Make sure that the label for the dragged field says Add drill- down layer. Position the dragged field above or below the existing field based on where you want it to be in the hierarchy you're creating. Adding drill-downs 247 Amazon QuickSight User Guide Adding drill-downs 248 Amazon QuickSight User Guide 5. Continue until you have added all of the levels of hierarchy that you want. To remove a field from the hierarchy, choose the field, and then choose Remove. 6. To drill down or up to see data at a different level of the hierarchy, choose an element on the visual (like a line or bar), and then choose Drill down to <lower level> or Drill up to <higher level>. In this example, from the car-make level you can drill down to car-model to see data at that level. If you drill down to car-model from the Ford car-make, you see only car- models in that car-make. After you drill down to the car-model level, you can then drill down further to see make- year data, or go back up to car-make. If you drill down to make-year from the bar representing Ranger, you see only years for that model of car. Selecting fields When you prepare data, you can select one or more fields to perform an action on them, such as excluding them or adding them to a folder. To select one or more fields in the data preparation pane, click or tap the field or fields in the Fields pane at left. You can then choose the field menu (the three dots) to the right of the field name and choose an action to take. The action is performed on all selected fields. You can select or deselect all fields at once by choosing either All or None at the top of the Fields pane. Selecting fields 249 Amazon QuickSight User Guide If you edit a dataset and exclude a field that is used in a visual, that visual breaks. You can fix it the next time you open that analysis. Searching for fields If you have a long field list in the Fields pane, you can search to locate a specific field by entering a search term for Search fields. Any field whose name contains the search term is shown. Search is case-insensitive and wildcards are not supported. Choose the cancel icon (X) to the right of the search box to return to viewing all fields. Organizing fields into folders in Amazon QuickSight When prepping your data in Amazon QuickSight, you can use folders to organize your fields for multiple authors across your enterprise. Arranging fields into folders and subfolders can make it easier for authors to find and understand fields in your dataset. You can create folders while preparing your dataset, or when editing a dataset. 
For more information about creating a new dataset and preparing it, see Creating datasets. For more information about opening an existing dataset for data preparation, see Editing datasets. Organizing fields into folders 250 Amazon QuickSight User Guide While performing an analysis, authors can expand and collapse folders, search for specific fields within folders, and see your descriptions of folders on the folder menu. Folders appear at the top of the Fields pane in alphabetical order. Creating a folder Use the following procedure to create a new folder in the Fields pane. To create a new folder 1. On the data preparation page, in the Fields pane, select the field menu for the fields that you want to place in a folder and choose Add to folder. To select more than one field at a time, press the Ctrl key while you select (Command key on Mac). 2. On the Add to folder page that appears, choose Create a new folder and enter a name for the new folder. 3. Choose Apply. The |
amazon-quicksight-user-083 | amazon-quicksight-user.pdf | 83 | in alphabetical order. Creating a folder Use the following procedure to create a new folder in the Fields pane. To create a new folder 1. On the data preparation page, in the Fields pane, select the field menu for the fields that you want to place in a folder and choose Add to folder. To select more than one field at a time, press the Ctrl key while you select (Command key on Mac). 2. On the Add to folder page that appears, choose Create a new folder and enter a name for the new folder. 3. Choose Apply. The folder appears at the top of the Fields pane with the fields that you chose inside it. Fields inside folders are arranged in alphabetical order. Organizing fields into folders 251 Amazon QuickSight Creating a subfolder User Guide To further organize your data fields in the Fields pane, you can create subfolders within parent folders. To create a subfolder 1. On the data preparation page, in the Fields pane, select the field menu for a field already in a folder and choose Move to folder. 2. On the Move to folder page that appears, choose Create a new folder and enter a name for the new folder. 3. Choose Apply. The subfolder appears within the parent folder at the top of the list of fields. Subfolders are arranged in alphabetical order. Adding fields to an existing Folder Use the following procedure to add fields to an existing folder in the Fields pane. To add one or more fields to a folder 1. On the data preparation page, in the Fields pane, select the fields that you want to add to a folder. To select more than one field at a time, press the Ctrl key while you select (Command key on Mac). 2. On the field menu, choose Add to folder. 3. On the Add to folder page that appears, choose a folder for Existing folder. 4. Choose Apply. The field or fields are added to the folder. Moving fields between folders Use the following procedure to move fields between folders in the Fields pane. Organizing fields into folders 252 Amazon QuickSight To move fields between folders User Guide 1. On the data preparation page, in the Fields pane, select the fields that you want to move to another folder. To select more than one field at a time, press the Ctrl key while you select (Command key on Mac). 2. On the field menu, choose Move to folder. 3. On the Move to folder page that appears, choose a folder for Existing folder. 4. Choose Apply. Removing fields from a folder Use the following procedure to remove fields from a folder in the Fields pane. Removing a field from a folder doesn't delete the field. To remove fields from a folder 1. On the data preparation page, in the Fields pane, select the fields that you want to remove. 2. On the field menu, choose Remove from folder. The fields that you selected are removed from the folder and placed back in the list of fields in alphabetical order. Editing a folder name and adding a folder description You can edit the name or add a description of a folder to provide context about the data fields inside it. The folder name appears in the Fields pane. While performing an analysis, authors can read your folder's description when they select the folder menu in the Fields pane. To edit a folder name or edit or add a description for a folder 1. On the data preparation page, in the Fields pane, select the folder menu for the folder that you want to edit and choose Edit name & description. 2. On the Edit folder page that appears, do the following: • • For Name, enter a name for the folder. 
• For Description, enter a description of the folder.
3. Choose Apply.
Moving folders
You can move folders and subfolders to new or existing folders in the Fields pane.
To move a folder
1. On the data preparation page, in the Fields pane, choose Move folder on the folder menu.
2. On the Move folder page that appears, do one of the following:
• Choose Create a new folder and enter a name for the folder.
• For Existing folder, choose a folder.
3. Choose Apply.
The folder appears within the folder that you chose in the Fields pane.
Removing folders from the fields pane
Use the following procedure to remove a folder from the Fields pane.
To remove a folder
1. On the data preparation page, in the Fields pane, choose Remove folder on the folder menu.
2. On the Remove folder? page that appears, choose Remove.
The folder is removed
amazon-quicksight-user-084 | amazon-quicksight-user.pdf | 84 | the Move folder page that appears, do one of the following: • • Choose Create a new folder and enter a name for the folder. For Existing folder, choose a folder. 3. Choose Apply. The folder appears within the folder that you chose in the Fields pane. Removing folders from the fields pane Use the following procedure to remove a folder from the Fields pane. To remove a folder 1. On the data preparation page, in the Fields pane, choose Remove folder on the folder menu. 2. On the Remove folder? page that appears, choose Remove. The folder is removed from the Fields pane. Any fields that were in the folder are placed back in the list of fields in alphabetical order. Removing folders doesn't exclude fields from view or delete fields from the dataset. Mapping and joining fields When you are using different datasets together in Amazon QuickSight, you can simplify the process of mapping fields or joining tables during the data preparation stage. You should already be verifying that your fields have the correct data type and an appropriate field name. However, if you already know which datasets are going to be used together, you can take a couple of extra steps to make your work easier later on. Mapping and joining fields 254 Amazon QuickSight Mapping fields User Guide Amazon QuickSight can automatically map fields between datasets in the same analysis. The following tips can help make it easier for Amazon QuickSight to automatically map fields between datasets, for example if you are creating a filter action across datasets: • Matching field names – Field names must match exactly, with no differences in case, spacing, or punctuation. You can rename fields that describe the same data, so an automatic mapping is accurate. • Matching data types – Fields must have the same data type for automatic mapping. You can change the data types while you are preparing the data. This step also gives you the opportunity to discover whether you need to filter out any data that isn't the correct data type. • Using calculated fields – You can use calculated fields to create a matching field, and give it the correct name and data type for automatic mapping. Note After an automatic mapping exists, you can rename a field without breaking the field mapping. However, if you change the data type, the mapping is broken. For more information on field mapping for filter actions across datasets, see Creating and editing custom actions in Amazon QuickSight. Joining fields You can create joins between data from different data sources, including files or databases. The following tips can help make it easier for you to join data from different files or data sources: • Similar field names – It is simpler to join fields when you can see what should match; for example, Order ID and order-id seem as if they should be the same. But if one is a work order, and the other is a purchase order, then the fields are probably different data. If possible, make sure that the files and tables that you want to join have field names making it clear what data they contain. • Matching data types – Fields must have the same data type before you can join on them. Make sure that the files and tables that you want to join having matching data types in join fields. You Mapping and joining fields 255 Amazon QuickSight User Guide can't use a calculated field for a join. Also, you can't join two existing datasets. You create the joined dataset by directly accessing the source data. 
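To illustrate the calculated-field tip for mapping above: if one dataset stores a key as a number and another stores it as a string, a calculated field such as the following (the field name customer_id is hypothetical) produces a string version that you can name and type to match the other dataset so that automatic mapping works. For joins, by contrast, calculated fields can't serve as join keys, so matching data types for a join must be handled at the source or in a custom SQL query.

toString(customer_id)

Save the calculated field with the same name and data type as the corresponding field in the other dataset.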
For more information on joining data across data sources, see Joining data. Adding calculations Create calculated fields to transform your data by using one or more of the following: • Operators • Functions • Fields that contain data • Other calculated fields You can add calculated fields to a dataset during data preparation or from the analysis page. When you add a calculated field to a dataset during data preparation, it's available to all analyses that use that dataset. When you add a calculated field to a dataset in an analysis, it's available only in that analysis. For more information about adding calculated fields, see the following topics. Topics • Adding calculated fields • Order of evaluation in Amazon QuickSight • Using level-aware calculations in Amazon QuickSight • Calculated field function and operator reference for Amazon QuickSight Adding calculated fields Create calculated fields to transform your data by using one or more of the following: • Operators • Functions • Aggregate functions (you can only add these to an analysis) • Fields that contain data Adding calculations 256 Amazon QuickSight • Other calculated fields User Guide You can add calculated fields to a dataset during data |
amazon-quicksight-user-085 | amazon-quicksight-user.pdf | 85 | preparation or from the analysis page. When you add a calculated field to a dataset during data preparation, it's available to all analyses that use that dataset. When you add a calculated field to a dataset in an analysis, it's available only in that analysis. Analyses support both single-row operations and aggregate operations. Single-row operations are those that supply a (potentially) different result for every row. Aggregate operations supply results that are always the same for entire sets of rows. For example, if you use a simple string function with no conditions, it changes every row. If you use an aggregate function, it applies to all the rows in a group. If you ask for the total sales amount for the US, the same number applies to the entire set. If you ask for data on a particular state, the total sales amount changes to reflect your new grouping. It still provides one result for the entire set. By creating the aggregated calculated field within the analysis, you can then drill down into the data. The value of that aggregated field is recalculated appropriately for each level. This type of aggregation isn't possible during dataset preparation. For example, let's say that you want to figure out the percentage of profit for each country, region, and state. You can add a calculated field to your analysis, (sum(salesAmount - cost)) / sum(salesAmount). This field is then calculated for each country, region, and state, at the time your analyst drills down into the geography. Topics • Adding calculated fields to an analysis • Adding calculated fields to a dataset • Handling decimal values in calculated fields Adding calculated fields to an analysis When you add a dataset to an analysis, every calculated field that exists in the dataset is added to the analysis. You can add additional calculated fields at the analysis level to create calculated fields that are available only in that analysis. To add a calculated field to an analysis 1. Open the QuickSight console. 2. Open the analysis that you want to change. 3. In the Data pane, choose Add at top left, and then choose + CALCULATED FIELD. In the calculations editor that opens, do the following: a. Enter a name for the calculated field. b. Enter a formula using fields from your dataset, functions, and operators. 4. When finished, choose Save. For more information about how to create formulas using the available functions in QuickSight, see Calculated field function and operator reference for Amazon QuickSight. Adding calculated fields to a dataset Amazon QuickSight authors can generate calculated fields during the data preparation phase of a dataset's creation. When you create a calculated field for a dataset, the field becomes a new column in the dataset.
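For example, continuing with the hypothetical salesAmount and cost fields used above, a row-level calculation is a natural fit for the dataset, while the aggregated profit percentage belongs in the analysis:

/* Dataset-level calculated field, evaluated once per row */
{salesAmount} - {cost}

/* Analysis-level calculated field, re-aggregated at whatever level the visual groups by */
(sum({salesAmount} - {cost})) / sum({salesAmount})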
All analyses that use the dataset inherit the dataset's calculated fields. If the calculated field operates at the row level and the dataset is stored in SPICE, QuickSight computes and materializes the result in SPICE. If the calculated field relies on an aggregation function, QuickSight retains the formula and performs the calculation when the analysis is generated. This type of calculated field is called an unmaterialized calculated field. To add or edit a calculated field for a dataset 1. Open the dataset that you want to work with. For more information, see Editing datasets. 2. On the data prep page, do one of the following: • To create a new field, choose Add calculated field at left. • To edit an existing calculated field, choose it from Calculated fields at left, then choose Edit from the context (right-click) menu. Adding calculated fields 258 Amazon QuickSight User Guide 3. In the calculation editor, enter a descriptive name for Add title to name the new calculated field. This name appears in the field list in the dataset, so it should look similar to the other fields. For this example, we name the field Total Sales This Year. 4. (Optional) Add a comment, for example to explain what the expression does, by enclosing text in slashes and asterisks. /* Calculates sales per year for this year*/ 5. Identify the metrics, functions, and other items to use. |
amazon-quicksight-user-086 | amazon-quicksight-user.pdf | 86 | Edit from the context (right-click) menu. Adding calculated fields 258 Amazon QuickSight User Guide 3. In the calculation editor, enter a descriptive name for Add title to name the new calculated field. This name appears in the field list in the dataset, so it should look similar to the other fields. For this example, we name the field Total Sales This Year. 4. (Optional) Add a comment, for example to explain what the expression does, by enclosing text in slashes and asterisks. /* Calculates sales per year for this year*/ 5. Identify the metrics, functions, and other items to use. For this example, we need to identify the following: Adding calculated fields 259 Amazon QuickSight • The metric to use • Functions: ifelse and datediff User Guide We want to build a statement like "If the sale happened during this year, show the total sales, and otherwise show 0." To add the ifelse function, open the Functions list. Choose All to close the list of all functions. Now you should see the function groups: Aggregate, Conditional, Date, and so on. Choose Conditional, and then double-click on ifelse to add it to the workspace. ifelse() 6. Place your cursor inside the parenthesis in the workspace, and add three blank lines. ifelse( ) 7. With your cursor on the first blank line, find the dateDiff function. It's listed for Functions under Dates. You can also find it by entering date for Search functions. The dateDiff function returns all functions that have date as part of their name. It doesn't return all functions listed under Dates; for example, the now function is missing from the search results. Double-click on dateDiff to add it to the first blank line of the ifelse statement. ifelse( dateDiff() ) Add the parameters that dateDiff uses. Place your cursor inside the dateDiff parentheses to begin to add date1, date2, and period: 1. For date1: The first parameter is the field that has the date in it. Find it under Fields, and add it to the workspace by double-clicking it or entering its name. Adding calculated fields 260 Amazon QuickSight User Guide 2. For date2, add a comma, then choose truncDate() for Functions. Inside its parenthesis, add period and date, like this: truncDate( "YYYY", now() ) 3. For period: Add a comma after date2 and enter YYYY. This is the period for the year. To see a list of all the supported periods, find dateDiff in the Functions list, and open the documentation by choosing Learn more. If you're already viewing the documentation, as you are now, see dateDiff. Add a few spaces for readability, if you like. Your expression should look like the following. ifelse( dateDiff( {Date}, truncDate( "YYYY", now() ) ,"YYYY" ) ) 8. Specify the return value. For our example, the first parameter in ifelse needs to return a value of TRUE or FALSE. Because we want the current year, and we're comparing it to this year, we specify that the dateDiff statement should return 0. The if part of the ifelse evaluates as true for rows where there is no difference between the year of the sale and the current year. dateDiff( {Date}, truncDate( "YYYY", now() ) ,"YYYY" ) = 0 To create a field for TotalSales for last year, you can change 0 to 1. Another way to do the same thing is to use addDateTime instead of truncDate. Then for each previous year, you change the first parameter for addDateTime to represent each year. For this, you use -1 for last year, -2 for the year before that, and so on. 
If you use addDateTime, you leave the dateDiff function = 0 for each year. dateDiff( {Discharge Date}, addDateTime(-1, "YYYY", now() ) ,"YYYY" ) = 0 /* Last year */ 9. Move your cursor to the first blank line, just under dateDiff. Add a comma. For the then part of the ifelse statement, we need to choose the measure (metric) that contains the sales amount, TotalSales. Adding calculated fields 261 Amazon QuickSight User Guide To choose a field, open the Fields list and double-click a field to add it to the screen. Or you can enter the name. Add curly braces { } around names that contain spaces. It's likely that your metric has a different name. You can know which field is a metric by the number sign in front of it (#). Your expression should look like the following now. ifelse( dateDiff( {Date}, truncDate( "YYYY", now() ) ,"YYYY" ) = 0 ,{TotalSales} ) 10. Add an else clause. The ifelse function doesn't require one, but we want to add it. For reporting purposes, you usually don't want to have any null values, because sometimes rows with nulls are omitted. We set the else part of the ifelse to 0. The result |
amazon-quicksight-user-087 | amazon-quicksight-user.pdf | 87 | is that this field is 0 for rows that contain sales from previous years. To do this, on the blank line add a comma and then a 0. If you added the comment at the beginning, your finished ifelse expression should look like the following. /* Calculates sales per year for this year*/ ifelse( dateDiff( {Date}, truncDate( "YYYY", now() ) ,"YYYY" ) = 0 ,{TotalSales} ,0 ) 11. Save your work by choosing Save at upper right. If there are errors in your expression, the editor displays an error message at the bottom. Check your expression for a red squiggly line, then hover your cursor over that line to see what the error message is. Common errors include missing punctuation, missing parameters, misspellings, and invalid data types. To avoid making any changes, choose Cancel. To add a parameter value to a calculated field 1. You can reference parameters in calculated fields. By adding the parameter to your expression, you add the current value of that parameter. 2. To add a parameter, open the Parameters list, and select the parameter whose value you want to include. 3. (Optional) To manually add a parameter to the expression, type the name of the parameter. Then enclose it in curly braces {} and prefix it with a $, for example ${parameterName}. You can change the data type of any field in your dataset, including the types of calculated fields. You can only choose data types that match the data that's in the field. To change the data type of a calculated field • For Calculated fields (at left), choose the field that you want to change, then choose Change data type from the context (right-click) menu. Unlike the other fields in the dataset, calculated fields can't be disabled. Instead, delete them. To delete a calculated field • For Calculated fields (at left), choose the field that you want to delete, then choose Delete from the context (right-click) menu. Handling decimal values in calculated fields When your dataset uses Direct Query mode, the calculation of the decimal data type is determined by the behavior of the source engine that the dataset originates from. In some particular cases, QuickSight applies special handling to determine the output calculation's data type. When your dataset uses SPICE query mode and a calculated field is materialized, the data type of the result depends on the specific function operators and the data type of the input. The tables below show the expected behavior for some numeric calculated fields. Unary operators
The following table shows which data type is output based on the operator you use and the data type of the value that you input. For example, if you input an integer to an abs calculation, the output value's data type is integer.

Operator | Input type | Output type
abs | Decimal-fixed | Decimal-fixed
abs | Int | Int
abs | Decimal-float | Decimal-float
ceil | Decimal-fixed | Int
ceil | Int | Int
ceil | Decimal-float | Int
exp | Decimal-fixed | Decimal-float
exp | Int | Decimal-float
exp | Decimal-float | Decimal-float
floor | Decimal-fixed | Int
floor | Int | Int
floor | Decimal-float | Int
ln | Decimal-fixed | Decimal-float
ln | Int | Decimal-float
ln | Decimal-float | Decimal-float
log | Decimal-fixed | Decimal-float
log | Int | Decimal-float
log | Decimal-float | Decimal-float
round | Decimal-fixed | Decimal-fixed
round | Int | Decimal-fixed
round | Decimal-float | Decimal-fixed
sqrt | Decimal-fixed | Decimal-float
sqrt | Int | Decimal-float
sqrt | Decimal-float | Decimal-float

Binary operators

The following tables show which data type is output based on the data types of the two values that you input. For example, for an arithmetic operator, if you provide two integer data types, the result of the calculation is output as an integer.

For basic operators (+, -, *):

 | Integer | Decimal-fixed | Decimal-float
Integer | Integer | Decimal-fixed | Decimal-float
Decimal-fixed | Decimal-fixed | Decimal-fixed | Decimal-float
Decimal-float | Decimal-float | Decimal-float | Decimal-float

For division operators (/):

 | Integer | Decimal-fixed | Decimal-float
Integer | Decimal-float | Decimal-float | Decimal-float
Decimal-fixed | Decimal-float | Decimal-fixed | Decimal-float
Decimal-float | Decimal-float | Decimal-float | Decimal-float

amazon-quicksight-user-088 | amazon-quicksight-user.pdf | 88 | For exponential and mod operators (^, %):

 | Integer | Decimal-fixed | Decimal-float
Integer | Decimal-float | Decimal-float | Decimal-float
Decimal-fixed | Decimal-float | Decimal-float | Decimal-float
Decimal-float | Decimal-float | Decimal-float | Decimal-float

Order of evaluation in Amazon QuickSight When you open or update an analysis, before displaying it Amazon QuickSight evaluates everything that is configured in the analysis in a specific sequence. Amazon QuickSight translates the configuration into a query that a database engine can run. The query returns the data in a similar way whether you connect to a database, a software as a service (SaaS) source, or the Amazon QuickSight analytics engine (SPICE). If you understand the order that the configuration is evaluated in, you know the sequence that dictates when a specific filter or calculation is applied to your data. The following illustration shows the order of evaluation. The column on the left shows the order of evaluation when no level-aware calculation window (LAC-W) or aggregate (LAC-A) function is involved. The second column shows the order of evaluation for analyses that contain calculated fields that compute LAC-W expressions at the prefilter (PRE_FILTER) level. The third column shows the order of evaluation for analyses that contain calculated fields that compute LAC-W expressions at the preaggregate (PRE_AGG) level. The last column shows the order of evaluation for analyses that contain calculated fields that compute LAC-A expressions. Following the illustration, there is a more detailed explanation of the order of evaluation. For more information about level-aware calculations, see Using level-aware calculations in Amazon QuickSight. The following list shows the sequence in which Amazon QuickSight applies the configuration in your analysis. Anything that's set up in your dataset happens outside your analysis, for example calculations at the dataset level, filters, and security settings. These all apply to the underlying data. The following list only covers what happens inside the analysis. 1. LAC-W Prefilter level: Evaluates the data at the original table cardinality, before analysis filters. a. Simple calculations: Calculations at the scalar level, without any aggregations or window calculations. For example, date_metric/60, parseDate(date, 'yyyy/MM/dd'), ifelse(metric > 0, metric, 0), split(string_column, '|', 0). b. LAC-W function PRE_FILTER: If any LAC-W PRE_FILTER expression is involved in the visual, Amazon QuickSight first computes the window function at the original table level, before any filters. If the LAC-W PRE_FILTER expression is used in filters, it is applied at this point. For example, maxOver(Population, [State, County], PRE_FILTER) > 1000. 2. LAC-W PRE_AGG: Evaluates the data at the original table cardinality, before aggregations. a.
Filters added during analysis: Filters created for un-aggregated fields in the visuals are applied at this point, which are similar to WHERE clauses. For example, year > 2020. b. LAC-W function PRE_AGG: If any LAC-W PRE_AGG expression is involved in the visual, Amazon QuickSight computes the window function before any aggregation applied. If the LAC-W PRE_AGG expression is used in filters, it is applied at this point. For example, maxOver(Population, [State, County], PRE_AGG) > 1000. c. Top/bottom N filters: Filters that are configured on dimensions to display top/bottom N items. 3. LAC-A level: Evaluate aggregations at customized level, before visual aggregations a. Custom-level aggregations: If any LAC-A expression is involved in the visual, it is calculated at this point. Based on the table after the filters mentioned above, Amazon QuickSight computes the aggregation, grouped by the dimensions that are specified in the calculated fields. For example, max(Sales, [Region]). 4. Visual level: Evaluates aggregations at visual level, and post-aggregation table calculations, with the remaining configurations applied in the visuals a. Visual-level aggregations: Visual aggregations should always be applied except for tabular tables (where dimension is empty). With this setting, aggregations based on the fields in the field wells are calculated, grouped by the dimensions that put into the visuals. If any filter is built on top of aggregations, it is applied at this point, similar to HAVING clauses. For example, min(distance) > 100. b. Table calculations: If there is any post-aggregation table calculation (it should take aggregated expression as operand) referenced in the visual, it is calculated at this point. Amazon QuickSight performs window calculations after visual aggregations. Similarly, filters built on such calculations are applied. c. Other category calculations: This type of calculation only exists in line/bar/pie/donut charts. For more information, see Display limits. d. Totals and subtotals: Totals and Subtotals are calculated in donut charts (only totals), tables (only totals) and pivot tables, if requested. Order of evaluation 268 Amazon QuickSight User Guide Using |
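As a sketch of how this sequence plays out, consider a hypothetical dataset with one row per order and fields named order_id and customer_id (assumed names, for illustration only). A calculated field such as the following is computed at the PRE_AGG stage (step 2), before the visual-level aggregations in step 4, so it can be used as a dimension or filtered like any unaggregated column:

/* Orders placed by each customer, evaluated before visual aggregation */
countOver({order_id}, [{customer_id}], PRE_AGG)

A filter on this field, such as countOver({order_id}, [{customer_id}], PRE_AGG) > 10, is applied in step 2b, whereas a filter on an aggregated field such as min(distance) > 100 is applied only at the visual level in step 4a.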
amazon-quicksight-user-089 | amazon-quicksight-user.pdf | 89 | Using level-aware calculations in Amazon QuickSight Applies to: Enterprise Edition and Standard Edition With level-aware calculations (LAC) you can specify the level of granularity at which you want to compute window functions or aggregate functions. There are two types of LAC functions: level-aware calculation - aggregate (LAC-A) functions, and level-aware calculation - window (LAC-W) functions. Topics • Level-aware calculation - aggregate (LAC-A) functions • Level-aware calculation - window (LAC-W) functions Level-aware calculation - aggregate (LAC-A) functions With LAC-A functions, you can specify at what level to group the computation. By adding one argument to an existing aggregate function, such as sum(), max(), or count(), you can define any group-by level that you want for the aggregation. The level added can be any dimension, independent of the dimensions added to the visual. For example: sum(measure,[group_field_A]) To use LAC-A functions, type them directly in the calculation editor, adding the intended aggregation levels as the second argument between brackets. Following is an example of an aggregate function and a LAC-A function, for comparison. • Aggregate function: sum({sales}) • LAC-A function: sum({sales}, [{Country},{Product}]) The LAC-A results, computed at the level specified in the brackets [ ], can be used as the operand of an aggregate function. The group-by level of that aggregate function is the visual level, with Group by fields added to the field well of the visual. In addition to creating a static LAC group key in the brackets [ ], you can make it adapt dynamically to the visual group-by fields by putting the parameter $visualDimensions in the brackets. This is a system-provided parameter, in contrast to a user-defined parameter. The ${visualDimensions} parameter represents the fields added to the Group by field well in the current visual. The following examples show how to dynamically add group keys to, or remove group keys from, the visual dimensions. • LAC-A with a dynamically added group key: sum({sales}, [${visualDimensions},{Country},{Products}]) This calculates, before the visual-level aggregation is calculated, the sum of sales, grouping by country, products, and any other fields in the Group by field well. • LAC-A with a dynamically removed group key: sum({sales}, [${visualDimensions},!{Country},!{Products}]) This calculates, before the visual-level aggregation is calculated, the sum of sales, grouping by the fields in the visual's Group by field well, except country and product. You can specify added group keys or removed group keys in one LAC expression, but not both.
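As a brief sketch (using hypothetical Sales and Customer ID fields), the following expression first totals sales per customer at the level given in the brackets, and the outer avg() then aggregates those per-customer totals at whatever level the visual groups by, so a visual grouped by Region would show the average customer spend in each region:

/* Average customer spend, computed from per-customer totals */
avg(sum({Sales}, [{Customer ID}]))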
LAC-A functions are supported for the following aggregate functions: • avg • count • distinct_count • max • median • min • percentile • percentileCont • percentileDisc (percentile) • stdev • stdevp • sum • var • varp Level-aware calculations 270 Amazon QuickSight LAC-A examples You can do the following with LAC-A functions: User Guide • Run calculations that are independent of the levels in the visual. For example, if you have the following calculation, the sales numbers are aggregated only at the country level, but not across other dimensions (Region or Product) in the visual. sum({Sales},[{Country}]) Level-aware calculations 271 Amazon QuickSight User Guide • Run calculations for the dimensions that are not in the visual. For example, if you have the following function, you can calculate the average total country sales by region. Level-aware calculations 272 Amazon QuickSight User Guide sum({Sales},[{Country}]) Though Country is not included in the visual, the LAC-A function first aggregates the sales at the Country level and then the visual level calculation generates the average number for each region. If the LAC-A function isn't used to specify the level, the average sales are calculated at the lowest granular level (the base level of the dataset) for each region (showing in the sales column). • Use LAC-A combined with other aggregate functions and LAC-W functions. There are two ways you can nest LAC-A functions with other functions. • You can write a nested syntax when you create a calculation. For example, the LAC-A function can be nested with a LAC-W function to calculate the total sales by country of each product's average price: sum(avgOver({Sales},[{Product}],PRE_AGG),[{Country}]) • When adding a LAC-A function into a visual, the calculation can be further nested with visual- level aggregate functions that you selected in the fields well. For more information |
amazon-quicksight-user-090 | amazon-quicksight-user.pdf | 90 | about changing the aggregation of fields in the visual, see Changing or adding aggregation to a field by using a field well. LAC-A limitations The following limitations apply to LAC-A functions: • LAC-A functions are supported for all additive and non-additive aggregate functions, such as sum(), count(), and percentile(). LAC-A functions are not supported for conditional aggregate functions that end with "if", such as sumIf() and countIf(), nor for period aggregate functions that start with "periodToDate", such as periodToDateSum() and periodToDateMax(). • Row-level and column-level totals are not currently supported for LAC-A functions in tables and pivot tables. When you add row-level or column-level totals to the chart, the total number shows as blank. Other non-LAC dimensions are not impacted. • Nested LAC-A functions are not currently supported. A limited capability for nesting LAC-A functions with regular aggregate functions and LAC-W functions is supported. For example, the following functions are valid: • Aggregation(LAC-A()). For example: max(sum({sales}, [{country}])) • LAC-A(LAC-W()). For example: sum(sumOver({Sales},[{Product}],PRE_AGG), [{Country}]) The following functions are not valid: • LAC-A(Aggregation()). For example: sum(max({sales}), [{country}]) • LAC-A(LAC-A()). For example: sum(max({sales}, [{country}]),[category]) • LAC-W(LAC-A()). For example: sumOver(sum({Sales},[{Product}]), [{Country}],PRE_AGG) Level-aware calculation - window (LAC-W) functions With LAC-W functions, you can specify the window or partition over which to compute the calculation. LAC-W functions are a group of window functions, such as sumOver(), maxOver(), and denseRank(), that you can run at the prefilter or preaggregate level. For example: sumOver(measure, [partition_field_A], PRE_AGG). LAC-W functions used to be called level-aware aggregations (LAA). LAC-W functions help you to answer the following types of questions: • How many of my customers made only 1 purchase order? Or 10? Or 50? We want the visual to use the count as a dimension rather than a metric in the visual. • What are the total sales per market segment for customers whose lifetime spend is greater than $100,000? The visual should only show the market segment and the total sales for each. • How much is the contribution of each industry to the entire company's profit (percent of total)? We want to be able to filter the visual to show some of the industries, and how they contribute to the total sales for the displayed industries. However, we also want to see each industry's percent of total sales for the entire company (including industries that are filtered out).
• What are the total sales in each category as compared to the industry average? The industry average should include all of the categories, even after filtering. • How are my customers grouped into cumulative spending ranges? We want to use the grouping as a dimension rather than a metric. For more complex questions, you can inject a calculation or filter before QuickSight gets to a specific point in its evaluation of your settings. To directly influence your results, you add a calculation level keyword to a table calculation. For more information on how QuickSight evaluates queries, see Order of evaluation in Amazon QuickSight. The following calculation levels are supported for LAC-W functions: Level-aware calculations 275 Amazon QuickSight User Guide • PRE_FILTER – Before applying filters from the analysis, QuickSight evaluates prefilter calculations. Then it applies any filters that are configured on these prefilter calculations. • PRE_AGG – Before computing display-level aggregations, QuickSight performs preaggregate calculations. Then it applies any filters that are configured on these preaggregate calculations. This work happens before applying top and bottom N filters. You can use the PRE_FILTER or PRE_AGG keyword as a parameter in the following table calculation functions. When you specify a calculation level, you use an unaggregated measure in the function. For example, you can use countOver({ORDER ID}, [{Customer ID}], PRE_AGG). By using PRE_AGG, you specify that the countOver executes at the preaggregate level. • avgOver • countOver • denseRank • distinctCountOver • minOver • maxOver • percentileRank • rank • stdevOver • stdevpOver • sumOver • varOver • varpOver By default, the first parameter for each function must be an aggregated measure. If you use either PRE_FILTER or PRE_AGG, you use a nonaggregated measure for the first parameter. For LAC-W functions, the visual aggregation defaults to MIN to |
amazon-quicksight-user-091 | amazon-quicksight-user.pdf | 91 | specify a calculation level, you use an unaggregated measure in the function. For example, you can use countOver({ORDER ID}, [{Customer ID}], PRE_AGG). By using PRE_AGG, you specify that the countOver executes at the preaggregate level. • avgOver • countOver • denseRank • distinctCountOver • minOver • maxOver • percentileRank • rank • stdevOver • stdevpOver • sumOver • varOver • varpOver By default, the first parameter for each function must be an aggregated measure. If you use either PRE_FILTER or PRE_AGG, you use a nonaggregated measure for the first parameter. For LAC-W functions, the visual aggregation defaults to MIN to eliminate duplicates. To change the aggregation, open the field's context (right-click) menu, and then choose a different aggregation. For examples of when and how to use LAC-W functions in real life scenarios, see the following post in the AWS Big Data Blog: Create advanced insights using Level Aware Aggregations in Amazon QuickSight. Level-aware calculations 276 Amazon QuickSight User Guide Calculated field function and operator reference for Amazon QuickSight You can add calculated fields to a dataset during data preparation or from the analysis page. When you add a calculated field to a dataset during data preparation, it's available to all analyses that use that dataset. When you add a calculated field to a dataset in an analysis, it's available only in that analysis. You can create calculated fields to transform your data by using the following functions and operators. Topics • Operators • Functions by category • Functions • Aggregate functions • Table calculation functions Operators You can use the following operators in calculated fields. Amazon QuickSight uses the standard order of operations: parentheses, exponents, multiplication, division, addition, subtraction (PEMDAS). Equal (=) and not equal (<>) comparisons are case-sensitive. • Addition (+) • Subtraction (−) • Multiplication (*) • Division (/) • Modulo (%) – See also mod() in the following list. • Power (^) – See also exp() in the following list. • Equal (=) • Not equal (<>) • Greater than (>) • Greater than or equal to (>=) Functions and operators 277 Amazon QuickSight • Less than (<) • Less than or equal to (<=) • AND • OR • NOT User Guide Amazon QuickSight supports applying the following mathematical functions to an expression. • Mod(number, divisor) – Finds the remainder after dividing a number by a divisor. • Log(expression) – Returns the base 10 logarithm of a given expression. • Ln(expression) – Returns the natural logarithm of a given expression. • Abs(expression) – Returns the absolute value of a given expression. • Sqrt(expression) – Returns the square root of a given expression. • Exp(expression) – Returns the base of natural log e raised to the power of a given expression. To make lengthy calculations easier to read, you can use parenthesis to clarify groupings and precedence in calculations. In the following statement, you don't need parentheses. The multiplication statement is processed first, and then the result is added to five, returning a value of 26. However, parentheses make the statement easier to read and thus maintain. 5 + (7 * 3) Because parenthesis are first in the order of operations, you can change the order in which other operators are applied. For example, in the following statement the addition statement is processed first, and then the result is multiplied by three, returning a value of 36. 
(5 + 7) * 3 Example: Arithmetic operators The following example uses multiple arithmetic operators to determine a sales total after discount. (Quantity * Amount) - Discount Functions and operators 278 Amazon QuickSight Example: (/) Division User Guide The following example uses division to divide 3 by 2. A value of 1.5 is returned. Amazon QuickSight uses floating point divisions. 3/2 Example: (=) equal Using = performs a case-sensitive comparison of values. Rows where the comparison is TRUE are included in the result set. In the following example, rows where the Region field is South are included in the results. If the Region is south, these rows are excluded. Region = 'South' In the following example, the comparison evaluates to FALSE. Region = 'south' The following example shows a comparison that converts Region to all uppercase (SOUTH), and compares it to SOUTH. This returns rows where the region is south, South, or SOUTH. toUpper(Region) = 'SOUTH' Example: (<>) The not equal symbol <> means less than or greater than. So, if we say x<>1, then we are saying if x is less than 1 OR if x is greater than 1. Both < and > are evaluated together. In other words, if x is any value except 1. Or, x is not equal to 1. Note Use <>, not !=. The following example compares Status Code to a numeric value. This returns rows where the Status Code is not equal |
amazon-quicksight-user-092 | amazon-quicksight-user.pdf | 92 | to SOUTH. This returns rows where the region is south, South, or SOUTH. toUpper(Region) = 'SOUTH' Example: (<>) The not equal symbol <> means less than or greater than. So, if we say x<>1, then we are saying if x is less than 1 OR if x is greater than 1. Both < and > are evaluated together. In other words, if x is any value except 1. Or, x is not equal to 1. Note Use <>, not !=. The following example compares Status Code to a numeric value. This returns rows where the Status Code is not equal to 1. Functions and operators 279 Amazon QuickSight statusCode <> 1 User Guide The following example compares multiple statusCode values. In this case, active records have activeFlag = 1. This example returns rows where one of the following applies: • For active records, show rows where the status isn't 1 or 2 • For inactive records, show rows where the status is 99 or -1 ( activeFlag = 1 AND (statusCode <> 1 AND statusCode <> 2) ) OR ( activeFlag = 0 AND (statusCode= 99 OR statusCode= -1) ) Example: (^) The power symbol ^ means to the power of. You can use the power operator with any numeric field, with any valid exponent. The following example is a simple expression of 2 to the power of 4 or (2 * 2 * 2 * 2). This returns a value of 16. 2^4 The following example computes the square root of the revenue field. revenue^0.5 Example: AND, OR, and NOT The following example uses AND, OR, and NOT to compare multiple expressions. It does so using conditional operators to tag top customers NOT in Washington or Oregon with a special promotion, who made more than 10 orders. If no values are returned, the value 'n/a' is used. ifelse(( (NOT (State = 'WA' OR State = 'OR')) AND Orders > 10), 'Special Promotion XYZ', 'n/a') Example: Creating comparison lists like "in" or "not in" This example uses operators to create a comparison to find values that exist, or don't exist, in a specified list of values. Functions and operators 280 Amazon QuickSight User Guide The following example compares promoCode a specified list of values. This example returns rows where the promoCode is in the list (1, 2, 3). promoCode = 1 OR promoCode = 2 OR promoCode = 3 The following example compares promoCode a specified list of values. This example returns rows where the promoCode is NOT in the list (1, 2, 3). NOT(promoCode = 1 OR promoCode = 2 OR promoCode = 3 ) Another way to express this is to provide a list where the promoCode is not equal to any items in the list. promoCode <> 1 AND promoCode <> 2 AND promoCode <> 3 Example: Creating a "between" comparison This example uses comparison operators to create a comparison showing values that exist between one value and another. The following example examines OrderDate and returns rows where the OrderDate is between the first day and last day of 2016. In this case, we want the first and last day included, so we use "or equal to" on the comparison operators. OrderDate >= "1/1/2016" AND OrderDate <= "12/31/2016" Functions by category In this section, you can find a list of the functions available in Amazon QuickSight, sorted by category. Topics • Aggregate functions Functions and operators 281 User Guide Amazon QuickSight • Conditional functions • Date functions • Numeric functions • Mathematical functions • String functions • Table calculations Aggregate functions The aggregate functions for calculated fields in Amazon QuickSight include the following. These are only available during analysis and visualization. 
Each of these functions returns values grouped by the chosen dimension or dimensions. For each aggregation, there is also a conditional aggregation. These perform the same type of aggregation, based on a condition. • avg averages the set of numbers in the specified measure, grouped by the chosen dimension or dimensions. • avgIf calculates the average based on a conditional statement. • count calculates the number of values in a dimension or measure, grouped by the chosen dimension or dimensions. • countIf calculates the count based on a conditional statement. • distinct_count calculates the number of distinct values in a dimension or measure, grouped by the chosen dimension or dimensions. • distinct_countIf calculates the distinct count based on a conditional statement. • max returns the maximum value of the specified measure, grouped by the chosen dimension or dimensions. • maxIf calculates the maximum based on a conditional statement. • median returns the median value of the specified measure, grouped by the chosen dimension or dimensions. • medianIf calculates the median based on a conditional statement. • min returns the minimum value of the specified measure, grouped |